# Neutrosophy in Unconstrained Nonlinear Optimization

## Abstract

Determining the step length in iterations of nonlinear minimization is a problem without a uniquely defined solution. Motivated by this uncertainty in defining the step length, our intention is to exploit the capabilities of neutrosophy in this process. Our idea is to unify the usability and numerous applications of neutrosophic logic with the enormous importance of nonlinear optimization. An improvement of line search iterations for solving unconstrained optimization is proposed, using an appropriately defined neutrosophic logic system to determine an appropriate step size for the class of descent direction methods. The basic idea is to use an additional parameter that monitors the behavior of the objective function and, based on that, corrects the step length in known optimization methods. Mutual comparison and analysis of the generated numerical results, based on a statistical ranking technique, reveal that the suggested iterations outperform the analogous available iterations. Statistical measures show the advantages of fuzzy improvements of the considered line search optimization methods.

## Introduction

Let $\mathcal{U}$ denote the universe of discourse and assume $\mathcal{N}\subseteq \mathcal{U}$. The fuzzy set theory is based on the use of a membership function $T(u)\in [0,1]$, $u\in \mathcal{U}$ (Zadeh, 1965). A fuzzy set $\mathcal{N}$ in $\mathcal{U}$ is a set of ordered pairs $\mathcal{N}=\{\langle u,T(u)\rangle \mid u\in \mathcal{U}\}.$

Apart from the membership function $T(u)$, an intuitionistic fuzzy set (IFS) also uses the opposite non-membership function $F(u)\in [0,1]$, $u\in \mathcal{U}$ (Atanassov, 1986). More precisely, an IFS $\mathcal{N}$ in $\mathcal{U}$ is defined as $\mathcal{N}=\{\langle u,T(u),F(u)\rangle \mid u\in \mathcal{U}\}.$

Smarandache (2003) and Wang et al. (2010) extended the IFS theory by introducing the indeterminacy-membership function, which represents indecisiveness in decision-making. Consequently, each element of a set in the neutrosophic set theory is defined by three independent membership functions (Smarandache, 2003; Wang et al., 2010): the truth-membership function $T(u)$, the indeterminacy-membership function $I(u)$, and the falsity-membership function $F(u)$. A single valued neutrosophic set (SVNS) $\mathcal{N}$ over $\mathcal{U}$ is the set of neutrosophic numbers of the form $\mathcal{N}=\{\langle u,T(u),I(u),F(u)\rangle \mid u\in \mathcal{U}\}.$ Values of these functions are independent and lie inside $[0,1]$, which means $T,I,F:\mathcal{U}\to [0,1]$ and $0\le T(u)+I(u)+F(u)\le 3$.

Fuzzy logic (FL), intuitionistic fuzzy logic (IFL), and neutrosophic logic (NL) appear as efficient tools for handling mathematical models burdened with uncertainty, fuzziness, ambiguity, inaccuracy, incomplete certainty, incompleteness, inconsistency, or redundancy.

Neutrosophic sets (NS) have important applications in denoising, clustering, segmentation, and classification in numerous medical image-processing applications. A utilization of neutrosophic theory in denoising medical images and their segmentation was proposed in (Guo et al., 2009), such that a neutrosophic image is characterized by three membership sets. Several applications of neutrosophic systems were described in (Christianto & Smarandache, 2019). An application of neutrosophy in natural language processing and sentiment analysis was investigated in (Mishra et al., 2020).

Our goal in the present paper is to unify possibilities of neutrosophy and several gradient-descent methods for solving unconstrained optimization problems.

## Problem Statement

Our idea must be seen from two sides. On one side, we are guided by the huge popularity and numerous applications of NL. On the other side stands ubiquitous optimization, with the primordial desire to make a phenomenon or process as good as possible. A combination of these two areas initiates applications of neutrosophic logic in determining an additional step size in the main methods for solving the multivariate unconstrained optimization problem

$\min f(\mathbf{x}),\quad \mathbf{x}\in \mathbb{R}^{n},$(1)

with the objective $f:\mathbb{R}^{n}\to \mathbb{R}$.

The descent direction (DD) iterative flow is defined by

$\mathbf{x}_{k+1}=\mathbf{x}_{k}+\ell_{k}\mathbf{d}_{k},$(2)

in which $\mathbf{x}_{k+1}$ is the new approximation, $\mathbf{x}_{k}$ is the former approximation, $\ell_{k}>0$ is a step size, and $\mathbf{d}_{k}$ is an appropriate search direction. The vector $\mathbf{d}_{k}$ must satisfy the so-called descent condition $\mathbf{g}_{k}^{\mathrm{T}}\mathbf{d}_{k}<0$, where $\mathbf{g}_{k}=\nabla f(\mathbf{x}_{k})$ is the gradient vector of $f$ at the point $\mathbf{x}_{k}$. The choice of the anti-gradient direction $\mathbf{d}_{k}=-\mathbf{g}_{k}$ leads to the gradient descent (GD) iterations

$\mathbf{x}_{k+1}=\mathbf{x}_{k}-\ell_{k}\mathbf{g}_{k}.$(3)
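As an illustration, the GD iteration (3) combined with an inexact backtracking line search can be sketched as follows. This is a minimal sketch, not the paper's implementation; the Armijo parameters `beta` and `sigma` and the quadratic test function are illustrative choices.

```python
import numpy as np

def backtracking(f, x, g, d, l0=1.0, beta=0.5, sigma=1e-4):
    """Armijo backtracking: shrink l until a sufficient decrease holds."""
    l = l0
    fx = f(x)
    # Require f(x + l d) <= f(x) + sigma * l * g^T d (g^T d < 0 for descent d).
    while f(x + l * d) > fx + sigma * l * np.dot(g, d) and l > 1e-16:
        l *= beta
    return l

def gradient_descent(f, grad, x0, tol=1e-6, max_iter=10_000):
    """Plain GD iteration (3): x_{k+1} = x_k - l_k g_k."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        l = backtracking(f, x, g, -g)   # anti-gradient direction d_k = -g_k
        x = x - l * g
    return x

# Example: minimize f(x) = ||x||^2, whose minimizer is the origin.
x_star = gradient_descent(lambda x: x.dot(x), lambda x: 2 * x, [3.0, -4.0])
```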

The general quasi-Newton (QN) class of iterations with line search

$\mathbf{x}_{k+1}=\mathbf{x}_{k}-\ell_{k}H_{k}\mathbf{g}_{k}$(4)

utilizes an appropriate symmetric and positive-definite estimation $B_{k}$ of the Hessian $G_{k}=\nabla^{2}f(\mathbf{x}_{k})$ and then defines $H_{k}:=B_{k}^{-1}$ (Sun & Yuan, 2006). The update of $B_{k+1}$ from $B_{k}$ is established on the QN characteristic

$B_{k+1}\boldsymbol{\rho}_{k}=\boldsymbol{\sigma}_{k},\quad \boldsymbol{\rho}_{k}=\mathbf{x}_{k+1}-\mathbf{x}_{k},\quad \boldsymbol{\sigma}_{k}=\mathbf{g}_{k+1}-\mathbf{g}_{k}.$(5)

We will consider the scalar Hessian approximation (Nocedal & Wright, 1999):

$B_{k}=\gamma_{k}I,\quad \gamma_{k}>0,$(6)

where $I$ is the identity matrix. Consequently, iterations under detailed consideration in this paper are given as

$\mathbf{x}_{k+1}=\mathbf{x}_{k}-\gamma_{k}^{-1}\ell_{k}\mathbf{g}_{k}$(7)

and are known as IGD iterations. The quantity $\ell_{k}$ is defined as the output of an inexact line search, while $\gamma_{k}$ is calculated based on the Taylor series of $f(\mathbf{x})$.

Diverse forms and improvements of the IGD iterative scheme (7) were suggested in (Petrović et al., 2018; Stanimirović & Miladinović, 2010). The SM iterative flow originates in (Stanimirović & Miladinović, 2010) and is determined by the recurrence rule

$\mathbf{x}_{k+1}=\mathbf{x}_{k}-\ell_{k}(\gamma_{k}^{SM})^{-1}\mathbf{g}_{k},$(8)

where $\gamma_{k}^{SM}>0$ is the gain parameter determined utilizing the Taylor approximation of $f(\mathbf{x}_{k}-\ell_{k}(\gamma_{k}^{SM})^{-1}\mathbf{g}_{k})$, as

$\gamma_{k+1}^{SM}=\Game\!\left(2\gamma_{k}^{SM}\frac{\gamma_{k}^{SM}[f_{k+1}-f_{k}]+\ell_{k}\|\mathbf{g}_{k}\|^{2}}{\ell_{k}^{2}\|\mathbf{g}_{k}\|^{2}}\right),$

such that ${f}_{k}:=f\left({\mathbf{x}}_{k}\right)$, ${f}_{k+\mathrm{1}}:=f\left({\mathbf{x}}_{k+\mathrm{1}}\right)$ and

$\Game(\varsigma)=\begin{cases}\varsigma, & \varsigma>0,\\ 1, & \varsigma\le 0.\end{cases}$
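The gain update of the SM method, together with the safeguard $\Game$, can be sketched as follows. The function names are hypothetical and the sketch only reproduces the two formulas above, not the full SM iteration.

```python
import numpy as np

def positive_or_one(s):
    """The safeguard function: keep s if positive, otherwise reset to 1."""
    return s if s > 0 else 1.0

def sm_gamma_update(gamma, f_new, f_old, l, g):
    """Gain update of the SM method: gamma_{k+1} from gamma_k,
    with f_new = f_{k+1}, f_old = f_k, step l = l_k, gradient g = g_k."""
    gn2 = np.dot(g, g)  # ||g_k||^2
    s = 2.0 * gamma * (gamma * (f_new - f_old) + l * gn2) / (l * l * gn2)
    return positive_or_one(s)
```

For example, `sm_gamma_update(1.0, 0.6, 1.0, 0.5, np.array([1.0, 0.0]))` evaluates $2\cdot(−0.4+0.5)/0.25 = 0.8$, while a non-positive candidate is replaced by 1.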

The following modification of the SM method, the transformation $MSM=\mathcal{M}(SM)$, was defined in (Ivanov et al., 2021) by

$\mathbf{x}_{k+1}=\mathcal{M}(SM)(\mathbf{x}_{k})=\mathbf{x}_{k}-\tau_{k}(\gamma_{k}^{MSM})^{-1}\mathbf{g}_{k},$(9)

where $\ell_{k}\in (0,1)$ is the output of the line search, $\tau_{k}=\ell_{k}+\ell_{k}^{2}-\ell_{k}^{3}$ and

$\gamma_{k+1}^{MSM}=\Game\!\left(2\gamma_{k}^{MSM}\frac{\gamma_{k}^{MSM}[f_{k+1}-f_{k}]+\tau_{k}\|\mathbf{g}_{k}\|^{2}}{\tau_{k}^{2}\|\mathbf{g}_{k}\|^{2}}\right).$(10)

We propose improvements of the surveyed line search iterative rules defined by the pattern (2) for solving (1). The principal idea is based on the utilization of an appropriate neutrosophic logic system (NLS) in determining an appropriate step length for various gradient descent rules. Fuzzy descent direction (FDD) iterations are defined as a modification of the DD iterations (2), as follows:

$\mathbf{x}_{k+1}=\mathcal{F}(DD)(\mathbf{x}_{k})=\mathbf{x}_{k}+\digamma_{k}\ell_{k}\mathbf{d}_{k},$(11)

where $\digamma_{k}$ is an appropriately defined adaptive neutrosophic logic parameter. The set of desirable values of $\digamma_{k}$ is defined upon the general restrictions

$\digamma_{k}\begin{cases}<1, & \text{if } f_{k+1}>f_{k},\\ =1, & \text{if } f_{k+1}=f_{k},\\ >1, & \text{if } f_{k+1}<f_{k}.\end{cases}$(12)

The second approach is based on

$\digamma_{k}\begin{cases}<1, & \text{if } f_{k+1}>f_{k},\\ =1, & \text{otherwise}.\end{cases}$(13)

Both approaches reduce the step length if the objective function increases. The difference is that the first approach tends to increase the step size when the objective function is decreasing, while the second approach leaves such cases to the original model. We will use the restrictions (12) in numerical comparisons.
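A minimal sketch of the two restriction policies (12) and (13) follows. The fixed multipliers `up` and `down` are illustrative placeholders only; in the paper the factor is produced dynamically by the neutrosophic logic controller.

```python
def fuzzy_factor_two_sided(f_new, f_old, up=1.25, down=0.8):
    """Restriction (12): shrink the step on an increase of f,
    enlarge it on a decrease, and leave it unchanged otherwise.
    `up` and `down` are illustrative constants, not NLC outputs."""
    if f_new > f_old:
        return down   # factor < 1
    if f_new < f_old:
        return up     # factor > 1
    return 1.0

def fuzzy_factor_one_sided(f_new, f_old, down=0.8):
    """Restriction (13): only shrink the step on an increase of f."""
    return down if f_new > f_old else 1.0
```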

The parameter $\digamma_{k}$ will be determined using an appropriately developed neutrosophic logic controller (NLC). To our knowledge, such a research strategy has not been exploited so far.

## Research Questions

Our exploratory research concerns the problem of determining the step length in (2). It is known that the optimal step length determined on the basis of one-dimensional optimization is not an efficient solution (Nocedal & Wright, 1999; Sun & Yuan, 2006). The problem of determining the step length is instead solved by an inexact line search (ILS) procedure, which is guided by the basic principle that an appropriate step size initiates a substantial decrease in the value of the objective. Therefore, in this space of uncertainty, the possibility always remains open for additional improvements in the step size selection. Ultimately, there is no general rule to predict $\ell_{k}$ in each iteration of each individual method.

A neutrosophic logic system (NLS) is helpful in such situations, which certainly cannot be predicted nor determined. Motivated by this, we believe that a properly developed NLS is a suitable tool to define an additional gain parameter $\digamma_{k}$ dynamically on the basis of previous values of the objective $f$. The basic dilemma is how to determine the value $\digamma_{k}$ as an output of an appropriately defined NLS in each iterative stage of the flow (11), such that the final searching step $\digamma_{k}\ell_{k}$ forces a more rapid decrease of the objective $f$. Our goal is to investigate some possibilities for defining a proper NLS, define an additional parameter in the main IGD methods, and compare their effectiveness with respect to the original methods.

## Purpose of the Study

An NL is a better choice than the FL and the IFL in representing real-world data and processes because, in addition to the truth and falsity memberships, it models indeterminacy through the independent membership function $I$.

We originate and investigate a correlation between the possibilities of NLSs and the main methods available in nonlinear optimization. To be more precise, we will show that the learning rate parameter $\ell_{k}$ can be supported during the iterative process using an appropriate value $\digamma$, determined as the output of an appropriately defined NLS that involves suitably chosen membership functions $T,I,F$.

## Research Methods

The first research method is a rigorous convergence analysis.

The second research method assumes numerical testing and comparison of the obtained numerical data. Test problems are evaluated in ten dimensions, $n\in\{100, 500, 1000, 3000, 5000, 7000, 8000, 10000, 15000, 20000\}$, and average values are used. The codes are tested in MATLAB R2017a.

The third method comprises comparison based on the statistical ranking of the proposed optimization methods against corresponding known methods.

## Findings

In this section we define three NLC-based optimization methods and describe the principles of the NLS used in defining the parameter $\digamma_{k}$.

### NLC-based optimization methods

The fuzzy GD method (FGD) is determined by the iterative sequence

$\mathbf{x}_{k+1}=\mathcal{F}(GD)(\mathbf{x}_{k})=\mathcal{F}(\mathbf{x}_{k}-\ell_{k}\mathbf{g}_{k})=\mathbf{x}_{k}-\digamma_{k}\ell_{k}\mathbf{g}_{k}.$(14)

Fuzzy SM method (FSM) is defined as

$\mathbf{x}_{k+1}=\mathcal{F}(SM)(\mathbf{x}_{k})=\mathbf{x}_{k}-\digamma_{k}\ell_{k}(\gamma_{k}^{FSM})^{-1}\mathbf{g}_{k},$(15)

where

$\gamma_{k+1}^{FSM}=\Game\!\left(2\gamma_{k}^{FSM}\frac{\gamma_{k}^{FSM}[f_{k+1}-f_{k}]+\digamma_{k}\ell_{k}\|\mathbf{g}_{k}\|^{2}}{(\digamma_{k}\ell_{k})^{2}\|\mathbf{g}_{k}\|^{2}}\right).$(16)

The Fuzzy MSM method (FMSM) is defined by

$\mathbf{x}_{k+1}=\mathcal{F}(MSM)(\mathbf{x}_{k})=\mathbf{x}_{k}-\digamma_{k}\tau_{k}(\gamma_{k}^{FMSM})^{-1}\mathbf{g}_{k},$(17)

where

$\gamma_{k+1}^{FMSM}=\Game\!\left(2\gamma_{k}^{FMSM}\frac{\gamma_{k}^{FMSM}[f_{k+1}-f_{k}]+\digamma_{k}\tau_{k}\|\mathbf{g}_{k}\|^{2}}{(\digamma_{k}\tau_{k})^{2}\|\mathbf{g}_{k}\|^{2}}\right).$(18)

The overall structure of the optimization methods based on the usage of the NLS follows the philosophy described in the diagram of Figure 1.

### Neutrosophic logic system

To define the FMSM method, we need to define the steps of the score function, neutrosophication, and de-neutrosophication.

**Neutrosophication.** Using three membership functions, neutrosophic logic maps the input $\Finv_{k}:=f_{k+1}-f_{k}$ into neutrosophic triplets $\langle T(\Finv_{k}),I(\Finv_{k}),F(\Finv_{k})\rangle$. There is a huge number of convenient membership functions that can be used. Our empirical experience, based on a large number of numerical tests, led us to the following choice.

The truth-membership function is defined as the sigmoid function:

$T(\vartheta)=\frac{1}{1+e^{-c_{1}(\vartheta-c_{2})}}.$(19)

The parameter ${c}_{\mathrm{1}}$ is responsible for its slope at the crossover point $\vartheta ={c}_{\mathrm{2}}$. The falsity-membership function is the sigmoid function:

$F(\vartheta)=\frac{1}{1+e^{c_{1}(\vartheta-c_{2})}}.$(20)

The indeterminacy-membership function is the Gaussian function:

$I(\vartheta)=e^{-\frac{(\vartheta-c_{2})^{2}}{2c_{1}^{2}}},$(21)

where the parameter $c_{1}$ signifies the standard deviation, and $c_{2}$ denotes the mean. In general, the neutrosophication of the crisp value $\vartheta \in \mathbb{R}$ is its transformation into $\langle \vartheta: T(\vartheta),I(\vartheta),F(\vartheta)\rangle$, where the membership functions are defined in (19), (20) and (21).
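The membership functions (19)-(21) can be sketched as follows. The default parameter values $c_1=1$, $c_2=0$ are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def truth(v, c1=1.0, c2=0.0):
    """Truth-membership (19): increasing sigmoid, slope c1 at v = c2."""
    return 1.0 / (1.0 + np.exp(-c1 * (v - c2)))

def falsity(v, c1=1.0, c2=0.0):
    """Falsity-membership (20): the mirrored (decreasing) sigmoid."""
    return 1.0 / (1.0 + np.exp(c1 * (v - c2)))

def indeterminacy(v, c1=1.0, c2=0.0):
    """Indeterminacy-membership (21): Gaussian, mean c2, std c1."""
    return np.exp(-(v - c2) ** 2 / (2.0 * c1 ** 2))
```

Note that by construction $T(\vartheta)+F(\vartheta)=1$ for every $\vartheta$, while $I$ peaks at the crossover point $\vartheta=c_2$, where both sigmoids equal $1/2$.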

Since the final goal is to minimize $f(\mathbf{x})$, it is a straightforward decision to use $\Finv_{k}:=f(\mathbf{x}_{k+1})-f(\mathbf{x}_{k})$ as a measure in the developed NLC.

The neutrosophic rule between the input fuzzy set $\mathfrak{I}$ and the output fuzzy set under the neutrosophic format $\mathfrak{O}=\{T,I,F\}$ is described by the following "IF-THEN" rules:

$R_{1}:\ \text{If } \mathfrak{I}=P \text{ then } \mathfrak{O}=\{T,I,F\};\qquad R_{2}:\ \text{If } \mathfrak{I}=N \text{ then } \mathfrak{O}=\{T,I,F\}.$

The notations $P$ and $N$ stand for fuzzy sets, and indicate a positive and negative error, respectively.

**De-neutrosophication.** This step applies the conversion $\langle T(\Finv_{k}),I(\Finv_{k}),F(\Finv_{k})\rangle \to \digamma_{k}(\Finv_{k})\in \mathbb{R}$, resulting in a crisp real quantity $\digamma_{k}(\Finv_{k})$.

The following rule, which satisfies the requirement (12), is proposed to obtain the parameter $\digamma_{k}(\Finv_{k})$:

$\digamma_{k}(\Finv_{k})=\begin{cases}3-\left(T(\Finv_{k})+I(\Finv_{k})+F(\Finv_{k})\right), & \Finv_{k}<0,\\ 1, & \Finv_{k}=0,\\ 1-\left(T(\Finv_{k})+I(\Finv_{k})+F(\Finv_{k})\right)/c_{1}, & \Finv_{k}>0,\end{cases}$(22)

where ${c}_{\mathrm{1}}\ge \mathrm{3}$.
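Composing the membership functions (19)-(21) with the rule (22) yields the crisp factor, which can then drive the FGD iteration (14). The sketch below is illustrative: the choice $c_1=3$, $c_2=0$ is one instance satisfying $c_1\ge 3$, the fixed base step `l` stands in for the inexact line search output $\ell_k$, and the quadratic test function is hypothetical.

```python
import numpy as np

def neutro_factor(h, c1=3.0, c2=0.0):
    """Crisp factor (22) from the triplet <T, I, F> of h = f_{k+1} - f_k."""
    if h == 0:
        return 1.0
    T = 1.0 / (1.0 + np.exp(-c1 * (h - c2)))      # truth (19)
    F = 1.0 / (1.0 + np.exp(c1 * (h - c2)))       # falsity (20)
    I = np.exp(-(h - c2) ** 2 / (2.0 * c1 ** 2))  # indeterminacy (21)
    s = T + I + F
    # h < 0 (f decreased): factor > 1; h > 0 (f increased): factor < 1.
    return 3.0 - s if h < 0 else 1.0 - s / c1

def fgd(f, grad, x0, l=0.1, tol=1e-8, max_iter=5000):
    """FGD iteration (14): x_{k+1} = x_k - F_k * l * g_k, with a fixed
    base step l for simplicity."""
    x = np.asarray(x0, dtype=float)
    factor, f_old = 1.0, f(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - factor * l * g
        f_new = f(x)
        factor = neutro_factor(f_new - f_old)  # adapt the next step
        f_old = f_new
    return x

# Illustrative run on f(x) = ||x||^2.
x_min = fgd(lambda x: x.dot(x), lambda x: 2 * x, [2.0, -1.0])
```

Since the sigmoids satisfy $T+F=1$, the first branch of (22) reduces to $2-I(\Finv_k)\in[1,2)$ and the third to $1-(1+I(\Finv_k))/c_1<1$, so the requirement (12) holds automatically.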

### Statistical ranking

We compare six methods: FMSM, FSM, and FGD, based on the appropriately defined NLS, and the well-known MSM, SM, and GD methods. To this aim, we perform comparisons on standard test functions with given initial points from (Andrei, 2008; Bongartz et al., 1995). The MSM, SM, GD, FSM, FGD, and FMSM methods are compared with respect to three decisive criteria: the CPU time in seconds (CPUts), the number of iterative steps (NI), and the number of function evaluations (NFE).

The performances of the optimization methods GD, SM, MSM and their fuzzy duals FGD, FSM, FMSM are ranked on solving the 30 test functions.

Figure 2 shows the iteration performance rank of the optimization methods. A method receives rank 1 if it requires the fewest iterations among all the considered methods; the method with the second-fewest iterations receives rank 2, and the ranking continues up to rank 6. Figure 3 shows the function-evaluation performance rank of the optimization methods, and Figure 4 shows the CPU time consumption performance rank.
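The described per-problem ranking can be sketched as follows. The iteration counts below are purely hypothetical and serve only to illustrate the rank computation; ties are broken by column order in this sketch rather than averaged.

```python
import numpy as np

# Hypothetical iteration counts for 6 methods on 4 test problems
# (rows: problems, columns: methods); fewest iterations gets rank 1.
counts = np.array([
    [120,  95,  60, 110,  90,  55],
    [300, 250, 180, 280, 240, 170],
    [ 45,  40,  30,  44,  38,  28],
    [500, 470, 400, 490, 460, 390],
])

# Rank within each problem: argsort of argsort gives 0-based ranks.
ranks = counts.argsort(axis=1).argsort(axis=1) + 1
average_rank = ranks.mean(axis=0)   # lower average rank is better
```

Averaging the per-problem ranks over all test problems produces one summary score per method, which is how the figures compare the six methods.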

The general observation is that the FMSM is the best with respect to iterations' performance and the CPU time consumption performance. On the other hand, the MSM has the best function evaluation performance.

## Conclusion

Line search iterations for solving unconstrained optimization are improved utilizing an additional step parameter produced by an appropriately defined neutrosophic system. More precisely, using an appropriate neutrosophic logic, we propose a new approach to resolving the uncertainty in defining parameters involved in iterations for solving nonlinear optimization problems. The improvement is based on the utilization of neutrosophic logic in determining an appropriate step size usable in various gradient descent methods.

The performed theoretical analysis reveals convergence of the novel iterations under the same conditions as for the corresponding original methods. Numerical comparison and statistical ranking indicate advantages of the fuzzy and neutrosophic improvements of the underlying line search optimization methods. Our numerical experience shows that the neutrosophic parameter $\digamma_{k}$ is particularly efficient as an additional step size composed with previously defined parameters. Additional research can focus on neural network optimization (Mourtas & Katsikis, 2022b) or even portfolio optimization problems (Mourtas & Katsikis, 2022a).

## Acknowledgments

Predrag Stanimirović is supported by the Science Fund of the Republic of Serbia, (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications - QUAM).

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

## References

• Andrei, N. (2008). An unconstrained optimization test functions collection. Advanced Modeling and Optimization, 10(1), 147-161.

• Atanassov, K. T. (1986). Intuitionistic fuzzy sets. Fuzzy Sets and Systems, 20(1), 87-96.

• Bongartz, I., Conn, A. R., Gould, N., & Toint, P. L. (1995). CUTE: Constrained and unconstrained testing environment. ACM Transactions on Mathematical Software (TOMS), 21(1), 123-160.

• Christianto, V., & Smarandache, F. (2019). A review of seven applications of neutrosophic logic: In cultural psychology, economics theorizing, conflict resolution, philosophy of science, etc. Multidisciplinary Scientific Journal, 2, 128-137.

• Guo, Y., Cheng, H. D., & Zhang, Y. (2009). A new neutrosophic approach to image denoising. New Mathematics and Natural Computation, 5, 653-662.

• Ivanov, B., Stanimirović, P. S., Milovanović, G. V., Djordjević, S., & Brajević, I. (2021). Accelerated multiple step-size methods for solving unconstrained optimization problems. Optimization Methods and Software, 36(5), 998-1029.

• Mishra, K., Kandasamy, I., Kandasamy, W. B. V., & Smarandache, F. (2020). A novel framework using neutrosophy for integrated speech and text sentiment analysis. Symmetry, 12(10), 1715.

• Mourtas, S. D., & Katsikis, V. N. (2022a). V-Shaped BAS: Applications on Large Portfolios Selection Problem. Computational Economics, 60(4), 1353-1373.

• Mourtas, S. D., & Katsikis, V. N. (2022b). Exploiting the Black-Litterman framework through error-correction neural networks. Neurocomputing, 498, 43-58.

• Nocedal, J., & Wright, S. J. (1999). Numerical Optimization. Springer-Verlag.

• Petrović, M., Rakočević, V., Kontrec, N., Panić, S., & Ilić, D. (2018). Hybridization of accelerated gradient descent method. Numerical Algorithms, 79(3), 769-786.

• Smarandache, F. (2003). A Unifying Field in Logics: Neutrosophic Logic. Neutrosophy, Neutrosophic Set, Neutrosophic Probability. American Research Press, Rehoboth.

• Smarandache, F. (2016). Neutrosophic logic - A generalization of the intuitionistic fuzzy logic.

• Stanimirović, P. S., & Miladinović, M. B. (2010). Accelerated gradient descent methods with line search. Numerical Algorithms, 54(4), 503-520.

• Sun, W., & Yuan, Y.-X. (2006). Optimization Theory and Methods: Nonlinear Programming. Springer Optimization and Its Applications. Springer.

• Wang, H., Smarandache, F., Zhang, Y. Q., & Sunderraman, R. (2010). Single valued neutrosophic sets. Multispace and Multistructure, 4, 410-413.

• Zadeh, L. A. (1965). Fuzzy sets. Information and Control, 8(3), 338-353.

27 February 2023

#### Article DOI

https://doi.org/10.15405/epct.23021.17

#### eBook ISBN

978-1-80296-960-3

#### Publisher

European Publisher
