Abstract
Dynamic multiobjective optimization problems are a challenging and currently not well understood class of optimization problems, but they are important since many real-world optimization problems change over time. When changes appear in the problem, it is necessary to adapt to them in such a way that the convergence rate remains sufficiently high. This work is devoted to the analysis of the efficiency of Pareto local search algorithms for dynamic multiobjective optimization problems as a method to increase the convergence rate. We propose a hybrid of global and local search algorithms: we use the NSGA-II algorithm as a global optimizer and L-BFGS-B as a local search algorithm. The IEEE CEC 2018 benchmark set is used for the experimental comparison of the investigated approaches. The experimental results show that the proposed hybridization of NSGA-II and a local search algorithm can efficiently identify the Pareto front when changes in the environment are not intense.
Keywords: Dynamic multiobjective optimization problem, Pareto local search, NSGA-II
Introduction
Many real-world optimization problems exist in a dynamic environment, and it is necessary to take into account changes that continuously occur in that environment. Such problems arise in different application areas such as control, engineering, and so on.
Dynamic multiobjective optimization is a complex and currently not well studied class of hard optimization problems. In such problems, objective functions, their parameters, and constraints imposed on the search space can change over time (Yazdani, 2021). Objective functions are assumed to be presented by a "black-box" model; therefore, the forms of the objective functions and their properties remain unknown and are not involved in the optimization process. Also, there is no possibility to evaluate derivatives of the objective functions, and this fact significantly complicates the choice of a suitable method for solving problems of this class.
Approaches for solving dynamic multiobjective optimization problems (DMOOP) based on evolutionary algorithms (EA) have been presented in many publications. Local search has shown successful results as one of the mechanisms for improving EAs in static single- and multiobjective optimization problems. The same approach can be implemented for improving DMOOP solutions.
The rest of the paper is organized as follows. Section 2 presents the description of DMOOP. Section 3 contains experimental setups and some general settings. In Section 4, the experimental results are presented. In conclusion, the obtained results are summarized, and some further ideas are suggested.
Problem Statement
Dynamic multiobjective optimization problem
A dynamic multiobjective optimization problem can be defined as follows:
$\left\{ f_1(\bar{x}, \bar{\alpha}(t)),\ f_2(\bar{x}, \bar{\alpha}(t)), \dots, f_k(\bar{x}, \bar{\alpha}(t)) \right\} \to \min_{\bar{x}},$ (1)
here $f_i$ denotes an objective function, $i = \overline{1,k}$; $k$ is the number of objective functions; $\bar{x} \in S$ is a solution vector from the feasible search region $S$; $\bar{\alpha}(t)$ is a parameter vector of the objective functions that changes over time; $t \in [0, T]$ is the time interval in which the problem is considered.
In real-world problems, the vector $\bar{\alpha}(t)$ can contain parameters of the external environment (such as temperature, available resources, and so on) that affect the objective function. In moving peaks benchmark problems, this can be a parameter of the peaks' depth, width, and location. The vector $\bar{\alpha}(t)$ can also include other variable parameters (for example, the number of variables of the objective function) (Azzouz et al., 2017).
A DMOOP includes two or more objective functions that must be optimized simultaneously. Due to changes in the environment, the Pareto-optimal set (PS) and the Pareto-optimal front (PF) also change over time. Therefore, within the search process it is necessary to maintain sufficient diversity to be able to efficiently approximate the PF of the problem and to adapt to appearing changes. This is one of the reasons why EAs with a population of solutions are an efficient tool for solving DMOOPs.
Pareto local search
Local search is used in EAs as a tool that improves the convergence rate of EAs (Sindhya et al., 2008). The main feature of local search algorithms is that a solution changes only within some local area of the search space. Moreover, local search algorithms have proven convergence to a local optimum.
However, in the case of a multiobjective problem it is necessary to take into account the values of all objective functions. For this purpose, a simple convolution of the objective functions can be used, or a measure that shows the distance between a reference point (usually the worst point) and the obtained PS. Thus, different approaches that use local search (LS) in multiobjective optimization have been proposed as Pareto local search approaches (Chen et al., 2014; Ishibuchi & Murata, 1996; Sindhya et al., 2008). The aim of a Pareto LS algorithm is not to find a single final solution but to approximate the PF of the problem as accurately as possible.
A local search procedure is applied to solutions obtained by the EA. For example, Sindhya et al. (2008) proposed a Pareto LS approach defined as the minimization of the following objective function:
$\max_{i=\overline{1,k}} \frac{f_i(\bar{x}) - z_i}{f_i^{max} - f_i^{min}} + \rho \sum_{j=1}^{M} \frac{f_j(\bar{x}) - z_j}{f_j^{max} - f_j^{min}} \to \min_{\bar{x}},$ (2)
here $z_i$ is a reference point; $f_i^{max}$ and $f_i^{min}$ are the maximum and minimum objective values in the population; $\rho$ is a small positive augmentation coefficient.
As a result, we obtain a single-objective optimization problem, which can be solved by any known LS method for single-objective optimization.
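As a minimal sketch of this idea, the scalarization (2) can be minimized with SciPy's L-BFGS-B, the local searcher used later in this paper. The bi-objective test functions, the reference point, and the normalization values below are hypothetical illustration choices, not the paper's actual setup:

```python
import numpy as np
from scipy.optimize import minimize

def asf(x, objectives, z, f_min, f_max, rho=1e-4):
    """Augmented achievement scalarizing function of Eq. (2).

    objectives: list of callables f_i(x); z: reference point;
    f_min, f_max: per-objective min/max taken from the population.
    """
    f = np.array([obj(x) for obj in objectives])
    norm = (f - z) / (f_max - f_min)      # normalized deviations from z
    return norm.max() + rho * norm.sum()  # max term plus small augmentation

# Toy bi-objective problem (illustrative only): Pareto set is x2 = 0, x1 in [0, 1]
objectives = [lambda x: x[0] ** 2 + x[1] ** 2,
              lambda x: (x[0] - 1) ** 2 + x[1] ** 2]

x0 = np.array([0.8, 0.5])
z = np.array([0.0, 0.0])                             # ideal-like reference point
f_min, f_max = np.array([0.0, 0.0]), np.array([2.0, 2.0])

# L-BFGS-B with a numerical gradient, as in the paper's setup
res = minimize(asf, x0, args=(objectives, z, f_min, f_max),
               method="L-BFGS-B", bounds=[(-1, 2), (-1, 2)])
print(res.x)  # a point on or near the Pareto set
```

Because (2) contains a max term it is nonsmooth, so in practice the numerical gradient only approximates a subgradient near the kink; L-BFGS-B nevertheless typically converges close to a Pareto-optimal point.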
Research Questions
In this research the following questions were raised:
 Does the use of local search improve the solving of DMOO problems?
 Which scheme of hybridization of local and global search is the most effective?
Purpose of the Study
The purpose of this research is to investigate whether using LS together with an EA increases the performance of solving DMOOPs.
Research Methods
Hybridization of local and global search
The hybridization of local and global search can be implemented in different ways. In this research, three approaches to hybridization have been considered: applying LS before the global search, applying LS after the global search, and alternating LS with the global search.
In these three approaches, LS is applied to the whole population. However, in the case of the alternation of LS and global search, it is possible to use only a part of the individuals in LS, which allows more LS steps to be performed. In this research, LS uses 10 and 25 solutions from a population of size 100. The total number of objective function evaluations is the same in all approaches.
Two different objective functions for LS are compared: the function presented in (2) and the negative hypervolume (HV). HV is an accuracy measure in multiobjective optimization that evaluates the region of the objective space dominated by the obtained PF and bounded by a reference point.
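For the two-objective minimization problems used here, the hypervolume can be computed as a staircase of rectangles between the sorted front and the reference point; LS then minimizes the negative of this value. The following is a minimal sketch assuming a nondominated 2-D front (the example points and reference point are illustrative):

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. reference point ref.

    front: iterable of nondominated (f1, f2) points; ref must be dominated
    by every front point. Sorting by f1 makes f2 strictly decreasing, so
    the dominated region decomposes into axis-aligned rectangles.
    """
    pts = sorted(front, key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangle added by this point
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # 3*1 + 2*1 + 1*1 = 6.0
```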
When using LS, it is necessary to set the bounds within which the variables of the LS objective function can change. In this research, the bounds are computed as follows:
$bounds = \left\{ x_i \pm d_i \right\},\ i = \overline{1,n},$ (3)
here $n$ is the number of variables of the objective function; $x_i$ is the obtained solution; $d_i = n_d \cdot \left| x_i^{max} - x_i^{min} \right|$, where $n_d \in [0,1]$ is the rate of LS, which is set experimentally in this research. In the case of a low rate, the solutions obtained by LS are not distributed uniformly along the PF. An example of the distribution of solutions is shown in Figures 1 and 2.
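The bound computation of Eq. (3) can be sketched as follows. Clipping the box to the original feasible region is our own addition (the paper does not state how bounds interact with the search-space limits), and the value of `n_d` is illustrative:

```python
import numpy as np

def ls_bounds(x, x_min, x_max, n_d=0.1):
    """Box bounds for the LS subproblem, following Eq. (3).

    Each variable may move at most d_i = n_d * |x_i_max - x_i_min|
    around the current solution x. Clipping to [x_min, x_max] is an
    assumption, not stated in the paper. n_d in [0, 1] is the LS rate.
    """
    d = n_d * np.abs(np.asarray(x_max, dtype=float) - np.asarray(x_min, dtype=float))
    lo = np.clip(x - d, x_min, x_max)
    hi = np.clip(x + d, x_min, x_max)
    return list(zip(lo, hi))  # format accepted by scipy's L-BFGS-B bounds

print(ls_bounds(np.array([0.5, 0.9]), [0.0, 0.0], [1.0, 1.0], n_d=0.1))
```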
Experimental settings
In this study, the multiobjective optimization algorithm NSGA-II has been used as the global search method for solving DMOOPs. NSGA-II is one of the well-known methods; its efficiency has been shown in many publications (Deb et al., 2002). A DMOOP is considered as a sequence of multiobjective problems on a discrete time interval $t \in \{1, \dots, T\}$:
$\left\{ f(\bar{x}, \bar{\alpha}(1)) \to \min_{\bar{x}},\ f(\bar{x}, \bar{\alpha}(2)) \to \min_{\bar{x}}, \dots, f(\bar{x}, \bar{\alpha}(T)) \to \min_{\bar{x}} \right\}.$ (4)
When a change in the environment occurs, a new population is formed from a half of solutions from the previous population and a half of randomly generated solutions (Rurich et al., 2022).
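The re-initialization step above can be sketched as follows. The paper does not state how the retained half is selected (e.g. by nondominated sorting), so taking the first half is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def reinit_population(population, x_min, x_max, rng=rng):
    """On an environment change, keep half of the previous population and
    replace the other half with random solutions (Rurich et al., 2022).
    Which half is kept is a design choice; the first half is used here
    for simplicity."""
    pop = np.asarray(population)
    n, dim = pop.shape
    kept = pop[: n // 2]
    fresh = rng.uniform(x_min, x_max, size=(n - n // 2, dim))
    return np.vstack([kept, fresh])

old = rng.uniform(0.0, 1.0, size=(100, 10))
new = reinit_population(old, 0.0, 1.0)
print(new.shape)  # (100, 10)
```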
We use the L-BFGS-B algorithm as the LS; in our implementation of L-BFGS-B, numerical gradient approximations are used. The IEEE CEC 2018 Dynamic Multiobjective Optimization Benchmark has been used as the test set (Jiang et al., 2018); it contains different types of DMOOPs with different levels of complexity.
IGD (Inverted Generational Distance) and HV are used to evaluate the accuracy of the obtained solutions. IGD shows the difference between the obtained PF and the true PF. The DMOOP benchmark uses averaged measures (denoted MIGD and MHV):
$MIGD = \frac{1}{T} \sum_{t=1}^{T} IGD\left(P_t^{*}, P_t\right) = \frac{1}{T} \sum_{t=1}^{T} \sum_{i=1}^{n_{P_t}} \frac{d_t^i}{n_{P_t}},$ (5)
here $n_{P_t} = \left| P_t \right|$; $d_t^i$ is the Euclidean distance between the $i$th element of $P_t$ and the nearest element of $P_t^{*}$; $T$ is the number of time intervals.
$MHV = \frac{1}{T} \sum_{t=1}^{T} HV_t\left(P_t^{*}\right),$ (6)
here $HV$ is the hypervolume operator. Jiang et al. (2018) recommend calculating the reference point as $(z_1 + 0.5,\ z_2 + 0.5, \dots,\ z_m + 0.5)$, where $z_j$ is the maximum value of the $j$th objective function over the true PF at the $t$th time moment and $m$ is the number of objective functions.
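The recommended reference point can be computed directly from the true PF at a given time step; a short sketch with an illustrative front:

```python
import numpy as np

def hv_reference_point(true_pf):
    """Reference point recommended by Jiang et al. (2018):
    (z_1 + 0.5, ..., z_m + 0.5), where z_j is the maximum of the j-th
    objective over the true PF at the current time step."""
    return np.asarray(true_pf, dtype=float).max(axis=0) + 0.5

pf_t = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(hv_reference_point(pf_t))  # [1.5 1.5]
```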
Findings
Tables 1 and 2 contain the experimental results with IGD and HV averaged over 20 independent runs. The highlighted values in the tables are the best value for each problem among the considered approaches.
Table 1 shows that using NSGA-II with LS achieves results that outperform standalone NSGA-II. The alternation of LS and NSGA-II and LS before NSGA-II show the best results. Table 2 presents the results of applying LS to all individuals, to 10 individuals, and to 25 individuals. We can see that the best results are obtained when only a part of the solutions is used in LS.
Conclusion
The results show that using LS improves the accuracy of solving dynamic multiobjective optimization problems. LS applied to a part of the population shows better results than LS applied to the whole population. This result suggests that the hybridization of local and global search for dynamic multiobjective optimization problems deserves more detailed consideration.
Ideas for further research are as follows: consider other schemes and approaches for the hybridization of local and global search; investigate the relationship between the number of individuals used in LS to achieve the best result and the rate of changes; investigate how the rate of the local search can be tuned.
Acknowledgments
This research was funded by the Ministry of Science and Higher Education of the Russian Federation, Grant No. 075-15-2022-1121.
References
Azzouz, R., Bechikh, S., & Ben Said, L. (2017). Dynamic Multi-objective Optimization Using Evolutionary Algorithms: A Survey. In S. Bechikh, R. Datta, & A. Gupta (Eds.), Recent Advances in Evolutionary Multi-objective Optimization. Adaptation, Learning, and Optimization (Vol. 20, pp. 31-70). Springer, Cham.
Chen, B., Zeng, W., Lin, Y., & Zhang, D. (2014). A New Local Search-Based Multiobjective Optimization Algorithm. IEEE Transactions on Evolutionary Computation, 19(1), 50-73.
Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197.
Ishibuchi, H., & Murata, T. (1996). Multi-objective genetic local search algorithm. Proceedings of IEEE International Conference on Evolutionary Computation, Nagoya, Japan (20-22 May 1996), 119-124.
Jiang, S., Yang, S., Yao, X., Tan, K. C., Kaiser, M., & Krasnogor, N. (2018). Benchmark Problems for CEC2018 Competition on Dynamic Multiobjective Optimisation. https://www.semanticscholar.org/paper/BenchmarkProblemsforCEC2021CompetitiononLiuLin/0afebb7dd4c537d28213bf8e0705caeae6482763
Rurich, M. A., Vakhnin, A. V., & Sopov, E. A. (2022). The comparison of efficiency of the population formation approaches in the dynamic multiobjective optimization problems. Siberian Aerospace Journal, 23(2), 227-240.
Sindhya, K., Deb, K., & Miettinen, K. (2008). A Local Search Based Evolutionary Multi-objective Optimization Approach for Fast and Accurate Convergence. Parallel Problem Solving from Nature - PPSN X, 815-824.
Yazdani, D. (2021). A Survey of Evolutionary Continuous Dynamic Optimization Over Two Decades - Part A. IEEE Transactions on Evolutionary Computation, 25(4), 609-629.
Copyright information
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
About this article
Publication Date
27 February 2023
Article DOI
10.15405/epct.23021.39
eBook ISBN
9781802969603
Publisher
European Publisher
Volume
1
Edition Number
1st Edition
Pages
1403
Subjects
Hybrid methods, modeling and optimization, complex systems, mathematical models, data mining, computational intelligence
Cite this article as:
Rurich, M., & Sherstnev, P. (2023). A Hybridization of Local and Global Search for Dynamic Multi-Objective Optimization Problem. In P. Stanimorovic, A. A. Stupina, E. Semenkin, & I. V. Kovalev (Eds.), Hybrid Methods of Modeling and Optimization in Complex Systems, vol. 1. European Proceedings of Computers and Technology (pp. 321-327). European Publisher. https://doi.org/10.15405/epct.23021.39