Performance assessment of the MOMA-Plus method on multiobjective optimization problems

This work is devoted to evaluating the performance of the MOMA-Plus method in solving multiobjective optimization problems. The assessment covers the complexity of its algorithm and the convergence and diversity of the solutions with respect to the Pareto front. All these criteria were evaluated on nonlinear multiobjective test problems, and the obtained solutions are compared with those provided by the NSGA-II method. This comparative study highlights the performance of the MOMA-Plus method for solving nonlinear multiobjective problems.

2020 Mathematics Subject Classifications: 90C29, 90C30, 49M30, 49M37


Introduction
Multi-objective optimization has been, for several decades, a discipline of mathematics that makes it possible to solve optimization problems in which several criteria are involved simultaneously [1,9,13,14]. Several researchers have developed methods and algorithms to find compromise solutions that are as close as possible to the best values of the criteria while satisfying the constraints. Most of these methods can be classified into two groups: exact methods and metaheuristics. Metaheuristics were developed to overcome the shortcomings of direct or traditional methods: on some kinds of problems, direct methods are unable to find Pareto optimal solutions, to converge quickly towards the Pareto front, or to give a good distribution of solutions along the Pareto front. Many metaheuristics can be found in the literature, among which NSGA-II is one of the best known. In this article, we evaluate the performance of the MOMA-Plus method on some test problems. For this purpose, the obtained results are compared, on these test problems, with those of the NSGA-II method developed by K. Deb et al. [1]. We thereby provide a complete study of the performance of the MOMA-Plus method: in all the former works on MOMA-Plus, no study had been done on the complexity of the algorithm or on the convergence and diversity of the solutions on the Pareto front.
To present our work clearly, we first describe the MOMA-Plus method. We then study its performance: computation of the convergence and diversity metrics, as well as the complexity of the MOMA-Plus algorithm. Finally, we compute the performance indices for each test problem for the MOMA-Plus and NSGA-II methods, before concluding.

MOMA-Plus method
Let n, m and p be natural integers, and consider the following multiobjective optimization problem:

(1)  min (f_1(x), ..., f_p(x))  subject to  g_i(x) ≤ 0, i = 1, ..., m,

where f_j, j = 1, ..., p are the objective functions, g_i, i = 1, ..., m are the constraints of the problem and x = (x_1, ..., x_n) are the decision variables. Some definitions are necessary for a good understanding of this work. Let S be the set of feasible solutions, i.e. S = {x ∈ R^n : g_1(x) ≤ 0; ...; g_m(x) ≤ 0}.
Definition 1. A solution x* ∈ S is Pareto optimal if there is no x ∈ S such that f_j(x) ≤ f_j(x*) for all j ∈ {1, ..., p} and f_k(x) < f_k(x*) for a certain k ∈ {1, ..., p}.

Definition 2. The ideal point is the vector z ∈ R^p whose components z_j are obtained by individually minimizing each objective function f_j subject to all the constraints:

(2)  z_j = min_{x ∈ S} f_j(x), j = 1, ..., p.

The steps of the MOMA-Plus method are as follows: (i) aggregation of the objective functions; (ii) penalization of the constraints; (iii) reduction of the search space; (iv) resolution in the reduced search space; (v) configuration of the solution back to the initial space.
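As a small illustration of the ideal point, the following sketch minimizes each objective individually over a sampled feasible set. The bi-objective functions, the sampling grid and all names are ours, not the paper's:

```python
# Illustrative sketch: the ideal point z is obtained by minimizing each
# objective individually over the feasible set.  The two objectives and
# the sampling grid are toy choices, not the paper's.

def f1(x):
    return x ** 2

def f2(x):
    return (x - 2.0) ** 2

# Crude sampling of the feasible set S; a real implementation would
# minimize each f_j subject to the constraints g_i(x) <= 0.
S = [i / 100.0 for i in range(-300, 301)]  # x in [-3, 3]

z = (min(f1(x) for x in S), min(f2(x) for x in S))
print(z)  # each component is the individual minimum of one objective
```

In practice each z_j would be computed by a constrained single-objective solver rather than by grid sampling.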

Aggregation of objective functions
The MOMA-Plus method uses an aggregation technique to transform the multiobjective optimization problem into a single-objective one. The aggregation function used here is the weighted Tchebychev distance, because all the problems considered are nonlinear. It is defined by the following equation:

(3)  d(f(x), z) = max_{k=1,...,p} λ_k |f_k(x) − z_k|,

where the coefficients λ_k, k = 1, ..., p are the weights of the objective functions, with Σ_{k=1}^{p} λ_k = 1.
By applying (3) to problem (1) we obtain:

(4)  min_{x ∈ S} max_{k=1,...,p} λ_k |f_k(x) − z_k|.

Problem (4) is mono-objective and can therefore have a global optimum for a fixed λ.
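The weighted Tchebychev aggregation can be sketched as follows; the toy objectives, the assumed ideal point and the weights are illustrative, not taken from the paper:

```python
# Sketch of the weighted Tchebychev aggregation: the scalarized value
# is max_k lambda_k * |f_k(x) - z_k|.

def tchebychev(fs, z, lam, x):
    """Weighted Tchebychev distance between f(x) and the ideal point z."""
    return max(l * abs(f(x) - zk) for f, zk, l in zip(fs, z, lam))

f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
z = (0.0, 0.0)    # assumed ideal point of the toy problem
lam = (0.5, 0.5)  # weights summing to 1

print(tchebychev((f1, f2), z, lam, 1.0))
```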
The following theorem proves the equivalence between the solutions of the initial problem and those of the aggregated problem.
Theorem 1. Any Pareto optimal solution of problem (1) is an optimal solution of problem (4), and conversely.
Proof. Let x* be a Pareto optimal solution of problem (1) and write I = {1, 2, ..., p}. Then there is no x ∈ S such that

f_j(x) ≤ f_j(x*) for all j ∈ I, with f_k(x) < f_k(x*) for at least one k ∈ I.

Suppose that x* is not an optimal solution of problem (4). Then there exists x ∈ S such that

max_{k∈I} λ_k |f_k(x) − z_k| < max_{k∈I} λ_k |f_k(x*) − z_k|,

which would give a solution dominating x*, contradicting its Pareto optimality. Consequently, x* is an optimal solution of problem (4). Conversely, let x* be an optimal solution of problem (4). Then

max_{k∈I} λ_k |f_k(x*) − z_k| ≤ max_{k∈I} λ_k |f_k(x) − z_k| for all x ∈ S.

Suppose that x* is not a Pareto optimal solution of problem (1). Then there exists x ∈ S such that f_j(x) ≤ f_j(x*) for all j ∈ I, with at least one strict inequality, which contradicts the optimality of x* for problem (4). Consequently, x* is a Pareto optimal solution of problem (1).

Penalization
This step consists in transforming problem (4) into an optimization problem without constraints. The penalty function used here derives from the Lagrangian function and is defined in [21]; η is the penalty coefficient, chosen sufficiently large. Using this penalty function, problem (4) becomes the unconstrained single-objective problem

(6)  min_{x ∈ D} L(x),

where D is a subset of R^n defined by the bounds of the variables. The following theorem characterizes the global optimum of problem (6).
Theorem 2. Let x* ∈ S be a point that realizes the global minimum of problem (6). Then x* is a point that realizes the global minimum of problem (4).
Proof. Suppose that x* realizes the global minimum of problem (6). Then L(x*) ≤ L(x) for all x ∈ S. By the definition of the set S, the penalty term vanishes on S, so this inequality reduces to the objective of problem (4). Hence x* also realizes the global minimum of problem (4).
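The penalization step can be sketched with a generic quadratic exterior penalty; the paper's exact penalty function, derived from the Lagrangian in its reference [21], may differ in form, and the toy objective and constraint below are ours:

```python
# Hedged sketch of the penalization step: a generic quadratic exterior
# penalty turns the constrained scalarized problem into an
# unconstrained one.

def penalized(F, gs, eta, x):
    """F(x) plus eta times the squared violations of g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in gs)
    return F(x) + eta * violation

F = lambda x: (x - 1.0) ** 2  # toy scalarized objective
g1 = lambda x: -x             # encodes the constraint x >= 0

print(penalized(F, [g1], 1e3, 2.0))   # feasible point: penalty is zero
print(penalized(F, [g1], 1e3, -1.0))  # infeasible point: penalty added
```

For feasible points the penalty term vanishes, which is what the proof of Theorem 2 relies on.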

Reduction of the search space
Definition 3. The Alienor transformation is any transformation that reduces a function of several variables to a function of a single variable using α-dense curves, which corresponds to a reduction of the search space.
The α-dense curves are studied in [6], which the interested reader may consult. The Alienor transformation used in this work is that of Konfé-Cherruault [4], given by equation (7), in which ω_i and φ_i are slowly increasing sequences and θ ∈ [0; θ_max]. Several types of Alienor transformations exist in the literature; interested readers may consult [2,3,5,6,12].
By applying equation (7) to the variables of problem (6), we obtain a single-objective, unconstrained, one-variable optimization problem:

(8)  min_{θ ∈ [0; θ_max]} F(θ), with F(θ) = L(x_1(θ), ..., x_n(θ)).

Theorem 3. Any minimum of problem (6) can be approached by a minimum of problem (8).
Solving problem (8) amounts to finding the value of θ that minimizes the function F.
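A reducing transformation of Alienor type can be sketched as follows. The cosine form and the sequences omegas, phis below are generic illustrations of a dense curve in a box, not the exact Konfé-Cherruault transformation of equation (7):

```python
import math

# Illustrative sketch of a reducing (Alienor-type) transformation: every
# decision variable x_i is expressed as a function of one parameter
# theta through a dense curve.  The cosine form and coefficients are
# generic; equation (7) in the paper uses its own.

def alienor(theta, bounds, omegas, phis):
    """Map the single variable theta to a point x in the given box."""
    return [a + (b - a) * 0.5 * (1.0 + math.cos(w * theta + phi))
            for (a, b), w, phi in zip(bounds, omegas, phis)]

bounds = [(0.0, 1.0), (-1.0, 1.0)]
omegas = [1.0, 1.5]  # slowly increasing sequence (illustrative)
phis = [0.0, 0.5]    # slowly increasing sequence (illustrative)

x = alienor(0.7, bounds, omegas, phis)
print(x)  # a point of the box reached by the single variable theta
```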

Resolution in the reduced space
The MOMA-Plus method discretizes the search interval and applies the Nelder-Mead algorithm in the neighborhood of each discretization point; this process is repeated until the whole domain is covered. The Nelder-Mead method, known as fminsearch in MATLAB, is very effective for single-variable optimization [16]. To maximize the chances of obtaining the global optimum, the search domain is discretized into nested subdomains with centers x_i, and a local solution is sought in the neighborhood of each point. The global optimum is then chosen among these local solutions.
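This discretize-then-search strategy can be sketched as follows. For brevity, a golden-section search stands in for Nelder-Mead (MATLAB's fminsearch), and the one-variable function F is a toy example, not the paper's reduced objective:

```python
import math

# Sketch of the resolution in the reduced space: the interval
# [0, theta_max] is discretized, a local one-variable search is run on
# each subinterval, and the best local result is kept as the global
# optimum.

def golden_section(F, a, b, tol=1e-8):
    """Minimize F on [a, b], assuming one local minimum in the interval."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if F(c) < F(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0

def F(theta):  # toy multimodal reduced objective
    return math.cos(3.0 * theta) + 0.02 * (theta - 5.0) ** 2

theta_max, N = 10.0, 20
h = theta_max / N
candidates = [golden_section(F, k * h, (k + 1) * h) for k in range(N)]
best = min(candidates, key=F)
print(best, F(best))  # the global minimum among the local candidates
```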

Configuration of the obtained solutions to the initial space
After the execution of the Nelder-Mead algorithm, the last step of MOMA-Plus is the configuration of the obtained solutions, i.e. the transition from the optimal θ back to the variables x_i using formulation (7). Note that this configuration of solutions provides all the Pareto optimal solutions of the initial problem.

MOMA-Plus algorithm
The algorithm of the MOMA-Plus method is as follows:
For each weight vector λ_k:
(i) aggregate the objective functions (problem (4));
(ii) penalize the constraints (problem (6));
(iii) reduce the search space with the Alienor transformation (problem (8));
(iv) solve problem (8) in the reduced space with the Nelder-Mead algorithm;
(v) configure the obtained solution back to the initial space;
End for
(vi) Display the solution x corresponding to each λ_k value.
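The overall loop can be sketched on a toy unconstrained bi-objective problem, with a brute-force grid search standing in for the reduction, local-search and configuration steps; all names and the toy problem are ours:

```python
# High-level sketch of the MOMA-Plus loop: for each weight vector
# lambda, the weighted Tchebychev scalarization is minimized and the
# minimizer is stored; the collected minimizers approximate the
# Pareto set.

f1 = lambda x: x ** 2
f2 = lambda x: (x - 2.0) ** 2
z = (0.0, 0.0)  # assumed ideal point of the toy problem

grid = [i / 200.0 for i in range(-200, 601)]  # x in [-1, 3]
K = 5
pareto = []
for k in range(K + 1):
    lam = (k / K, 1.0 - k / K)  # sweep the weights, always summing to 1
    scalarized = lambda x: max(lam[0] * abs(f1(x) - z[0]),
                               lam[1] * abs(f2(x) - z[1]))
    pareto.append(min(grid, key=scalarized))

print(pareto)  # minimizers sweep the Pareto set between x = 2 and x = 0
```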

Performance analysis
The performance analysis is carried out on the test problems defined below. It is mainly based on computational complexity, convergence and diversity.

Complexity study
The MOMA-Plus algorithm starts with the scalarization, which involves a comparison of the p quantities λ_k(f_k − z_k); its worst-case complexity is O(p²). The penalization step has worst-case complexity O(m), where m is the number of constraints. The transformation of the decision variables into a single variable is at worst O(n). Let J + 1 be the size of the simplex formed for applying the Nelder-Mead algorithm, where J is the dimension of the search space. In our work the search space is one-dimensional, so J = 1 and one run of the Nelder-Mead algorithm in the MOMA-Plus method has constant complexity. Since the search interval is discretized into N + 1 points, the overall complexity of the Nelder-Mead step is O(N). After the optimum has been found by the Nelder-Mead algorithm, the final step reconstructs the global solution. Thus, for one iteration, the complexity is O(p² + m + n + N). If K is the number of weight vectors, the final complexity of the MOMA-Plus algorithm is O(K(p² + m + n + N)). This result is justified in [18].

The other method presented in this work, with whose results those of MOMA-Plus will be compared, is NSGA-II (Non-dominated Sorting Genetic Algorithm II), developed by Deb et al. [1]. It uses the concept of elitism, whereby the best solutions are preserved, together with the notions of Pareto dominance and crowding distance [1], and crossover and mutation operations for the creation of the next generation. This makes the algorithm converge quickly towards the optimal solutions.
The complexity of the NSGA-II method is O(p·n²) [1], where p is the number of objective functions and n is the population size. With regard to the complexity of MOMA-Plus, we can confirm that the method is effective, since it belongs to the class of polynomial-time algorithms.

Test problems and numerical experiments
The problems below were solved by K. Deb in [1], where a comparative study between NSGA-II and algorithms such as SPEA and PAES is carried out. The result of this study was that the NSGA-II method provided satisfactory results compared with the other methods. In this work, we propose a comparative study of the performance, through the convergence and diversity metrics, of the MOMA-Plus and NSGA-II methods. The problems we have used are recorded in the table below. Note that the domain of the decision variables of the multimodal problem (PLN4) has been changed with respect to the initial search space.

Graphical results MOMA-Plus/NSGA-II
In this section we present the results of the simulations of the multiobjective problems. These simulations comprise the analytical Pareto front and the fronts obtained by the MOMA-Plus and NSGA-II methods. For the simulations we used MATLAB R2013b as simulation and programming software. For the NSGA-II method we denote: GEN = number of generations, POP = population size, PF = Pareto fraction, MUT = type of mutation, CRS = type of crossover, ST-GEN = stall generations. The crossover parameter used for these problems is "crossoverscattered".

MIN-EX Problem
The resolution parameters of the MIN-EX problem are: • for the NSGA-II method:

SCH problem
The resolution parameters of the SCH problem are: • for the NSGA-II method: • for the MOMA-Plus method:

PLN1 problem
The resolution parameters of the PLN1 problem are: • for the NSGA-II method:

PLN2 problem
The resolution parameters of the PLN2 problem are: • for the MOMA-Plus method: • for the NSGA-II method:

PLN3 problem
The resolution parameters of the PLN3 problem are: • for the MOMA-Plus method: • for the NSGA-II method:

PLN4 problem
The resolution parameters of the PLN4 problem are: • for the MOMA-Plus method:

Remark 2. The problems solved so far admit continuous analytical Pareto-optimal fronts, so we have represented in the same figure the analytical Pareto front together with the front provided by the MOMA-Plus or NSGA-II method. This will not be possible for the following problems.

POL Problem
The set of Pareto optimal solutions of the POL problem is not continuous; therefore, it has no analytical front and we cannot represent the two fronts in the same figure. We therefore represent only the fronts obtained by MOMA-Plus and NSGA-II.
The resolution parameters of the POL problem are: • for the MOMA-Plus method:
We find that NSGA-II is faster than MOMA-Plus.

Study of the convergence metric
The convergence metric we use here is defined by the following formula [9]:

γ = (1/N) Σ_{i=1}^{N} d_i,

where N is the number of solutions obtained by the MOMA-Plus or NSGA-II method (the table below gives the value of N for each method on each problem) and d_i is the Euclidean distance between the obtained solution i and the nearest point of the analytical front. This metric measures the performance of a method, in particular its ability to converge towards the analytical Pareto-optimal front: a high-performance, effective method is one whose γ value is close to zero. Note that the computation of this metric involves two fronts: the front given by the method under study and the analytical Pareto-optimal front. The results of the computation of the convergence metric are recorded in the table below. With regard to these results, we notice that the values of the convergence metric provided by the two methods are all close to zero, which indicates that the MOMA-Plus method is effective. In addition, the MOMA-Plus method approaches the Pareto-optimal solutions better than the NSGA-II method on problems (SCH) and (PLN3).
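The convergence metric can be sketched as follows; the two small point sets are illustrative toy data, not results from the paper:

```python
import math

# Sketch of the convergence metric gamma = (1/N) * sum_i d_i, where d_i
# is the Euclidean distance from obtained solution i to the nearest
# point of the analytical Pareto front.

def gamma(obtained, analytical):
    """Mean nearest-neighbour distance from `obtained` to `analytical`."""
    total = sum(min(math.dist(p, q) for q in analytical) for p in obtained)
    return total / len(obtained)

# Sampled analytical front of a toy problem: (t, (t - 1)^2), t in [0, 1].
analytical = [(t / 100.0, (t / 100.0 - 1.0) ** 2) for t in range(101)]
obtained = [(0.0, 1.0), (0.5, 0.26), (1.0, 0.0)]  # points near the front

print(gamma(obtained, analytical))  # close to zero for a good method
```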

Study of the diversity metric
The diversity metric we use here is defined by the following formula [9]:

Δ = (d_f + d_l + Σ_{i=1}^{N−1} |d_i − d̄|) / (d_f + d_l + (N − 1) d̄),

where d_i is the Euclidean distance between two consecutive solutions on the front provided by the method, d̄ is the average of these distances, and d_f and d_l are the distances between the extreme solutions of the obtained front and the extreme solutions of the analytical Pareto front. The diversity metric measures the distribution of the solutions along the Pareto front; a good distribution is one whose Δ value is close to zero, or even equal to zero. The computed diversity metrics of the MOMA-Plus and NSGA-II methods are recorded in the following table. Here we also see that the MOMA-Plus method is better than the NSGA-II method on problems (PLN1) and (PLN3).
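Deb's diversity metric can be sketched likewise; the front points and extremes below are toy data, not results from the paper:

```python
import math

# Sketch of Deb's diversity metric
#   Delta = (d_f + d_l + sum_i |d_i - d_mean|) / (d_f + d_l + (N-1)*d_mean),
# where the d_i are the gaps between consecutive front points and d_f,
# d_l are the distances from the obtained extremes to the analytical
# extremes.  A value near zero indicates a well-spread front.

def delta(front, extremes):
    front = sorted(front)
    gaps = [math.dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_mean = sum(gaps) / len(gaps)
    d_f = math.dist(extremes[0], front[0])
    d_l = math.dist(extremes[1], front[-1])
    num = d_f + d_l + sum(abs(g - d_mean) for g in gaps)
    den = d_f + d_l + len(gaps) * d_mean
    return num / den

front = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
extremes = [(0.0, 1.0), (1.0, 0.0)]  # analytical extreme points

print(delta(front, extremes))  # near zero: the spread is perfectly uniform
```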

Particular Cases
The POL, VNT and KUR problems have discontinuous fronts and no analytical Pareto front, which makes the study of convergence and diversity difficult; a direct comparative study is therefore not possible. Nevertheless, convergence and diversity have been studied by combining the two fronts given by MOMA-Plus and NSGA-II, which yields the table below. With regard to the obtained results, we can see that the solutions provided by the MOMA-Plus and NSGA-II methods are very close.

Results analysis
Both methods provided the Pareto optimal solutions in little time, with a good distribution and diversity on the used test problems. On some performance metrics and some test problems, MOMA-Plus is better than NSGA-II. MOMA-Plus can therefore be considered a good choice for solving multiobjective optimization problems.

Conclusion
The results of this study of the MOMA-Plus method are satisfactory in view of the comparison made with the NSGA-II method. The MOMA-Plus method can therefore be counted among the reference metaheuristics, both for its ability to solve various types of multiobjective problems and for its ability to converge quickly towards the optimal solutions. Nevertheless, improvements to the performance of MOMA-Plus, in particular its execution time, remain desirable.