
Page 1

Prof. Lucas de Souza Batista - DEE/EE/UFMG

Multiobjective Optimization

Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D)

Page 2

Introduction

MOEA/D decomposes a multiobjective optimization problem into N scalar optimization subproblems;

These subproblems are optimized simultaneously by evolving a population of solutions;

At each generation, the population is composed of the best solutions found for each subproblem;

Each subproblem is optimized using information only from its neighboring subproblems;

It has low computational complexity compared with popular methods such as NSGA-II and MOGLS;

Page 3

Introduction

MOEA/D provides a simple and efficient way to introduce decomposition techniques into MOEAs;

It simplifies fitness assignment and diversity maintenance (compared with other MOEAs);

It allows objective normalization techniques to be included, so as to handle objectives with different scales;

It makes it possible to optimize the subproblems with scalar optimization methods, which is a major advantage over traditional MOEAs;

Page 4

Multiobjective Optimization Problem

712 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 11, NO. 6, DECEMBER 2007

MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition

Qingfu Zhang, Senior Member, IEEE, and Hui Li

Abstract—Decomposition is a basic strategy in traditional multiobjective optimization. However, it has not yet been widely used in multiobjective evolutionary optimization. This paper proposes a multiobjective evolutionary algorithm based on decomposition (MOEA/D). It decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously. Each subproblem is optimized by only using information from its several neighboring subproblems, which makes MOEA/D have lower computational complexity at each generation than MOGLS and nondominated sorting genetic algorithm II (NSGA-II). Experimental results have demonstrated that MOEA/D with simple decomposition methods outperforms or performs similarly to MOGLS and NSGA-II on multiobjective 0–1 knapsack problems and continuous multiobjective optimization problems. It has been shown that MOEA/D using objective normalization can deal with disparately-scaled objectives, and MOEA/D with an advanced decomposition method can generate a set of very evenly distributed solutions for 3-objective test instances. The ability of MOEA/D with small population, the scalability and sensitivity of MOEA/D have also been experimentally investigated in this paper.

Index Terms—Computational complexity, decomposition, evolutionary algorithm, multiobjective optimization, Pareto optimality.

I. INTRODUCTION

A multiobjective optimization problem (MOP) can be stated as follows:

maximize F(x) = (f_1(x), ..., f_m(x))^T
subject to x ∈ Ω                                                        (1)

where Ω is the decision (variable) space, F: Ω → R^m consists of m real-valued objective functions, and R^m is called the objective space. The attainable objective set is defined as the set {F(x) | x ∈ Ω}.

If x ∈ R^n, all the objectives are continuous and Ω is described by

Ω = {x ∈ R^n | h_j(x) ≤ 0, j = 1, ..., p}

where h_j are continuous functions, we call (1) a continuous MOP.

Very often, since the objectives in (1) contradict each other, no point in Ω maximizes all the objectives simultaneously. One has to balance them. The best tradeoffs among the objectives can be defined in terms of Pareto optimality.

Manuscript received September 4, 2006; revised November 9, 2006. The authors are with the Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, U.K. (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TEVC.2007.892759

Let u, v ∈ R^m; u is said to dominate v if and only if u_i ≥ v_i for every i ∈ {1, ..., m} and u_j > v_j for at least one index j ∈ {1, ..., m}.¹ A point x* ∈ Ω is Pareto optimal to (1) if there is no point x ∈ Ω such that F(x) dominates F(x*). F(x*) is then called a Pareto optimal (objective) vector. In other words, any improvement in a Pareto optimal point in one objective must lead to deterioration in at least one other objective. The set of all the Pareto optimal points is called the Pareto set (PS) and the set of all the Pareto optimal objective vectors is the Pareto front (PF) [1].
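As a small aside, the dominance relation just defined (in the paper's maximization convention) can be checked directly; a minimal Python sketch, with function and variable names of my own choosing rather than from the paper:

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u dominates v under maximization:
    u_i >= v_i for every i, and u_j > v_j for at least one index j."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return bool(np.all(u >= v) and np.any(u > v))

# Example: (3, 2) dominates (2, 2); (3, 1) and (1, 3) are mutually nondominated.
assert dominates([3, 2], [2, 2])
assert not dominates([3, 1], [1, 3]) and not dominates([1, 3], [3, 1])
```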

In many real-life applications of multiobjective optimization, an approximation to the PF is required by a decision maker for selecting a final preferred solution. Most MOPs may have many or even infinite Pareto optimal vectors. It is very time-consuming, if not impossible, to obtain the complete PF. On the other hand, the decision maker may not be interested in having an unduly large number of Pareto optimal vectors to deal with due to overflow of information. Therefore, many multiobjective optimization algorithms are to find a manageable number of Pareto optimal vectors which are evenly distributed along the PF, and thus good representatives of the entire PF [1]–[4]. Some researchers have also made an effort to approximate the PF by using a mathematical model [5]–[8].

It is well-known that a Pareto optimal solution to a MOP, under mild conditions, could be an optimal solution of a scalar optimization problem in which the objective is an aggregation of all the f_i's. Therefore, approximation of the PF can be decomposed into a number of scalar objective optimization subproblems. This is a basic idea behind many traditional mathematical programming methods for approximating the PF. Several methods for constructing aggregation functions can be found in the literature (e.g., [1]). The most popular ones among them include the weighted sum approach and the Tchebycheff approach. Recently, the boundary intersection methods have also attracted a lot of attention [9]–[11].

There is no decomposition involved in the majority of the current state-of-the-art multiobjective evolutionary algorithms (MOEAs) [2]–[4], [12]–[19]. These algorithms treat a MOP as a whole. They do not associate each individual solution with any particular scalar optimization problem. In a scalar objective optimization problem, all the solutions can be compared based on their objective function values, and the task of a scalar objective evolutionary algorithm (EA) is often to find one single optimal solution. In MOPs, however, domination does not define a complete ordering among the solutions in the objective space and MOEAs aim at producing a number of Pareto optimal solutions as diverse as possible for representing the whole PF. Therefore, conventional selection operators, which were

¹This definition of domination is for maximization. All the inequalities should be reversed if the goal is to minimize the objectives in (1). "Dominate" means "be better than."



Page 5

Decomposition Strategies: Weighted Sum Approach


originally designed for scalar optimization, cannot be directly used in nondecomposition MOEAs. Arguably, if there is a fitness assignment scheme for assigning an individual solution a relative fitness value to reflect its utility for selection, then scalar optimization EAs can be readily extended for dealing with MOPs, although other techniques such as mating restriction [20], diversity maintaining [21], some properties of MOPs [22], and external populations [23] may also be needed for enhancing the performances of these extended algorithms. For this reason, fitness assignment has been a major issue in current MOEA research. The popular fitness assignment strategies include alternating objectives-based fitness assignment such as the vector evaluation genetic algorithm (VEGA) [24], and domination-based fitness assignment such as Pareto archived evolutionary strategy (PAES) [14], strength Pareto evolutionary algorithm II (SPEA-II) [15], and nondominated sorting genetic algorithm II (NSGA-II) [16].

The idea of decomposition has been used to a certain extent in several metaheuristics for MOPs [25]–[29]. For example, the two-phase local search (TPLS) [25] considers a set of scalar optimization problems, in which the objectives are aggregations of the objectives in the MOP under consideration; a scalar optimization algorithm is applied to these scalar optimization problems in a sequence based on aggregation coefficients, and a solution obtained in the previous problem is set as a starting point for solving the next problem, since its aggregation objective is just slightly different from that in the previous one. The multiobjective genetic local search (MOGLS) aims at simultaneous optimization of all aggregations constructed by the weighted sum approach or Tchebycheff approach [29]. At each iteration, it optimizes a randomly generated aggregation objective.

In this paper, we propose a new multiobjective evolutionary algorithm based on decomposition (MOEA/D). MOEA/D explicitly decomposes the MOP (1) into N scalar optimization subproblems. It solves these subproblems simultaneously by evolving a population of solutions. At each generation, the population is composed of the best solution found so far (i.e., since the start of the run of the algorithm) for each subproblem. The neighborhood relations among these subproblems are defined based on the distances between their aggregation coefficient vectors. The optimal solutions to two neighboring subproblems should be very similar. Each subproblem (i.e., scalar aggregation function) is optimized in MOEA/D by using information only from its neighboring subproblems. MOEA/D has the following features.

• MOEA/D provides a simple yet efficient way of introducing decomposition approaches into multiobjective evolutionary computation. A decomposition approach, often developed in the community of mathematical programming, can be readily incorporated into EAs in the framework of MOEA/D for solving MOPs.

• Since MOEA/D optimizes N scalar optimization problems rather than directly solving a MOP as a whole, issues such as fitness assignment and diversity maintenance that cause difficulties for nondecomposition MOEAs could become easier to handle in the framework of MOEA/D.

• MOEA/D has lower computational complexity at each generation than NSGA-II and MOGLS. Overall, MOEA/D outperforms, in terms of solution quality, MOGLS on 0–1 multiobjective knapsack test instances when both algorithms use the same decomposition approach. MOEA/D with the Tchebycheff decomposition approach performs similarly to NSGA-II on a set of continuous MOP test instances. MOEA/D with an advanced decomposition approach performs much better than NSGA-II on 3-objective continuous test instances. MOEA/D using a small population is able to produce a small number of very evenly distributed solutions.

• Objective normalization techniques can be incorporated into MOEA/D for dealing with disparately scaled objectives.

• It is very natural to use scalar optimization methods in MOEA/D since each solution is associated with a scalar optimization problem. In contrast, one of the major shortcomings of nondecomposition MOEAs is that there is no easy way for them to take advantage of scalar optimization methods.

This paper is organized as follows. Section II introduces three decomposition approaches for MOPs. Section III presents MOEA/D. Sections IV and V compare MOEA/D with MOGLS and NSGA-II and show that MOEA/D outperforms or performs similarly to MOGLS and NSGA-II. Section VI presents more experimental studies on MOEA/D. Section VII concludes this paper.

II. DECOMPOSITION OF MULTIOBJECTIVE OPTIMIZATION

There are several approaches for converting the problem of approximation of the PF into a number of scalar optimization problems. In the following, we introduce three approaches, which are used in our experimental studies.

A. Weighted Sum Approach [1]

This approach considers a convex combination of the different objectives. Let λ = (λ_1, ..., λ_m)^T be a weight vector, i.e., λ_i ≥ 0 for all i = 1, ..., m and Σ_{i=1}^m λ_i = 1. Then, the optimal solution to the following scalar optimization problem:

maximize g^ws(x | λ) = Σ_{i=1}^m λ_i f_i(x)
subject to x ∈ Ω                                                        (2)

is a Pareto optimal point to (1),² where we use g^ws(x | λ) to emphasize that λ is a coefficient vector in this objective function, while x is the variable to be optimized. To generate a set of different Pareto optimal vectors, one can use different weight vectors λ in the above scalar optimization problem. If the PF is concave (convex in the case of minimization), this approach could work well. However, not every Pareto optimal vector can be obtained by this approach in the case of nonconcave PFs. To overcome these shortcomings, some effort has been made to incorporate other techniques such as the ε-constraint into this approach; more details can be found in [1].
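A minimal sketch of the weighted sum scalarization of (2), in the same maximization convention; the function name g_ws and the NumPy-based signature are illustrative choices, not part of the paper:

```python
import numpy as np

def g_ws(f_x, lam):
    """Weighted sum aggregation g^ws(x | lambda) = sum_i lambda_i * f_i(x).
    f_x is the objective vector F(x); lam has nonnegative entries summing to 1.
    Larger values are better under maximization."""
    f_x, lam = np.asarray(f_x, dtype=float), np.asarray(lam, dtype=float)
    return float(np.dot(lam, f_x))
```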

B. Tchebycheff Approach [1]

In this approach, the scalar optimization problem is in the form

minimize g^te(x | λ, z*) = max_{1≤i≤m} { λ_i |f_i(x) − z_i*| }
subject to x ∈ Ω                                                        (3)

where z* = (z_1*, ..., z_m*)^T is the reference point, i.e., z_i* = max{ f_i(x) | x ∈ Ω }³ for each i = 1, ..., m.

²If (1) is for minimization, "maximize" in (2) should be changed to "minimize."

³In the case when the goal of (1) is minimization, z_i* = min{ f_i(x) | x ∈ Ω }.
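A matching sketch of the Tchebycheff aggregation of (3), to be minimized; again the names are illustrative, and z_star stands for the reference point z* built from the best value seen for each objective:

```python
import numpy as np

def g_te(f_x, lam, z_star):
    """Tchebycheff aggregation g^te(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|.
    Smaller values are better; z_star is the reference point."""
    f_x, lam, z_star = (np.asarray(a, dtype=float) for a in (f_x, lam, z_star))
    return float(np.max(lam * np.abs(f_x - z_star)))
```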


Page 6

Decomposition Strategies: Tchebycheff Approach


Page 7

Decomposition Strategies: Tchebycheff Approach

Y. Qi et al.

Figure 2: Contour lines of a scalar subproblem and the geometric relationship between its weight vector and optimal solution under the Chebyshev decomposition scheme.

Figure 3: The geometric relationship between the weight vectors and their corresponding solution mapping vectors for tri-objective optimization problems.

For clarity, Figure 2 illustrates the contour lines of a scalar subproblem taking a bi-objective minimization problem as an example. In Figure 2, we combine the objective space and the weight space. The weight vector λ is in the weight space and its solution mapping vector λ′ is in the objective space. It can be seen from Figure 2(a) that the optimal solution of the scalar subproblem with weight vector λ is located on the lowest contour line, which is also the intersection point between λ′ and the target PF. However, when the target PF is discontinuous, as shown in Figure 2(b), there might be no intersection point between λ′ and the target PF. In this situation, the optimal solution is the point on the target PF that is located on the lowest contour line of the subproblem with weight vector λ.

For bi-objective optimization problems, the uniformity of the weight vectors and that of the corresponding solution mapping vectors are consistent with each other. However, for tri-objective optimization problems, the situation becomes different. Figure 3 illustrates the geometric relationship between the weight vectors and their corresponding solution mapping vectors in three-dimensional objective space. It can be seen that a


Page 8

Decomposition Strategies: Penalty-based Boundary Intersection (PBI) Approach


For each Pareto optimal point x* there exists a weight vector λ such that x* is the optimal solution of (3), and each optimal solution of (3) is a Pareto optimal solution of (1). Therefore, one is able to obtain different Pareto optimal solutions by altering the weight vector. One weakness with this approach is that its aggregation function is not smooth for a continuous MOP. However, it can still be used in the EA framework proposed in this paper, since our algorithm does not need to compute the derivative of the aggregation function.

C. Boundary Intersection (BI) Approach

Several recent MOP decomposition methods such as the Normal-Boundary Intersection method [9] and the Normalized Normal Constraint method [10] can be classified as BI approaches. They were designed for a continuous MOP. Under some regularity conditions, the PF of a continuous MOP is part of the most top right⁴ boundary of its attainable objective set. Geometrically, these BI approaches aim to find intersection points of the most top boundary and a set of lines. If these lines are evenly distributed in a sense, one can expect that the resultant intersection points provide a good approximation to the whole PF. These approaches are able to deal with nonconcave PFs. In this paper, we use a set of lines emanating from the reference point. Mathematically, we consider the following scalar optimization subproblem⁵:

minimize g^bi(x | λ, z*) = d
subject to z* − F(x) = d λ, x ∈ Ω                                       (4)

where λ and z*, as in the above subsection, are a weight vector and the reference point, respectively. As illustrated in Fig. 1, the constraint z* − F(x) = d λ ensures that F(x) is always in L, the line with direction λ and passing through z*. The goal is to push F(x) as high as possible so that it reaches the boundary of the attainable objective set.

One of the drawbacks of the above approach is that it has to handle the equality constraint. In our implementation, we use a penalty method to deal with the constraint. More precisely, we consider⁶:

minimize g^pbi(x | λ, z*) = d_1 + θ d_2
subject to x ∈ Ω                                                        (5)

where

d_1 = ∥(z* − F(x))^T λ∥ / ∥λ∥ and d_2 = ∥F(x) − (z* − d_1 λ / ∥λ∥)∥

and θ > 0 is a preset penalty parameter. Let y be the projection of F(x) on the line L. As shown in Fig. 2, d_1 is the distance between z* and y; d_2 is the distance between F(x) and y. If θ is set appropriately, the solutions to (4) and (5) should be very close. Hereafter, we call this method the penalty-based boundary intersection (PBI) approach.

The advantages of the PBI approach (or general BI approaches) over the Tchebycheff approach are as follows.

⁴In the case of minimization, it will be part of the most left bottom.

⁵In the case when the goal of (1) is minimization, the equality constraint in this subproblem should be changed to F(x) − z* = d λ.

⁶In the case when the goal of (1) is minimization, d_1 = ∥(F(x) − z*)^T λ∥ / ∥λ∥ and d_2 = ∥F(x) − (z* + d_1 λ / ∥λ∥)∥.

Fig. 1. Illustration of boundary intersection approach.

Fig. 2. Illustration of the penalty-based boundary intersection approach.

• In the case of more than two objectives, let both the PBI approach and the Tchebycheff approach use the same set of evenly distributed weight vectors; the resultant optimal solutions in the PBI approach should be much more uniformly distributed than those obtained by the Tchebycheff approach [9], particularly when the number of weight vectors is not large.

• If x dominates y, it is still possible that g^te(x | λ, z*) = g^te(y | λ, z*), while this is rare for g^pbi and other BI aggregation functions.

However, these benefits come with a price. One has to set the value of the penalty factor. It is well-known that a too large or too small penalty factor will worsen the performance of a penalty method.
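A sketch of the PBI aggregation, written here in the minimization form with an ideal reference point (the same form used in the thesis excerpt on the following slide: d1 along the direction λ from the reference point, d2 the perpendicular deviation, g = d1 + θ d2); function and argument names are my own:

```python
import numpy as np

def g_pbi(f_x, lam, z_ref, theta=5.0):
    """Penalty-based boundary intersection, minimization form:
    d1 measures progress along the direction lam from the reference point z_ref,
    d2 measures the perpendicular deviation from that line; g = d1 + theta * d2."""
    f_x, lam, z_ref = (np.asarray(a, dtype=float) for a in (f_x, lam, z_ref))
    lam_unit = lam / np.linalg.norm(lam)
    d1 = abs(np.dot(f_x - z_ref, lam_unit))
    d2 = np.linalg.norm(f_x - (z_ref + d1 * lam_unit))
    return d1 + theta * d2
```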

The above approaches can be used to decompose the approximation of the PF into a number of scalar optimization problems. A reasonably large number of evenly distributed weight vectors usually leads to a set of Pareto optimal vectors, which may not be evenly spread but could approximate the PF very well.

There are many other decomposition approaches in the literature that could also be used in our algorithm framework. Since


Page 9

Decomposition Strategies: Penalty-based Boundary Intersection (PBI) Approach


d_1 = |(f(x) − z)^T λ| / ∥λ∥

d_2 = ∥f(x) − (z + d_1 λ)∥,  with λ = λ/∥λ∥

where θ_pbi ≥ 0 is a user-defined penalty parameter. Fig. 2a illustrates the distances d_1 and d_2 for an objective vector f(x) and a weight vector λ. Note that d_1 measures the convergence of f(x) toward the Pareto front f(PS), whereas d_2 provides a way to control the diversity of the solutions. This aggregation function therefore makes it possible to impose a compromise between convergence and diversity of the obtained solutions through an a priori setting of θ_pbi, which directly influences the performance of the approach.

(a) PBI. (b) Inverted PBI.

Figure 2 – Illustration of the distances d_1 and d_2 for an arbitrary reference vector λ and an objective vector f(x).

2.4.2.4 Inverted PBI (iPBI)

In the inverted PBI function, in contrast to the conventional PBI strategy, the search evolves from a nadir reference point z, with z_i ≥ max{ f_i(x) | x ∈ PS } for all i = 1, ..., nf, by maximizing the scalar optimization problem defined as (SATO, 2014):

max f^ipbi(x | λ, z) = d_1 − θ_ipbi d_2
subject to: x ∈ Ω                                                       (2.13)

d_1 = |(f(x) − z)^T λ| / ∥λ∥

d_2 = ∥f(x) − (z − d_1 λ)∥,  with λ = λ/∥λ∥

which can be rewritten as a minimization problem (d_1 and d_2 are unchanged):

min f^ipbi(x | λ, z) = θ_ipbi d_2 − d_1
subject to: x ∈ Ω                                                       (2.14)
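For comparison with the PBI sketch above, a minimal sketch of the inverted PBI of (2.13), which measures d_1 and d_2 from the nadir point and is maximized; names are illustrative:

```python
import numpy as np

def g_ipbi(f_x, lam, z_nad, theta=5.0):
    """Inverted PBI: distances are taken from the nadir reference point z_nad and
    the search pushes f(x) away from it along lam; returns d1 - theta * d2
    (maximize), equivalent to minimizing theta * d2 - d1 as in (2.14)."""
    f_x, lam, z_nad = (np.asarray(a, dtype=float) for a in (f_x, lam, z_nad))
    lam_unit = lam / np.linalg.norm(lam)
    d1 = abs(np.dot(f_x - z_nad, lam_unit))
    d2 = np.linalg.norm(f_x - (z_nad - d1 * lam_unit))
    return d1 - theta * d2
```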

Page 10

Decomposition Strategies: Penalty-based Boundary Intersection (PBI) Approach


[Figure omitted] Fig. 2: Illustration of contours of the PBI function with different settings of θ and w (panels plotted over f1, f2 ∈ [0, 1]).

of MOEA/D with PBI (θ = 0.1) is similar to MOEA/D with weighted sum on multiobjective knapsack problems. When θ = 1.0, the contours of PBI are the same as those of the weighted Tchebycheff. Moreover, the contours in this case have the same shape as a dominating region of a point. This implies that MOEA/D with PBI (θ = 1.0) should have a similar search behavior as MOEA/D with weighted Tchebycheff. In this paper, we set θ = 5.0 for empirical studies, in view of its reportedly promising results on tackling various continuous MOPs in the literature [41], [49]. It is worth noting that MOEA/D with weighted sum and PBI with θ = 0.1 outperform MOEA/D with weighted Tchebycheff and PBI with θ = 5.0 on multiobjective knapsack problems [48].

B. MOEA/D

As a representative of the decomposition-based methods, the basic idea of MOEA/D is to decompose a MOP into a number of single objective optimization subproblems through aggregation functions and to optimize them simultaneously. Since the optimal solution of each subproblem is proved to be Pareto-optimal to the MOP in question, the collection of optimal solutions can be treated as a good EF approximation. There are two major features of MOEA/D: one is local mating, the other is local replacement. In particular, local mating means that the mating parents are usually selected from some neighboring weight vectors. Then, an offspring solution is generated by applying crossover and mutation over these selected parents. Local replacement means that an offspring is usually compared with solutions among some neighboring weight vectors. However, as discussed in [60], with a certain probability, it is helpful for the population diversity to conduct mating and replacement from all weight vectors. A parent solution can be replaced by an offspring only when it has a better aggregation function value.

C. NSGA-III

It is a recently proposed algorithm for many-objective optimization. Similar to the idea of weight vector generation in MOEA/D, NSGA-III specifies a set of reference points that evenly spread over the objective space. In each generation, the objective function values of each solution are normalized to [0, 1]. Each solution is associated with a reference point

Algorithm 1: General framework of MOEA/DD
Output: population P
1  [P, W, E] ← INITIALIZATION()  // P is the parent population, W is the weight vector set and E is the neighborhood index set
2  while termination criterion is not fulfilled do
3      for i ← 1 to N do
4          P̄ ← MATING SELECTION(E(i), P)
5          S ← VARIATION(P̄)
6          foreach xc ∈ S do  // xc is an offspring
7              P ← UPDATE POPULATION(P, xc)
8          end
9      end
10 end
11 return P

based on its perpendicular distance to the reference line. After the generation of a population of offspring solutions, they are combined with their parents to form a hybrid population. Then, a non-dominated sorting procedure is applied to divide the hybrid population into several non-domination levels. Solutions in the first level have the highest priority to be selected as the next parents, and so forth. Solutions in the last acceptable level are selected based on a niche-preservation operator, where solutions associated with a less crowded reference line have a larger chance to be selected.

III. PROPOSED ALGORITHM: MOEA/DD

A. Framework of Proposed Algorithm

Algorithm 1 presents the general framework of MOEA/DD. First, the initialization procedure generates N initial solutions and N weight vectors. Within the main while-loop, in case the termination criteria are not met, for each weight vector, the mating selection procedure chooses some parents for offspring generation. Then, the offspring is used to update the parent population according to some elite-preserving mechanism. In the following paragraphs, the implementation details of each component in MOEA/DD will be explained step by step.

Page 11

Aggregation Vectors: Simplex-Lattice Design

where H is a positive integer (defined by the user);

The number of vectors (equal to the population size) is given by:


• Definition of a set of scalar optimization subproblems, i.e., reference vectors, such that a well-distributed set of candidate solutions can be determined (or approximated).

• Definition of an appropriate scalar aggregation function that allows Pareto-optimal solutions to be estimated.

• Assignment of neighborhood relations among subproblems, considering the objective space and/or the decision domain.

• Definition of an adequate selection strategy, with the aim of promoting both convergence and diversity of the estimated solutions.

• Selection of efficient evolutionary operators for exploration and refinement of the search space.

• Definition of an appropriate strategy for updating the estimated solutions, also capable of promoting both convergence and diversity of the estimated solutions.

• Normalization of the objective domain to handle performance functions with different orders of magnitude (scales).

Each of these building blocks is discussed in the following sections.

2.4.1 Aggregation Vectors

The way the aggregation vectors are generated in the MOEA/D framework is usually inspired by the theory of design and modeling of experiments with mixtures (CHAN, 2000). Among the main methods in this area, the simplex-lattice design (SCHEFFÉ, 1958) is the approach most commonly adopted in the context of MOEA/Ds. This strategy and several promising extensions are presented next.

2.4.1.1 Simplex-Lattice Design

Let λ^1, ..., λ^N be a set of weight vectors, such that λ^i = (λ^i_1, ..., λ^i_nf)^T, i = 1, ..., N, λ^i_j ≥ 0 for all j = 1, ..., nf, and Σ_{j=1}^{nf} λ^i_j = 1. In the simplex-lattice design approach (CHAN, 2000; ZHANG; LI, 2007; LI; ZHANG, 2009), each weight vector is generated such that

λ^i_j ∈ { 0/H, 1/H, 2/H, ..., H/H },  ∀ i = 1, ..., N and j = 1, ..., nf          (2.4)

where H is a positive integer (defined by the user) that establishes the number of divisions considered along each objective coordinate. The total number of weight vectors (equal to the size of the solution population) is computed as

N = C(H + nf − 1, nf − 1)                                                          (2.5)

where C denotes the combination operation. For example, for an arbitrary problem with three objectives (nf = 3), each distinct value of H produces a specific set of weight vectors, i.e.,

H = 1:  N = C(3, 2) = 3,   λ ∈ { (1,0,0), (0,1,0), (0,0,1) }
H = 2:  N = C(4, 2) = 6,   λ ∈ { (1,0,0), (0,1,0), (0,0,1), (0,1/2,1/2), (1/2,0,1/2), (1/2,1/2,0) }
H = 3:  N = C(5, 2) = 10,  λ ∈ { (1,0,0), (0,1,0), (0,0,1), (0,1/3,2/3), (1/3,0,2/3), (1/3,2/3,0),
                                 (0,2/3,1/3), (2/3,0,1/3), (2/3,1/3,0), (1/3,1/3,1/3) }           (2.6)

In this way, a general (H, nf)-simplex-lattice can be used to represent C(H + nf − 1, nf − 1) reference vectors (weight vectors) in the objective domain.
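A sketch of the simplex-lattice (Das and Dennis) generation of the N = C(H + nf − 1, nf − 1) weight vectors of (2.4)-(2.5), using a stars-and-bars enumeration; the function name is an illustrative choice:

```python
from itertools import combinations
from math import comb

def simplex_lattice(n_f, H):
    """All weight vectors whose components lie in {0, 1/H, ..., H/H} and sum to 1.
    Enumerates them via the stars-and-bars bijection and returns
    C(H + n_f - 1, n_f - 1) vectors, as in (2.5)."""
    vectors = []
    for bars in combinations(range(H + n_f - 1), n_f - 1):
        counts, prev = [], -1
        for b in bars:                      # gaps between bars give the numerators
            counts.append(b - prev - 1)
            prev = b
        counts.append(H + n_f - 2 - prev)   # trailing gap
        vectors.append(tuple(c / H for c in counts))
    assert len(vectors) == comb(H + n_f - 1, n_f - 1)
    return vectors

# Example from (2.6): n_f = 3 and H = 2 give the 6 vectors listed above.
print(simplex_lattice(3, 2))
```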

2.4.1.2 Uniform Design

The uniform design method (TAN et al., 2012) is basically carried out in two steps. First, a set of low-discrepancy vectors U = { u^i = (u_{i1}, ..., u_{i(nf−1)})^T, i = 1, ..., N } is generated. The good lattice point mechanism is frequently adopted for this purpose (TAN et al., 2012; TAN et al., 2013). After that, the coefficient vectors λ^1, ..., λ^N are obtained by transforming U, such that Σ_{j=1}^{nf} λ^i_j = 1 for all i = 1, ..., N.

Considering the first step, a candidate set of distinct positive integers is computed as:

H_N = { h_j | 1 ≤ h_j < N, gcd(N, h_j) = 1, nf − 1 < N, j = 1, ..., nf − 1 }        (2.7)

where gcd denotes the greatest common divisor operator. For any nf − 1 distinct elements of H_N, (h_1, ..., h_{nf−1}), an N × (nf − 1) matrix U = (u_{ij}), where u_{ij} = i h_j mod N, i = 1, ..., N, j = 1, ..., nf − 1, is generated. Denoting by U(N, h) a specific matrix, where h = (h_1, ..., h_{nf−1}) is a generating vector of U, let G_{N, nf−1} be the set of all matrices U(N, h). From this point, it is possible to identify the generating vector h, with U(N, h) ∈ G_{N, nf−1}, that yields the smallest discrepancy within the set G_{N, nf−1}. The centered discrepancy measure (CD2) is the most commonly used in the literature (FANG; LIN, 2003), defined as:

CD2(U) = (13/12)^{nf − 1}
         − (2/N) Σ_{i=1}^{N} Π_{j=1}^{nf−1} ( 1 + (1/2)|u_{ij} − 1/2| − (1/2)|u_{ij} − 1/2|² )
         + (1/N²) Σ_{i=1}^{N} Σ_{k=1}^{N} Π_{j=1}^{nf−1} ( 1 + (1/2)|u_{ij} − 1/2| + (1/2)|u_{kj} − 1/2| − (1/2)|u_{ij} − u_{kj}| )        (2.8)
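A direct transcription of (2.8) as a Python sketch. It assumes the entries of U have already been mapped into [0, 1] (for example by the usual (u_ij − 0.5)/N rescaling of the good-lattice-point matrix, which is an assumption on my part since the excerpt applies (2.8) to U directly); names are illustrative:

```python
import numpy as np

def cd2(U):
    """Centered discrepancy CD2 of (2.8); U is an N x (n_f - 1) array with entries
    assumed to lie in [0, 1].  Lower values indicate a more uniform design."""
    U = np.asarray(U, dtype=float)
    N, _ = U.shape
    a = np.abs(U - 0.5)
    term1 = (13.0 / 12.0) ** U.shape[1]
    term2 = (2.0 / N) * np.sum(np.prod(1.0 + 0.5 * a - 0.5 * a**2, axis=1))
    # Pairwise term: 1 + 0.5|u_ij - 0.5| + 0.5|u_kj - 0.5| - 0.5|u_ij - u_kj|
    diff = np.abs(U[:, None, :] - U[None, :, :])
    pair = 1.0 + 0.5 * a[:, None, :] + 0.5 * a[None, :, :] - 0.5 * diff
    term3 = (1.0 / N**2) * np.sum(np.prod(pair, axis=2))
    return term1 - term2 + term3
```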

Page 12

Aggregation Vectors: Simplex-Lattice Design

For example, assuming nf = 3 objectives and different values of H, one obtains the weight vector sets listed in (2.6) above.


Page 13

Aggregation Vectors: Simplex-Lattice Design

For example, assuming nf = 3 and H = 5:

Proposed Method

Generation of reference directions and assignment of neighbourhood: A structured set of reference points (β) is generated spanning a hyper-plane with unit intercepts on each objective axis. The approach generates W points on the hyper-plane with a uniform spacing of δ = 1/p for any number of objectives M. The process of generation of the reference points is illustrated for a 3-objective optimization problem, i.e., M = 3, with an assumed spacing of δ = 0.2, i.e., p = 5, in the figure below.

Fig. 4. (a) The reference points are generated by computing the β's recursively; (b) the table shows the combination of all β's in each column.


Page 14

Aggregation Vectors: Simplex-Lattice Design

For example, assuming nf = 3 and H = 4:


Algorithm 2: Initialization procedure (INITIALIZATION)
Output: parent population P, weight vector set W, neighborhood set of weight vectors E
1  Generate an initial parent population P = {x1, ..., xN} by random sampling from Ω;
2  if m < 7 then
3      Use Das and Dennis's method to generate N = C(H + m − 1, m − 1) weight vectors W = {w1, ..., wN};
4  else
5      Use Das and Dennis's method to generate N1 = C(H1 + m − 1, m − 1) weight vectors B = {b1, ..., bN1} and N2 = C(H2 + m − 1, m − 1) weight vectors I = {i1, ..., iN2}, where N1 + N2 = N;
6      for k ← 1 to N2 do
7          for j ← 1 to m do
8              i_j^k = (1 − τ)/m + τ × i_j^k;
9          end
10     end
11     W_{m×N} ← B_{m×N1} ∪ I_{m×N2};
12 end
13 for i ← 1 to N do
14     E(i) = {i1, ..., iT}, where w^{i1}, ..., w^{iT} are the T closest weight vectors to w^i;
15 end
16 Use the non-dominated sorting method to divide P into several non-domination levels F1, F2, ..., Fl;
17 Associate each member in P with a unique subregion;
18 return P, W, E

B. Initialization Procedure

The initialization procedure of MOEA/DD (line 1 of Algorithm 1), whose pseudo-code is given in Algorithm 2, contains three main aspects: the initialization of the parent population P, the identification of the non-domination level structure of P, the generation of weight vectors and the assignment of neighborhood. To be specific, the initial parent population P is randomly sampled from Ω via a uniform distribution. A set of weight vectors W = {w1, ..., wN} is generated by a systematic approach developed from Das and Dennis's method [59]. In this approach, weight vectors are sampled from a unit simplex. N = C(H + m − 1, m − 1) points, with a uniform spacing δ = 1/H, where H > 0 is the number of divisions considered along each objective coordinate, are sampled on the simplex for any number of objectives. Fig. 3 gives a simple example to illustrate Das and Dennis's weight vector generation method.

As discussed in [49], in order to have intermediate weight vectors within the simplex, we should set H ≥ m. However, in a high-dimensional objective space, we will have a large amount of weight vectors even if H = m (e.g., for a seven-objective case, H = 7 will result in C(7 + 7 − 1, 7 − 1) = 1716 weight vectors). This obviously aggravates the computational burden of an EMO algorithm. On the other hand, if we simply remedy this issue by lowering H (e.g., set H < m), it will make all weight vectors sparsely lie along the boundary of the simplex.

[Figure omitted] Fig. 3: Structured weight vector w = (w1, w2, w3)^T generation process with δ = 0.25, i.e., H = 4, in three-dimensional space [59]. C(4 + 3 − 1, 3 − 1) = 15 weight vectors are sampled from a unit simplex.

This is apparently harmful to the population diversity. To avoid such a situation, we present a two-layer weight vector generation method. At first, the sets of weight vectors in the boundary and inside layers (denoted as B = {b1, ..., bN1} and I = {i1, ..., iN2}, respectively, where N1 + N2 = N) are initialized according to Das and Dennis's method, with different H settings. Then, the coordinates of weight vectors in the inside layer are shrunk by a coordinate transformation. Specifically, for a weight vector in the inside layer i^k = (i^k_1, ..., i^k_m)^T, k ∈ {1, ..., N2}, its j-th component is re-evaluated as:

i^k_j = (1 − τ)/m + τ × i^k_j                                                       (5)

where j ∈ {1, ..., m} and τ ∈ [0, 1] is a shrinkage factor (here we set τ = 0.5 without loss of generality). At last, B and I are combined to form the final weight vector set W. Fig. 4 presents a simple example to illustrate our two-layer weight vector generation method.
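A sketch of the two-layer combination described above: the boundary and inside layers are assumed to be generated separately (e.g., with a simplex-lattice routine like the one sketched earlier, using different H settings), and the inside layer is shrunk toward the simplex center by (5); names are illustrative:

```python
import numpy as np

def two_layer_weights(boundary, inside, m, tau=0.5):
    """Combine a boundary-layer and an inside-layer set of m-dimensional weight
    vectors, shrinking each inside-layer coordinate by (5):
    w_j <- (1 - tau)/m + tau * w_j.  Returns the stacked weight set W."""
    boundary = np.asarray(boundary, dtype=float)
    inside = (1.0 - tau) / m + tau * np.asarray(inside, dtype=float)
    return np.vstack([boundary, inside])
```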

Each weight vector w^i = (w^i_1, ..., w^i_m)^T, i ∈ {1, ..., N}, specifies a unique subregion, denoted as Φ^i, in the objective space, where Φ^i is defined as:

Φ^i = { F(x) ∈ R^m | ⟨F(x), w^i⟩ ≤ ⟨F(x), w^j⟩, for all j ∈ {1, ..., N} }           (6)

where x ∈ Ω and ⟨F(x), w^j⟩ is the acute angle between F(x) and w^j. It is worth noting that a recently proposed EMO algorithm, MOEA/D-M2M [40], also employs a similar method to divide the objective space. However, its idea is in essence different from our method. To be specific, in MOEA/DD, the assignment of subregions is to facilitate the local density estimation, which will be illustrated in detail later; while in MOEA/D-M2M, different weight vectors are used to specify different subpopulations for approximating a small segment of the EF.

For each weight vector w^i, i ∈ {1, ..., N}, its neighborhood consists of the T closest weight vectors evaluated by Euclidean distances. Afterwards, solutions in P are divided

Page 15

MOEA/D Framework


our major purpose is to study the feasibility and efficiency of the algorithm framework. We only use the above three decomposition approaches in the experimental studies in this paper.

III. THE FRAMEWORK OF MULTIOBJECTIVE EVOLUTIONARY ALGORITHM BASED ON DECOMPOSITION (MOEA/D)

A. General Framework

Multiobjective evolutionary algorithm based on decomposition (MOEA/D), the algorithm proposed in this paper, needs to decompose the MOP under consideration. Any decomposition approach can serve this purpose. In the following description, we suppose that the Tchebycheff approach is employed. It is very trivial to modify the following MOEA/D when other decomposition methods are used.

Let λ^1, ..., λ^N be a set of evenly spread weight vectors and z* be the reference point. As shown in Section II, the problem of approximation of the PF of (1) can be decomposed into N scalar optimization subproblems by using the Tchebycheff approach, and the objective function of the j-th subproblem is

g^te(x | λ^j, z*) = max_{1≤i≤m} { λ^j_i |f_i(x) − z_i*| }                             (6)

where λ^j = (λ^j_1, ..., λ^j_m)^T. MOEA/D minimizes all these N objective functions simultaneously in a single run.

Note that g^te is continuous in λ, so the optimal solution of g^te(x | λ^i, z*) should be close to that of g^te(x | λ^j, z*) if λ^i and λ^j are close to each other. Therefore, any information about these g^te's with weight vectors close to λ^i should be helpful for optimizing g^te(x | λ^i, z*). This is a major motivation behind MOEA/D.

In MOEA/D, a neighborhood of weight vector λ^i is defined as a set of its several closest weight vectors in {λ^1, ..., λ^N}. The neighborhood of the i-th subproblem consists of all the subproblems with the weight vectors from the neighborhood of λ^i. The population is composed of the best solution found so far for each subproblem. Only the current solutions to its neighboring subproblems are exploited for optimizing a subproblem in MOEA/D.

At each generation t, MOEA/D with the Tchebycheff approach maintains:

• a population of N points x^1, ..., x^N ∈ Ω, where x^i is the current solution to the i-th subproblem;

• FV^1, ..., FV^N, where FV^i is the F-value of x^i, i.e., FV^i = F(x^i) for each i = 1, ..., N;

• z = (z_1, ..., z_m)^T, where z_i is the best value found so far for objective f_i;

• an external population (EP), which is used to store nondominated solutions found during the search.

The algorithm works as follows.

Input:
• MOP (1);
• a stopping criterion;
• N: the number of the subproblems considered in MOEA/D;
• a uniform spread of N weight vectors: λ^1, ..., λ^N;
• T: the number of the weight vectors in the neighborhood of each weight vector.

Output: EP.

Step 1) Initialization:

Step 1.1) Set EP = ∅.

Step 1.2) Compute the Euclidean distances between any two weight vectors and then work out the T closest weight vectors to each weight vector. For each i = 1, ..., N, set B(i) = {i_1, ..., i_T}, where λ^{i_1}, ..., λ^{i_T} are the T closest weight vectors to λ^i.

Step 1.3) Generate an initial population x^1, ..., x^N randomly or by a problem-specific method. Set FV^i = F(x^i).

Step 1.4) Initialize z = (z_1, ..., z_m)^T by a problem-specific method.

Step 2) Update:

For i = 1, ..., N, do

Step 2.1) Reproduction: Randomly select two indexes k, l from B(i), and then generate a new solution y from x^k and x^l by using genetic operators.

Step 2.2) Improvement: Apply a problem-specific repair/improvement heuristic on y to produce y′.

Step 2.3) Update of z: For each j = 1, ..., m, if z_j < f_j(y′), then set z_j = f_j(y′).

Step 2.4) Update of Neighboring Solutions: For each index j ∈ B(i), if g^te(y′ | λ^j, z) ≤ g^te(x^j | λ^j, z), then set x^j = y′ and FV^j = F(y′).

Step 2.5) Update of EP:

Remove from EP all the vectors dominated by F(y′).

Add F(y′) to EP if no vectors in EP dominate F(y′).

Step 3) Stopping Criteria: If the stopping criterion is satisfied, then stop and output EP. Otherwise, go to Step 2.

In initialization, B(i) contains the indexes of the T closest vectors of λ^i. We use the Euclidean distance to measure the closeness between any two weight vectors. Therefore, λ^i's closest vector is itself, and then i ∈ B(i). If j ∈ B(i), the j-th subproblem can be regarded as a neighbor of the i-th subproblem.

In the i-th pass of the loop in Step 2, the T neighboring subproblems of the i-th subproblem are considered. Since x^k and x^l in Step 2.1 are the current best solutions to neighbors of the i-th subproblem, their offspring y should hopefully be a good solution to the i-th subproblem. In Step 2.2, a problem-specific heuristic⁷ is used to repair y in the case when y invalidates any constraints, and/or optimize the i-th g^te. Therefore, the resultant solution y′ is feasible and very likely to have a lower g^te function value for the neighbors of the i-th subproblem. Step 2.4 considers all the neighbors of the i-th subproblem; it replaces x^j with y′ if y′ performs better than x^j with regard to the j-th subproblem. FV^j is needed in computing the value of g^te(x^j | λ^j, z) in Step 2.4.

Since it is often very time-consuming to find the exact reference point $z^*$, we use $z$, which is initialized in Step 1.4 by a problem-specific method and updated in Step 2.3, as a substitute for $z^*$ in (6).

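As a minimal illustration of this bookkeeping (a sketch under this note's own naming, not code from the paper), the running reference point $z$ is simply a componentwise best-so-far of the objective vectors evaluated during the search; the `minimize` flag switches between the update printed in Step 2.3 and its ">" variant for minimization.

```python
import numpy as np

def update_reference_point(z, f_y, minimize=True):
    """Step 2.3 bookkeeping: keep z as the best value seen so far per objective.

    minimize=True  -> z_j = min(z_j, f_j(y'))   (the ">" variant, for minimization)
    minimize=False -> z_j = max(z_j, f_j(y'))   (as printed, for maximization)
    """
    f_y = np.asarray(f_y, dtype=float)
    return np.minimum(z, f_y) if minimize else np.maximum(z, f_y)
```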




References

Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.

H. Li and Q. Zhang, "Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 284-302, 2009.