TY - GEN
T1 - Solving Dynamic Multi-objective Optimization Problems Using Incremental Support Vector Machine
AU - Hu, Weizhen
AU - Jiang, Min
AU - Gao, Xing
AU - Tan, Kay Chen
AU - Cheung, Yiu Ming
N1 - Funding Information:
This work was supported by the National Natural Science Foundation of China (Grant No. 61673328) and the Shenzhen Scientific Research and Development Funding Program (Grant No. JCYJ20180307123637294).
PY - 2019/6
Y1 - 2019/6
N2 - The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the optimization objective functions change over time or with the environment. One of the promising approaches for solving DMOPs is to reuse the obtained Pareto optimal set (POS) to train prediction models via machine learning approaches. In this paper, we train an Incremental Support Vector Machine (ISVM) classifier with the past POS, and the candidate solutions of the DMOP to be solved at the next moment are then filtered through the trained ISVM classifier. The ISVM classifier generates a high-quality initial population, from which a variety of population-based dynamic multi-objective optimization algorithms can benefit. To verify this idea, we incorporate the proposed approach into three evolutionary algorithms: multi-objective particle swarm optimization (MOPSO), the Nondominated Sorting Genetic Algorithm II (NSGA-II), and the Regularity Model-based multi-objective estimation of distribution algorithm (RE-MEDA). We test these algorithms experimentally, and the results demonstrate the effectiveness of the proposed approach.
AB - The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the optimization objective functions change over time or with the environment. One of the promising approaches for solving DMOPs is to reuse the obtained Pareto optimal set (POS) to train prediction models via machine learning approaches. In this paper, we train an Incremental Support Vector Machine (ISVM) classifier with the past POS, and the candidate solutions of the DMOP to be solved at the next moment are then filtered through the trained ISVM classifier. The ISVM classifier generates a high-quality initial population, from which a variety of population-based dynamic multi-objective optimization algorithms can benefit. To verify this idea, we incorporate the proposed approach into three evolutionary algorithms: multi-objective particle swarm optimization (MOPSO), the Nondominated Sorting Genetic Algorithm II (NSGA-II), and the Regularity Model-based multi-objective estimation of distribution algorithm (RE-MEDA). We test these algorithms experimentally, and the results demonstrate the effectiveness of the proposed approach.
KW - Dynamic Multi-objective Optimization Problems
KW - Incremental Support Vector Machine
KW - Pareto Optimal Set
UR - http://www.scopus.com/inward/record.url?scp=85071288953&partnerID=8YFLogxK
U2 - 10.1109/CEC.2019.8790005
DO - 10.1109/CEC.2019.8790005
M3 - Conference proceeding
AN - SCOPUS:85071288953
T3 - 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
SP - 2794
EP - 2799
BT - 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
PB - IEEE
T2 - 2019 IEEE Congress on Evolutionary Computation, CEC 2019
Y2 - 10 June 2019 through 13 June 2019
ER -