Optimizing ZDT1 (n=30) multi-objective problem using Genetic Algorithm - A MATLAB tutorial


In this tutorial, I show an implementation of the ZDT1 multi-objective test problem and optimize it using the built-in multi-objective Genetic Algorithm in MATLAB. The objective function is a standard test function that helps beginners grasp the basic concepts of optimization in MATLAB more easily. The objective (fitness) function takes one vector input of n = 30 variables and returns two outputs (the objective values). I write two separate functions: one for the fitness function and one for the main algorithm. I plot the Pareto front, which illustrates the obtained solutions clearly, and we try different settings of the algorithm using the 'optimoptions' function.
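The exact code from the video is not reproduced in this description, but the workflow it outlines — a ZDT1 fitness function plus a driver that calls MATLAB's built-in multi-objective GA (`gamultiobj`) with settings from `optimoptions` — can be sketched roughly as below. The population size, generation count, and file name are illustrative assumptions, not necessarily the video's settings; the video also uses two separate files, while this sketch uses a local function (R2016b+) to stay self-contained.

```matlab
% zdt1_demo.m — optimize the ZDT1 test problem with gamultiobj.
nvars = 30;
lb = zeros(1, nvars);                    % all variables bounded to [0, 1]
ub = ones(1, nvars);

% Algorithm settings via optimoptions (values here are assumptions).
opts = optimoptions('gamultiobj', ...
    'PopulationSize', 100, ...
    'MaxGenerations', 300, ...
    'PlotFcn', @gaplotpareto);           % live Pareto-front plot

% No linear or nonlinear constraints, only bound constraints.
[x, fval] = gamultiobj(@zdt1, nvars, [], [], [], [], lb, ub, opts);

% Plot the obtained Pareto front.
plot(fval(:, 1), fval(:, 2), 'bo');
xlabel('f_1'); ylabel('f_2');
title('ZDT1: approximated Pareto front');

% ZDT1 fitness function: vector input x (n = 30), two objective outputs.
function f = zdt1(x)
    n  = numel(x);
    f1 = x(1);                           % first objective
    g  = 1 + 9 * sum(x(2:n)) / (n - 1);  % auxiliary function g(x)
    f2 = g * (1 - sqrt(f1 / g));         % second objective
    f  = [f1, f2];
end
```

The true Pareto front of ZDT1 is the curve f2 = 1 - sqrt(f1) with f1 in [0, 1], attained when x(2:n) are all zero, which is what the plotted points should approach as the GA converges.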

Optimizing the multi-objective ZDT1 test problem using Genetic Algorithm:
A simple optimization using Genetic Algorithm:
A simple constrained optimization using Genetic Algorithm:
A simple multi-objective optimization using Genetic Algorithm:
A mixed-integer optimization using Linear Programming:
A simple single-objective optimization using Particle Swarm Optimization Algorithm:
A simple single-objective optimization using Pattern Search:
Comments

Can u upload an example where a well-trained ANN (Artificial Neural Network) is embedded in the GA (Genetic Algorithm) program as the fitness function?

punitpadhy

Great video (again); thanks a lot for sharing.

Observing the fit of the GA output to the ideal curve, there seems to be some clearly visible 'cyclicity' in the fit/tolerance, i.e. periods of relatively worse fit vs. periods of excellent fit. Is this due to the random 'mutations' that take place at each generation (mutations which always seem to give a worse fit than their parents)?
- If so, it seems that all mutations take place at once, so the total error at that point in time actually increases during that episode and then 'resettles' back to a better fit, as the mutations causing the worse fit are discarded due to worse overall fit than their parents?
Is that what is happening, or what else causes this 'error cyclicity' in the fit?
Thanks for any help in understanding.
(Watch it again during the 30000 iterations part and you will see what I mean).
My understanding of GA is limited, but from this output alone, wouldn't it be better to have mutations:
A. occurring continuously and randomly across a set of different families (not all together; not linear/sequential generations of a single expanding family), and/or
B. only mutate those solutions/'children' that have the highest relative error? (Leave alone the ones with the closest fit at that point in time; e.g. mutate only the worst-fitting 25 or 50%, to save computation cost.)

CtoC