
Introduction To Optimization: Gradient Free Algorithms (2/2) Simulated Annealing, Nelder-Mead

A brief overview of Simulated Annealing, the Nelder-Mead method, and a survey of various metaphor-inspired and biologically inspired optimization algorithms. This video is part of an introductory series on optimization.

TRANSCRIPT:

Simulated Annealing
Annealing is a process in which metal or glass is heated and then allowed to cool slowly at a controlled rate. Annealing changes the properties of a metal, making it more ductile and workable. Simulated annealing applies this idea of slow, controlled cooling to an optimization algorithm. An initial guess is taken. Then another solution is randomly guessed near the previous solution, and the new solution is evaluated to see whether it is better. Initially, some bad solutions are accepted to allow the algorithm to explore, perhaps finding its way out of a valley towards a better solution. As time goes on, the “temperature” is reduced, and fewer bad solutions are accepted. Eventually, only solutions that are better than the previous one are accepted. It’s a bit like a ball bouncing on a vibrating surface. As the vibration decreases, the ball bounces less and less. Eventually it will come to a stop, hopefully at the minimum point.
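The loop below is a minimal Python sketch of this idea, assuming the common Metropolis acceptance rule: a worse candidate is accepted with probability exp(-Δf/T). The step size, starting temperature, and geometric cooling schedule are illustrative choices, not details from the video.

import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000):
    # Minimize f starting from x0. All parameter defaults here are
    # illustrative choices, not values prescribed in the video.
    x, fx = list(x0), f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iter):
        # Randomly guess a new candidate near the previous solution.
        x_new = [xi + random.uniform(-step, step) for xi in x]
        f_new = f(x_new)
        delta = f_new - fx
        # Always accept improvements; accept worse candidates with
        # probability exp(-delta / t), so the search can climb out of
        # valleys while the temperature is still high.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, fx = x_new, f_new
            if fx < fbest:
                best, fbest = x, fx
        # "Cool" the temperature: fewer bad moves are accepted over time.
        t *= cooling
    return best, fbest

# Example: minimize a simple bowl-shaped function.
xmin, fmin = simulated_annealing(lambda x: (x[0] - 3) ** 2 + x[1] ** 2, [0.0, 0.0])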

Nelder-Mead
The Nelder-Mead downhill simplex algorithm is another commonly used gradient free algorithm. It searches for an optimal solution using a simplex: a shape with n + 1 vertices in n dimensions, which in two dimensions is a triangle. The simplex flip-flops towards its goal, growing, shrinking, and changing its shape according to a set of rules: the worst vertex is reflected through the centroid of the others, the simplex expands when a reflection works well, and it contracts or shrinks when one does not.
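As a rough sketch of those rules, the Python loop below uses the conventional textbook coefficients (reflection 1, expansion 2, contraction and shrink 0.5). The video does not prescribe an implementation, so the initial-simplex construction and iteration count here are illustrative assumptions.

import numpy as np

def nelder_mead(f, x0, step=0.5, n_iter=200):
    n = len(x0)
    # Build an initial simplex: x0 plus one offset point per axis.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        p = simplex[0].copy()
        p[i] += step
        simplex.append(p)
    for _ in range(n_iter):
        simplex.sort(key=f)                       # best vertex first
        best, worst = simplex[0], simplex[-1]
        centroid = np.mean(simplex[:-1], axis=0)  # centroid of all but worst
        # Reflect the worst vertex through the centroid of the others.
        reflected = centroid + (centroid - worst)
        if f(best) <= f(reflected) < f(simplex[-2]):
            simplex[-1] = reflected
        elif f(reflected) < f(best):
            # Expand: the reflected direction looks good, so step further.
            expanded = centroid + 2.0 * (reflected - centroid)
            simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
        else:
            # Contract: pull the worst vertex toward the centroid.
            contracted = centroid + 0.5 * (worst - centroid)
            if f(contracted) < f(worst):
                simplex[-1] = contracted
            else:
                # Shrink the whole simplex toward the best vertex.
                simplex = [best + 0.5 * (p - best) for p in simplex]
    simplex.sort(key=f)
    return simplex[0], f(simplex[0])

In practice one would usually call a library routine such as scipy.optimize.minimize(fun, x0, method="Nelder-Mead") rather than hand-rolling this loop.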

Other Metaphor Inspired Algorithms
Along with those already mentioned, many other gradient free optimization algorithms exist. A large number of these are based on analogies to natural processes, such as:
Ant colony optimization (Dorigo 1992)
Particle swarm optimization (Kennedy & Eberhart 1995)
Harmony search (Geem, Kim & Loganathan 2001)
Artificial bee colony algorithm (Karaboga 2005)
Bees algorithm (Pham 2005)
Glowworm swarm optimization (Krishnanand & Ghose 2005)
Shuffled frog leaping algorithm (Eusuff, Lansey & Pasha 2006)
Imperialist competitive algorithm (Atashpaz-Gargari & Lucas 2007)
River formation dynamics (Rabanal, Rodríguez & Rubio 2007)
Intelligent water drops algorithm (Shah-Hosseini 2007)
Gravitational search algorithm (Rashedi, Nezamabadi-pour & Saryazdi 2009)
Cuckoo search (Yang & Deb 2009)
Bat algorithm (Yang 2010)
Flower pollination algorithm (Yang 2012)
Cuttlefish optimization algorithm (Eesa, Mohsin, Brifcani & Orman 2013)
Artificial swarm intelligence (Rosenberg 2014)
Many of these algorithms contain similar mathematical ideas concealed by differing analogies.
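To make that shared core concrete, here is a minimal particle swarm optimization sketch in Python: a population of random candidates, each pulled toward its own best-known point and toward the swarm's best-known point. The coefficients w, c1, and c2 are common textbook defaults chosen purely for illustration.

import random

def particle_swarm(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                # each particle's best position
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # best position found by the swarm
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity blends inertia, attraction to the particle's own
                # best point, and attraction to the swarm's best point.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Example: search the box [-5, 5] x [-5, 5].
xmin, fmin = particle_swarm(lambda x: (x[0] - 3) ** 2 + x[1] ** 2, [(-5, 5), (-5, 5)])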

Conclusion
In summary, a large variety of gradient free optimization methods exist, each with its own strengths and weaknesses. Some of the most commonly used methods are genetic algorithms, particle swarm optimization, simulated annealing, and the Nelder-Mead simplex algorithm.
In general, gradient free methods are easier to implement than gradient-based methods, and they have the advantage of not requiring derivatives. This means that they can be applied to problems that are discrete, discontinuous, or otherwise nondifferentiable. Many of the algorithms also do a good job of searching for a global solution rather than a local solution.
On the downside, gradient free algorithms tend to be very slow for large problems. Many of the algorithms are also stochastic, meaning that they are based on chance and will not always find the same solution. Finally, there is no guarantee that these algorithms will return an optimal solution: while the solution found might be better than what you started with, you won’t know whether it is the best solution possible.
