As the end of the year approaches, I have been thinking about what comes next in artificial intelligence. So this week I reviewed several research papers and found something that interested me: the "Genetic Algorithm (GA)". The paper (1) shows that GAs can be applied to deep reinforcement learning as an alternative way to optimize network parameters. This should be exciting for many researchers and programmers of deep learning.
According to Wikipedia, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection. Since I worked on a project using GA in Tokyo more than 10 years ago, I would like to revisit GA in the context of deep learning in 2018. To prepare for that, let me explain the major components of GA below.
- Gene: A gene is the unit we modify during optimization, and it plays a role similar to a biological gene. To adapt to the environment around them, genes are modified through the GA operations explained below.
- Generation: This is the collection of individual genes (the population) at a certain point in time. As time passes, new generations are created recurrently.
- Selection: Based on fitness values, better genes are selected to produce the next generation. There are many ways to do this, and the best gene may be retained as-is.
- Crossover: Parts of two genes are exchanged with each other. This promotes diversity in the population.
- Mutation: Part of a gene is changed into another value. Mutation and crossover are both derived from the processes of evolution in nature.
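The crossover and mutation operators above can be sketched in a few lines of Python. This is a minimal illustration, assuming genes are represented as simple bit lists; the function names and the mutation rate are my own choices, not from the paper.

```python
import random

def crossover(parent_a, parent_b):
    """One-point crossover: exchange gene segments between two parents."""
    point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

def mutate(gene, rate=0.05):
    """Flip each bit with probability `rate`."""
    return [1 - bit if random.random() < rate else bit for bit in gene]
```

For example, crossing `[0,0,0,0]` with `[1,1,1,1]` yields children that begin like one parent and end like the other, while `mutate` introduces small random changes that crossover alone cannot.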
This is an awesome image for explaining GA with ease (2). Based on fitness values, genes with higher scores are selected. During reproduction, some of the selected genes can be kept unchanged (the elite strategy), while crossover and mutation are applied to the rest to create the next generation. You can see that the fitness values in generation t+1 are higher than in generation t. This is the basic framework of GA; there are many variations in how the next generation is created.
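The whole framework, selection, elite strategy, crossover, and mutation repeated over generations, can be sketched as a short loop. This is a toy example of my own, assuming bit-list genes and a simple "count the ones" fitness (the classic OneMax problem), not the setup used in the paper.

```python
import random

def one_max(gene):
    """Toy fitness: the number of 1-bits (higher is better)."""
    return sum(gene)

def evolve(pop_size=20, gene_len=16, generations=30, elite=2,
           mut_rate=0.05, seed=42):
    rng = random.Random(seed)
    # initial generation: random bit lists
    pop = [[rng.randint(0, 1) for _ in range(gene_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)
        # elite strategy: the best genes pass through unchanged
        next_gen = [g[:] for g in pop[:elite]]
        while len(next_gen) < pop_size:
            # tournament selection: best of 3 random candidates
            pa = max(rng.sample(pop, 3), key=one_max)
            pb = max(rng.sample(pop, 3), key=one_max)
            # one-point crossover
            point = rng.randint(1, gene_len - 1)
            child = pa[:point] + pb[point:]
            # mutation: flip each bit with small probability
            child = [1 - b if rng.random() < mut_rate else b for b in child]
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))
```

Because the elite genes are copied unchanged, the best fitness never decreases from one generation to the next, which is exactly the behavior the image illustrates between generation t and t+1.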
I have put simple Python code for GA-based portfolio management on GitHub. If you are interested in GA in more detail, please have a look at it here.
Although GA has a long history, its application to deep learning is relatively new. At TOSHI STATS, an AI start-up, I continue to research how GA can be applied to deep learning so that optimization can be performed effectively. I hope to update you soon in 2018. Happy new year to everyone!
1. Deep Neuroevolution: Genetic Algorithms are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning, Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, Jeff Clune, Uber AI Labs, 18 December 2017.
2. IBA laboratory, a research laboratory of Genetic and Evolutionary Computations (GEC) of the Graduate School of Engineering, The University of Tokyo, Japan.
Notice: TOSHI STATS SDN. BHD. and I do not accept any responsibility or liability for loss or damage occasioned to any person or property through using the materials, instructions, methods, algorithms or ideas contained herein, or acting or refraining from acting as a result of such use. TOSHI STATS SDN. BHD. and I expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on TOSHI STATS SDN. BHD. and me to correct any errors or defects in the code and the software.