Genetic algorithms

[Image: evolution]

Back when I first became interested in doing research work, I remember wanting to learn about artificial intelligence. It seemed really cool, like science fiction cool, and maybe I could build an android. 😉 Once you get into it a bit more, you find out that artificial intelligence is often (but not always) just a way of describing things we haven’t yet taught a computer to do for us. A lot of what computers do now would have been considered “AI” years ago, but once we understand it and create an algorithm to do it, the “unknown, poorly defined intelligence magic” part goes away. It becomes just another thing computers can do for us, and maybe another branch of computer science.

Still, I always think it’s a good sign when someone starts telling me they are interested in AI. It means they want to explore an interesting and unknown concept.

Back then I started reading about genetic algorithms and neural networks. I wrote a program that used a genetic algorithm to design more effective neural networks (e.g. selecting for superior input values out of a massive dataset). It definitely wasn’t anything groundbreaking; people had been doing this years and years earlier, but I didn’t know that at the time, and the program actually worked too, which felt awesome.

From Wikipedia:

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[2]

The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual’s genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
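To make that loop concrete, here is a minimal sketch in Python of a genetic algorithm over binary strings, the traditional encoding the quote mentions. The fitness function here is the toy “count the 1s” (OneMax) objective, and all of the parameter values are assumptions chosen purely for illustration; a real application would swap in its own objective function and encoding.

import random

GENOME_LENGTH = 32     # bits per candidate solution
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01   # per-bit flip probability

def fitness(genome):
    # Toy objective ("OneMax"): count the 1s. In a real problem this
    # would be the objective function you are optimizing.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def select(population):
    # Tournament selection: fitter individuals are stochastically
    # more likely to be chosen as parents.
    a, b = random.sample(population, 2)
    return a if fitness(a) > fitness(b) else b

def crossover(parent1, parent2):
    # Single-point recombination of two parent genomes.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent1[:point] + parent2[point:]

def mutate(genome):
    # Randomly flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    if fitness(max(population, key=fitness)) == GENOME_LENGTH:
        break  # satisfactory fitness level reached
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POPULATION_SIZE)]

print("best:", max(population, key=fitness))

Every piece of the description above maps onto a function here: fitness evaluation, stochastic selection, recombination, random mutation, and the two termination conditions (maximum generations or satisfactory fitness).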

Genetic algorithms allow you to attack computationally hard problems, like for example the knapsack problem. These sorts of hard problems present a roadblock that can’t be overcome by using “standard business logic computer programming techniques” (see NP-complete); normally you need some “heavy artillery” in the form of strong domain and/or mathematics knowledge. But as long as you can test the fitness of potential solutions, you can apply a genetic algorithm to the problem, as in the sketch below. As such, genetic algorithms allow you to find solutions to these problems without domain knowledge and without using strong mathematics. They give you a taste of what it’s like to solve a hard problem – but without having to do the hard part.
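To illustrate, a fitness function for the 0/1 knapsack problem can be as simple as the Python sketch below; the item weights, values, and capacity are made-up numbers, invented here for illustration. A genome is a binary string where bit i means “item i is in the knapsack”.

# Hypothetical knapsack instance: (weight, value) pairs and a capacity,
# invented here for illustration.
ITEMS = [(12, 4), (2, 2), (1, 2), (1, 1), (4, 10)]
CAPACITY = 15

def fitness(genome):
    # genome[i] == 1 means item i goes in the knapsack.
    weight = sum(w for (w, v), bit in zip(ITEMS, genome) if bit)
    value = sum(v for (w, v), bit in zip(ITEMS, genome) if bit)
    # Infeasible solutions (over capacity) score zero.
    return value if weight <= CAPACITY else 0

Plugging this into a loop like the one sketched earlier (with GENOME_LENGTH set to len(ITEMS)) is all it takes to point the same genetic algorithm at this problem.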

While this is powerful, it’s also akin to feeling around in the dark. Knowing more about the problem domain and having a strong mathematics background can allow you to use much more sophisticated and powerful optimization techniques. The motivation to explore those techniques is much stronger after genetic algorithms have given you a taste of what it’s like to solve a hard problem. As a result, whenever someone tells me they are “interested in AI”, I direct them towards genetic algorithms as a first step.

Kevin Browne

Editor of Software Hamilton.