In this project, I’m going to simulate (on my Arduino Uno) a simple evolutionary process according to the ‘survival of the fittest’ principle. Technically, it’s a number of random six-letter words evolving towards a target word. The following story is just to support the idea.
Imagine a primitive tribe, living completely isolated somewhere deep in the jungle. Let’s say the tribe has a stable population of 100 individuals. They are still in the earliest stage of developing language: every individual can only speak one single personal six-letter ‘word’ (any combination of 6 letters).
Once every twenty years, at summer solstice, they gather for a ritual ceremony in front of an ancient statue. One by one, every member of the tribe steps forward to the statue and, as an act of worship, speaks his or her personal word.
What they don’t know is that the statue actually has a six-letter name! After centuries of nonsense words being thrown at him, he decides to start rewarding tribe members by raising their sexual attractiveness according to how close their personal word comes to his name.
After the ceremony, somewhat loosened up by fermented blueberry juice, they start working on the new generation of (exactly 100) members. Although no member of the tribe is excluded from the mating process beforehand, sexual attractiveness definitely improves the chance of passing one’s genes to the next generation. And guess what: personal six-letter words are genetically determined…
For the sake of simplicity, let’s say that our tribe members are all hermaphroditic and highly fertile. That means that 100 matings, each between any two members, will be needed to maintain a stable population. That’s because all babies will grow up healthy and all current tribe members will die within the next 20 years, i.e. before the next vicennial ceremony.
Every newborn member will inherit a mixture of its parents’ six-letter-word genes. We can play with different mixing rules in our simulation, but for now let’s assume that every letter in the baby’s word is randomly (50/50) chosen from one of its parents’ letters at the same position.
We’re almost done now, but evolution has one more trick in store: mutation. Every time a baby’s letter is taken from one of its parents, there’s a small chance of an error occurring in the copying process (one of those irritating diacritics perhaps?). In that case, nature will replace the wrong character with a random letter from the A-Z range.
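The two rules above (50/50 inheritance per letter, plus a small mutation chance) can be sketched in a few lines of C++. The function name `makeChild` and the percentage-based mutation parameter are my own choices for illustration; on the Arduino itself you’d use `random()` rather than `rand()`:

```cpp
#include <cstdlib>

const int WORD_LEN = 6;

// Build a child word: every letter is chosen 50/50 from one of the
// parents' letters at the same position; with a small chance
// (mutationPercent, in %) the copy fails and a random A-Z letter
// is substituted instead.
void makeChild(const char* p1, const char* p2, char* child, int mutationPercent) {
  for (int i = 0; i < WORD_LEN; i++) {
    child[i] = (rand() % 2 == 0) ? p1[i] : p2[i];  // 50/50 inheritance
    if (rand() % 100 < mutationPercent) {
      child[i] = 'A' + rand() % 26;                // copy error: mutation
    }
  }
  child[WORD_LEN] = '\0';
}
```

With `mutationPercent` set to 0, every letter of the child is guaranteed to come from one of its parents.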
What to expect (or hope) from this?
By rewarding ‘fit’ tribe members with a better chance of passing on their genes, our statue’s secret hope was that, on a happy day, an enlightened member would finally call him by his immortal name. Will my simulation be able to confirm his hope? Luckily for him, we’ve come across some parameters and strategies that, once the model is programmed, can be played with:
- population size
- initial assignment of six-letter-word genes (initial variation)
- reward system (linear, exponential, …)
- couple selection algorithm (how do they benefit from their reward)
- gene exchange method (equal chance or with some dominance of the fittest)
- mutation chance
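The reward system is where much of the tuning happens. As an illustration, a linear variant might simply count the letters that match the statue’s name; the base score of 1 reflects the rule that nobody is excluded from mating outright. The function name and the +1 base are my own assumptions, not necessarily what the original sketch does:

```cpp
const int WORD_LEN = 6;

// Linear reward: 1 point per letter matching the target word at the
// same position, plus a base score of 1 so that even a member with
// zero matches keeps a small chance of mating.
int reward(const char* word, const char* target) {
  int score = 1;                       // base score: no one is excluded
  for (int i = 0; i < WORD_LEN; i++) {
    if (word[i] == target[i]) score++;
  }
  return score;
}
```

An exponential reward system could instead return something like `1 << matches`, which punishes mediocrity much harder.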
The only real challenge was implementing a parent selection method that favors the ‘fittest’. Using a Fortune Wheel array (in which every member is represented proportionally to its reward) would be very memory-inefficient, as the size of that array would depend on the reward system and on the population’s overall ‘fitness’. There had to be a more economical method.
The solution was somewhat inspired by real-life experience. For every parent role, two competitors will be randomly selected from the population. Next, we randomly pick one of them, favoring the fittest. We don’t need an array for that: if their rewards are r1 and r2, we can take p=random(0,r1+r2). Now, candidate #1 wins if p<r1. If not, candidate #2 wins.
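That two-candidate tournament can be sketched as follows. Here `rewards[]` is assumed to hold each member’s reward and `popSize` the population size; this desktop version uses `rand()`, where the Arduino sketch would use `random()`:

```cpp
#include <cstdlib>

// Two-candidate tournament selection: draw two random members, then
// pick one of them with probability proportional to their rewards.
// This replaces the memory-hungry Fortune Wheel array.
int pickParent(const int* rewards, int popSize) {
  int c1 = rand() % popSize;            // candidate #1
  int c2 = rand() % popSize;            // candidate #2
  int sum = rewards[c1] + rewards[c2];
  if (sum == 0) return c1;              // guard: avoid modulo by zero
  int p = rand() % sum;                 // p in [0, r1+r2)
  return (p < rewards[c1]) ? c1 : c2;   // candidate #1 wins if p < r1
}
```

Note that candidate #1 wins with probability r1/(r1+r2), exactly the Fortune Wheel odds for a two-member wheel, but using constant memory regardless of the reward system.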
For consecutive generations, the display shows the six-letter word of the ‘fittest’ tribe member. Although the sketch from the videos still uses the Fortune Wheel method and a population size of only 55, you can see that evolution is a very robust and powerful mechanism!
Even with a small random population and a linear reward system, this simple evolution model proves to be remarkably efficient. However, applying it a second time, starting with the resulting population from a previous run that had a different target word, shows that variation is crucial.
This second run takes far longer (on average) to complete because the first run has drastically reduced variation within the population. I even suspect it would regularly run forever if it weren’t for mutation coming to the rescue.
Time to reflect on what we’ve got so far. During the experiment, I’ve tried to find the best model by tuning a number of parameters and playing with different strategies. Some settings turned out to give better results than others. Or to put it differently: some combinations ‘fit’ our goals better than others (does the f-word ring a bell?).
What if we considered these parameters to be the genes of our evolution model? The answer: we could apply our evolution model to, well, our evolution model…
[To be continued]