Inspiring Chaos

I wonder why I didn’t discover Jason Rampe’s Softology blog much earlier, as it deals with most of the math-related topics of my own blog (and many more). His expertise and visualizations, as well as his software application Visions of Chaos, live at the opposite end of the spectrum from my modest tinkering. As a result, most of his visualizations are far beyond the specs of a microcontroller and a small display, but I discovered a few mathematical models that I simply had to try right away:


  • Diffusion-limited Aggregation

“Diffusion-limited aggregation creates branched and coral like structures by the process of randomly moving particles that touch and stick to existing stationary particles.” – [Softology blog].

After coding a 2D version for my M5Stack Fire (ESP32), I was surprised to see how this simple principle generates these nice structures. The process starts with a single stationary (green) particle in the center. Then, one by one starting from a randomly chosen point at the border, every next particle takes a random walk inside the circle until it hits a stationary particle to which it sticks, becoming a stationary (green) particle itself.


  • Reaction-Diffusion model

It was Alan Turing who came up with a model to explain pattern formation in the skin of animals like zebras and tigers (morphogenesis). It’s based on the reaction of two chemical substances within cells and the diffusion of those chemicals across neighbour cells. Certain diffusion rates and combinations of other parameters can produce stable Turing Patterns.

The above (Wikipedia) picture shows the skin of a Giant pufferfish. The pictures below show some first results of my attempts with different settings on a 200×200 pixel grid.

From left to right: ‘curved stripes’ settings (C), ‘dots’ settings (D), 50% C + 50% D, 67% C + 33% D

There’s still plenty left for me to experiment with here. The algorithm that I wrote is strikingly simple and fully parameterized, but also quite heavy on the ESP32….


  • Mitchell Gravity Set Fractals

A simulation of particles, following Newton’s law of gravity. This simple example has six equal masses placed at the corners of a regular hexagon. Particles (pixels) are colored according to the time they need to cross an escape range and disappear in ‘space’.


  • Agent-based models

After coding my own variant of the Foraging Ant Colony example from the Softology blog, I’m currently working on a couple of obstacle escape mechanisms. Coming soon.



Multi-Fractal Explorer

One program to rule them all…

The obvious code similarity between most of my fractal sketches prompted me to write a generic fractal explorer, using a simplified version of my recently finished ESP32 framework for self-replicating cellular automata.

Unlike the more complicated functions for self-replicating automata, most fractal-specific functions only take a few lines of code, so rather than being a template for separate fractal sketches, it’s a single program that can browse and zoom-in on multiple fractal types (Mandelbrot, Julia, Newton etc.) by calling their corresponding fractal functions.


The video shows the ‘renormalized’ Mandelbrot fractal on a 320×240 display, with a fixed zoom factor per click. The clicked position of the fractal stays mapped to the clicked pixel. If the accumulated zoom level exceeds a threshold value, the algorithm will switch to double precision calculations for better resolution.

I may add a simple web server for switching fractal type, setting zoom factor or changing color palette over WiFi, but the real fun here was to write a generic sketch that gets maximum speed from the ESP32 microcontroller by using both cores, as well as a couple of techniques and tricks developed for earlier projects.


Strange attraction

fractals again…

If chaos theory applies to this blog, then fractals are definitely among its strange attractors… Reading an article about ‘renormalizing’ iteration escape values persuaded me to focus on colorizing fractals once more. The article presents a simple method for calculating real iteration escape values (i.e. C++ floats instead of integers).

With integer escape values, the number of available colors is limited by the algorithm’s maximum number of iterations (maxIter), whereas the escape values from the described method are real numbers in the [0, maxIter) range. So if you want to color a fractal using n*maxIter colors, you can simply multiply all escape values by n and use their integer part as an index into your color array. In exchange for just two extra iterations per pixel, fractal images will often look better, especially for small maxIter values.

The pictures show both methods, applied to the Mandelbrot fractal on a 480×320 display.

Integer escape values (maxIter = 128; 128 colors):


Renormalized float escape values (maxIter = 128; 256 colors):

[zooming-in caused some frequency interference between display and camera]


And this picture shows Julia in her new ‘renormalized’ dress:


Here’s the relevant function taken from my ‘Mandelzoom’ sketch that now uses 256 colors, although maxIter is still set to 128.


Chips from the lathe

This first post of 2020 just shows some videos of mini projects from the past 3 months.

Here’s an improvement (I hope) on my previous attempt to simulate fire on a small TFT display. I’ve added a glowing-particles effect, did some parameter fine-tuning and changed the color palette. The simulated area forms a layer over a jpg image, read from SD card.

Alas, my phone’s camera misses most of the improvements…


The next video shows the tessellation process of a 2D area according to the Majority Voting principle. In the initial state, every pixel randomly gets one out of three* possible colors. Then, in each (discrete and simultaneous) next step, every pixel takes the color of the majority of its ‘neighbours’. It’s no surprise that the chosen definition of neighborhood has a great influence on this self-organizing process and its final state.

* The classic example uses 2 colors, but I chose to have 3 for a more fashionable (?) look.


Finally, meet Perrier Loop, the most complex Self-replicating Cellular Automaton that I managed to squeeze into my cellular automata framework for ESP32 (so far). Grid cells can be in one out of 64 states (colors). Their state transitions are governed by 637 rules that use 16 variables (placeholders for up to 7 states, so the real number of rules is much larger). Each cell complies with this exact same set of rules to determine its next state, based on its current state and that of its 4 orthogonal neighbours (Von Neumann neighborhood).

In mathematical terminology, Perrier Loop is a self-replicating universal Turing machine.

And it looks so simple on this 320×240 display….

Self-replicating Cellular Automata

Studying John von Neumann’s self-reproducing universal constructor* reminded me of some unfinished business in my earlier Cellular Automata (CA) projects. Always looking for coding challenges, I wondered if I could write a sketch for simulating self-reproducing CA loops that would fit inside an ESP32.

The general idea is to have a 2-dimensional grid of cells, each of which can be in one out of n states. These cells periodically change their states simultaneously, according to a set of rules. Each cell complies with the same set of rules, which lets it calculate its new state based on its current state and that of its direct neighbours. The way this ‘organism’ will evolve also depends on its initial pattern. A self-reproducing CA is a combination of states, rules and initial pattern that will continuously produce copies of the initial pattern, that will produce more copies of the initial pattern… etc. etc.

I guessed that Von Neumann’s universal automaton, with its 29 cell states and large set of rules, could be well out of the ESP32’s league. Fortunately, there are some famous (albeit less universal) simplifications, so I started with the simplest one I could find: Chou-Reggia-2.

ESP32 WROVER simulating Chou-Reggia-2 on a 320×240 display (1-pixel per cell)

The video shows the indefatigable CA at work, producing ever more copies of the initial pattern (just 5 cells). It mimics a much sought-after form of life: old cells don’t die…

They’re just like kids, they grow up so fast 😉


In order to squeeze my Chou-Reggia-2 algorithm into the ESP32, I had to come up with some new programming and storage techniques for cellular automata. I ended up writing a framework for CA sketches that can import start patterns and rules from Golly. It runs on ESP32 WROVER modules, using PSRAM for storing a double display/states buffer. The two cores of the ESP32 cooperatively calculate a new grid state based on the current one.

As a proof of concept for the new framework, I initialised it with the specific data for Langton’s Loop, an automaton similar to Chou-Reggia-2, but with more rules. Everything worked fine, as did two slightly more complicated automata that have variables in their Golly rules, as well as substantially more states: Evoloop-finite and Perrier Loop.

Now that I have it up and running, my CA framework for ESP32 WROVER modules reduces simulation of most 2D cellular automata to feeding it with their specific data: states, transition rules (with or without variables), neighborhood (Moore or Von Neumann) and rule symmetry, all imported from Golly.

[ After further optimization, my CA sketches run 5x faster than what is shown in the videos! ]

Close up of Langton’s Loop on an ESP32-driven 320×240 TFT display (slow version)

Because every ‘cell’ is represented by a single pixel on a small 2.4″ display, I had to zoom in on the pattern in order to show the difference between ‘living’ and ‘dead’ cells (Langton’s ‘organism’ grows like coral: inner cells die, but their cell-walls are preserved).

(See this post for an implementation of Perrier Loop inside the same framework)

* Von Neumann’s abstract and purely logical reasoning about self-replication eventually helped biologists to understand how (real) cells manage to make exact copies of themselves.



Quest for Fire

By no means a gamer myself, I’ve always been interested in the math behind the graphics in video games. A previous post showed how simple clouds can be produced by the Diamond-Square algorithm, running on an esp8266. That algorithm can generate maps and landscapes as well.

Another visual effect on my todo list was algorithmically simulated fire.


The video shows my first attempt on a small TFT display. I had to write a relatively simple algorithm, because it needs to run on a microcontroller. Some further experimenting with the color palette, weighting factors and randomization, as well as adding Perlin noise, will hopefully result in a more realistic fire. I also plan to add a rotary encoder for regulating the ‘fire’.

[Update: a newer version is here]

Busy Beaver

Today I discovered that Langton’s Ant is not the only Turing machine that’s nicknamed after an animal. There’s also the Busy Beaver [sic], invented by Tibor Radó. Despite its chaotic behaviour and unpredictable* outcome, the machine itself is extremely simple and can easily be simulated on a TFT display by an Arduino.

[video shows the final 3 steps of the busiest of all 4-state, 2-symbol beavers]

The beaver’s workflow is: read cell color -> lookup instructions for current [cell color/system state] combination  -> adjust state -> adjust cell color -> move head -> start again, unless state equals X.

In general, the components of a Turing machine, as described by Alan Turing in 1936, are:

  • an infinite tape, divided into an infinite number of cells, each of which can contain a symbol from a finite set of characters, including a ‘blank’ (the alphabet)
  • a head that can read and optionally change the content of a cell, as well as optionally move to the next or previous cell on tape
  • a finite set of n possible states that the system can be in, plus at least one additional state that marks the end of the process
  • a finite set of instructions that, based on the current state and the currently read symbol, decides: (1) whether the symbol will be erased, overwritten or left as is; (2) whether the head will move one cell (left or right) or keep its position over the tape; (3) what the next system state will become.

The alphabet in my sketch consists of 0 (‘blank’, white cell) and 1 (blue cell). The four system states are represented by A, B, C and D. Note that this 4-state machine has a 5th state X, the exit state. A beaver must have exactly one (1) exit state. Also, all of its instructions must include movement of the head.

The algorithm could (in theory) simulate any 4-state, 2-symbol Turing machine on a 480×320 TFT display, but my sketch is initialized for simulating the ‘busiest’ of all possible 25,600,000,000** 4-state Busy Beavers: the one that stops with the maximum number of ‘1‘s on tape (13, after 107 steps). Luckily, it will not run out of Arduino’s limited virtual tape during the process (‘real’ Turing machines have an infinite tape; that’s why they can only exist as mathematical models).

It would have been impossible to simulate the busiest 5-state, 2-symbol beaver, since several supercomputers and my Arduino 😉 are still searching for it. So far, ‘we’ have found one that has 4098 ‘1‘s on tape when it stops.

* Turing proved that no single algorithm can exist that’s capable of predicting the outcome of all possible beavers. Apart from the trivial ones, you’ll simply have to run them…

**In a Busy Beaver Turing machine, the head will always be moved. That reduces the number of possible [color, state, move] instruction triplets for a 4-state, 2-color beaver to 2×5×2 = 20 (remember the 5th X state). We have to provide instruction triplets for 4×2 = 8 state/color combinations (state X doesn’t need one), so there are 20⁸ different 4-state, 2-color Busy Beavers.



Buddhabrot – the last fractal?

Writing sketches for posts in the Fractals category of this blog made me realize that for many fractals (e.g. Mandelbrot, Julia, Newton etc.), visual appeal depends to a large extent on color mapping. In other words: mathematics may have blessed these fractals with a potentially beautiful complexity, but it still takes a programmer to visualize it as such.

Another thought that crossed my mind was whether the (perceived) infinite complexity of fractals couldn’t be just revealing the incompleteness of the mathematical system that produces and observes them (in which case we’d better name them Gödel fractals).

The above observations became manifest again when I was looking for a way to color the inside of the Mandelbrot set, the part that’s usually left uncolored. After a couple of very unappealing attempts, I discovered someone’s method for producing the so-called Buddhabrot fractal.


However, as you can see, the result on my small 480×320 pixel display doesn’t even come close to some of the beautiful images that pop up in Google. Actually, it looks more like one of Marie Curie’s early X-ray images. On the other hand, I suspect that many beautiful realizations of the Buddhabrot fractal have been enhanced with the same post-processing techniques that are used for beautifying pictures of galaxies and nebulae. That feels a bit like cheating to me, because I could probably make my Buddhabrot look like Donald Trump that way… (would that still be called beautifying?)


Now that this somewhat demystifying idea of ‘artificial beauty’ has tempered my fascination for fractals a bit, it’s time to return to the challenge of creating fast and memory-friendly algorithms for IFS-generated shapes, like space-filling curves.

Newton fractal

Here’s another famous fractal that I wanted to try on Arduino: the Newton fractal. It’s named after Isaac Newton’s iterative method for finding roots of functions. The classic example applies the method to the function f(z) = z³ − 1, where z is a complex number. This function has three roots in the complex plane.

classic newton fractal

The sketch that produced this picture loops over the pixels of a 480×320 display, mapping each of them to a complex number z₀ that serves as the initial value for Newton’s iteration process:

zₙ₊₁ = zₙ − f(zₙ) / f′(zₙ)

The basic color of a pixel (red, green or blue) depends on to which of the three roots the process converges. Its color intensity depends on the number of iterations that were needed to reach that root within a pre-defined precision. It’s just that simple!

Apart from playing with different color mappings (always essential for producing visually appealing fractals), I wanted to use modified versions of Newton’s method, as well as apply them to different functions. The Arduino core has no support for complex arithmetic, and a library that I found didn’t even compile. So I wrote a couple of basic complex functions and put them in a functions.h file. There must be better ways, but it works for me.

Once you have a basic sketch, the canvas of complexity is all yours!



f(z) = z⁴ − 1


This is my functions.h file. It must be in the same folder as the fractal sketch.


And here’s the Newton fractal sketch. Note the #include “functions.h” on the first line.



Snowflakes & Clouds

I haven’t been working on larger projects for a while, so here’s a couple of recent chips from the workbench.

In my ongoing quest for a faster HX8357D display driver, I was excited to read that the fast TFT_eSPI library now claims to support this beautiful but rather slow display. Unfortunately, the results were disappointing (unstable, readPixel freezes the screen). However, a nice bycatch of my experiments was the discovery that display updates become noticeably faster when the esp8266 runs at 160MHz. Very useful for my fractal sketches, and for faster aircraft position updates on the Flight Radar‘s map.

Then I wondered how fast an esp8266 could actually drive my ili9341-based 320×240 display. So I used my fast ‘offset-modulo’ Koch Snowflake algorithm for a snowflake animation speed test.

The result: an esp8266 at 160MHz can draw more than 100 (iteration depth 3) snowflakes per second on an ili9341 display, using the TFT_eSPI library. That’s about 5x faster than Adafruit’s HX8357 library can achieve on my 480×320 display. Hopefully, there will be a fast and reliable library for that display some day. Update: the TFT_eSPI library now supports driving the HX8357 in parallel mode and the results are impressive.


My renewed attention to the Koch Snowflake led to the discovery of a fractal that was new to me: the Plasma fractal. A commonly used algorithm to produce it is the Diamond-square algorithm. It’s apparently being used to generate nature-like backgrounds for video games.

I wrote a small demo sketch that produces random clouds (well, sort of…), but it can also be used to produce (non-musical) rock formations or landscapes.

6 samples of randomly generated clouds

(picture quality suffers from camera pixel interference with the TFT display)


The Diamond-square algorithm assigns initial values to the four corners of a (2ⁿ+1)×(2ⁿ+1) grid. Then it starts a process of calculating the remaining elements of the grid, based on neighbour values and a random noise of decreasing magnitude. The visual result depends on the four initial corner values, the random noise and the chosen color mapping.

The name Plasma fractal becomes clear when we show consecutive results in a loop. While still using random noise, the trick is to initialize the random generator with the same seed value for every new loop cycle. By slightly changing the initial corner values for each new cycle, the results will also be slightly different from the previous one, which is exactly what we want for a smooth animation.

The general formula that I used for initializing corner k with value h[k] is:

h[k] = d[k] + a[k]*sin( t*f[k] )   where t is a loop counter.

So the value of corner k will periodically swing around value d[k] with amplitude a[k]. The period of the swing is determined by f[k].

Now we can play with the d, a and f values for each corner, as well as with the color mapping, in order to find visually appealing animations. Here’s a first impression (esp8266 & ili9341 320×240 display). The sketch uses a 65×65 grid, with every point filling a corresponding 3×4 rectangle on the display with its color value. Since grid values are stored in 4-byte float variables, a 129×129 grid would become too large for the esp8266.


Used values:


Color mapping: