Where ISS…?


Tracking the International Space Station (ISS)

Borrowing most of its code from my What’s Up? air traffic monitor, this small project uses publicly available live data to show the current position, altitude and velocity of the International Space Station on a small TFT display. The original version draws the ISS and its trail over an ‘equirectangular’ map of the Earth, also showing the actual daylight curve and the current positions of the Sun and the Moon.

The video below shows the ESP32 variant, but with a couple of small adaptations, the sketch will also run on an ESP8266. As usual, my camera does little justice to the TFT display…

ISS tracker on the 2.2″ display of a TTGO T4 (ESP32)

Position and data are refreshed every 5 seconds, during which time the ISS has travelled almost 40 km! The background image also needs to be updated every 270 seconds – the time in which the daylight curve will have moved one pixel to the left over a 320-pixel-wide equirectangular Earth map. I used the station’s previous and current position to calculate the icon’s rotation angle. This merely indicates the station’s heading, and doesn’t refer to its actual orientation in space, of course.
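The rotation angle boils down to an atan2() on the two positions' pixel deltas. A minimal sketch of the idea (the helper name is mine, not from the original code):

```cpp
#include <cmath>

// Icon rotation angle from the previous and current map position, in degrees,
// with 0° = north (up) and angles increasing clockwise.
// Coordinates are pixels on the equirectangular map, where y grows downwards.
float headingDeg(float prevX, float prevY, float curX, float curY) {
  float dx = curX - prevX;   // eastward movement
  float dy = prevY - curY;   // northward movement (map y axis is flipped)
  return atan2f(dx, dy) * 180.0f / 3.14159265f;
}
```

A station moving due east returns 90°, due north returns 0°, which maps directly onto the icon's rotation.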

The newer version below takes a different approach by keeping the ISS in the center of a rotating globe. In reality, the display looks much better than in this video.


I also made a third version that keeps the ISS icon in the center of a moving Google map with adjustable zoom level, but that requires a Google Maps API key.

This project seems very suitable for educational purposes. With just an ESP board and a TFT display, students can quickly become familiar with C++ coding and some essential maker techniques like Internet communication, JSON parsing and image processing. As a side effect, they will also learn something about Earth movement, cartography, stereometry, Newton’s laws and space research.

Virtual Radar Server

“What’s Up?” revisited

The discovery that you can run Virtual Radar Server (VRS) on a Raspberry Pi triggered me to revise my What’s Up flight radar for ESP32/8266 once again. Mainly interested in aircraft within the range of my ADS-B receivers, I already preferred querying those receivers over using the adsbexchange API, especially after they removed most static metadata from their JSON response. This had even forced me to set up and maintain my own database for looking up fields like aircraft model, type, operator, country etc.

However, querying Virtual Radar Server on one of my PiAware receivers lets VRS do these lookups from its own up-to-date web databases! Its enriched JSON response, based on my own receivers’ ADS-B and MLAT reception, contains more details than the current adsbexchange API. Besides, unlike adsbexchange, a local VRS doesn’t mind being queried once every second! Using VRS as the primary source, I can always choose to find possibly filtered-out ‘sensitive’ aircraft by querying adsbexchange as well.

A quick prototype on a D32 Pro and a 2.4″ TFT display finally shows flicker-free icon movement after applying some display techniques from earlier projects.

My smartphone doesn’t like filming TFT displays…

While simultaneously running the old adsbexchange version and this new one, I noticed that the new version showed more aircraft, especially nearby aircraft at low flight levels. This is surprising, since adsbexchange.com claims to provide unfiltered data and they should be aware of those missing aircraft because I feed them!

Anticipating a future project (“Automatic Plane Spotter“), the new version also keeps track of the nearest aircraft’s position, expressed in azimuth and elevation. This will be used for driving the steppers of a pan-tilt camera mount.


Stereoscopy on ESP32?

double trouble…?

I knew in advance that trying to make an ESP32 show moving stereoscopic images on a TFT display could be unsuccessful. But sometimes you just want to know how close you can get. Besides, it fits my lifelong interest in stereoscopy*.

I found suitable test material in the form of this animated gif (author unknown). Now I ‘only’ had to find a technique for hiding the left frames for the right eye, and vice versa.


The first technique that I tried used two of these Adafruit “LCD Light Valves“:

Source: Adafruit website

Together, they formed active shutter glasses for looking at a single TFT display. Looping over the original frames, the display alternately showed the left and right half of a frame, always closing the opposite eye’s valve. The speed of these shutters surprised me, but their accuracy proved insufficient. In order to make the left frame completely invisible to the right eye and vice versa, I had to build in short delays that led to flickering. Without those delays, there was too much ‘leakage’ from the opposite side, a recipe for headaches.


The only solution to the above problem was a physical separation of the left and right eye’s range (i.e. the classical View·Master© approach). Luckily, I had this 3D Virtual Reality Viewer for smartphones lying around. Instead of a smartphone, it can easily hold an ESP32, a battery and two TFT displays. The technical challenge of this method was to keep both displays in sync.

It’s easy for a single ESP32 to drive two identical TFT displays over SPI. The state of a display’s CS line decides whether it responds to the master or not. With CS for left pulled LOW and CS for right pulled HIGH, only the left display will respond, and vice versa. And when data needs to be sent to both displays, you simply pull both CS lines LOW.
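In code the CS trick looks roughly like this. A sketch under assumptions: the pin numbers are placeholders, and the TFT_eSPI setup is assumed to leave the CS pin to the sketch (TFT_CS set to -1) so both lines can be toggled manually:

```cpp
#include <TFT_eSPI.h>

// Hypothetical CS pin choices; both displays share MOSI, SCLK, DC and RST.
#define CS_LEFT  5
#define CS_RIGHT 26

TFT_eSPI tft = TFT_eSPI();

// LOW selects a display, HIGH deselects it
void selectDisplays(bool left, bool right) {
  digitalWrite(CS_LEFT,  left  ? LOW : HIGH);
  digitalWrite(CS_RIGHT, right ? LOW : HIGH);
}

void setup() {
  pinMode(CS_LEFT, OUTPUT);
  pinMode(CS_RIGHT, OUTPUT);
  selectDisplays(true, true);    // both CS lines LOW: init both displays at once
  tft.init();
  tft.fillScreen(TFT_BLACK);
}

void loop() {
  selectDisplays(true, false);   // left display only
  tft.fillScreen(TFT_RED);       // placeholder for the left frame
  selectDisplays(false, true);   // right display only
  tft.fillScreen(TFT_BLUE);      // placeholder for the right frame
}
```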

The results of this approach were disappointing. Unfortunately, the time the SPI bus needs for switching between clients proved to be too long, resulting in even more flickering than with the shutter glasses technique.


As for now, I can see only one option left: using two ESP32 boards, each driving their own display. This will likely solve the flickering (this post shows that the ESP32 is fast enough), but now I’ll have to keep both ESP32 boards in sync.

Both boards are in the same housing and share a power supply, so the simplest, fastest and most accurate way to sync them seemed to be over two cross-wired IO pins. Unfortunately, and for reasons still unknown to me, when I run the sketch below on both boards, they keep waiting for each other forever because they will always read 1 on their inPin, even when the other board writes 0 to its outPin.
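For reference, the handshake I had in mind looks roughly like this (pin numbers are placeholders; each board's outPin is cross-wired to the other board's inPin):

```cpp
// Two-phase rendezvous over two cross-wired GPIOs.
// Identical sketch runs on both boards.
const int outPin = 32;   // wired to the other board's inPin
const int inPin  = 33;   // wired to the other board's outPin

void setup() {
  pinMode(outPin, OUTPUT);
  pinMode(inPin, INPUT);
  digitalWrite(outPin, HIGH);             // "not ready yet"
}

void waitForPartner() {
  digitalWrite(outPin, LOW);              // phase 1: signal "I'm ready"
  while (digitalRead(inPin) == HIGH) {}   // wait until partner is ready too
  digitalWrite(outPin, HIGH);             // phase 2: release the line
  while (digitalRead(inPin) == LOW) {}    // wait until partner releases as well
}

void loop() {
  // ...decode and buffer the next frame here...
  waitForPartner();   // both boards leave this call (nearly) simultaneously
  // ...push the buffered frame to this board's display...
}
```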


Now that I’ve come closer to a 3D movie viewer than expected, it would be a shame if this apparently simple synchronization problem were to stop me.

To be continued (tips are welcome)



*The first 3D image that I saw as a kid was a “color anaglyph” map of a Dutch coal mine. It came with cardboard glasses holding a red and a cyan colored lens. The map looked something like this:


There were also these picture pairs, forcing your eyes to combine short-distance focus with long-distance alignment. It gave me some headaches until I invented a simple trick: swap left and right and watch them cross-eyed…


My aunt once gave me an antique wooden predecessor of the viewmaster, complete with dozens of black & white images of European capitals.

A newer ‘antique’ that I like for its ingenuity is Nintendo’s Virtual Boy with its rotating mirror technique. Instant headache, but so cool!


Somewhat eccentric newcomers were the “stereograms”. Fascinating how fast our brain manages to make 3D spaghetti out of their messy images, even if they move!




Web images on TFT

“Bodmer vs Bodmer…”


Looking for a fast way to have an ESP32 download and show web images on a TFT display, I came across the very fast TJpg_Decoder library* on GitHub. It’s by Bodmer (the guy deserves a statue) and integrates nicely with his TFT_eSPI library.

One of the library’s examples shows how to download a jpg image to a file in SPIFFS, and then decode and render it to a 320×240 TFT display. Rendering time is impressive, but I managed to make it 10% faster by rendering from RAM instead of SPIFFS.

The sketch at the end of this post shows the essential steps. It pushes a 320×240 web image to the display twice as fast as my older sketches, which use the JPEGDecoder library (also by Bodmer!). Another difference from my earlier sketches is the use of the HTTPClient library. It takes away the hassle of dealing with HTTP headers and chunked-encoded responses.

By the way: after rendering the jpg file to the display, you can still copy it from the array to a file in SPIFFS or on SD card for later use. This only takes a single line of extra code.

There is something that I noticed while measuring this library’s speed on ESP32 boards with PSRAM. Sketches that were compiled with PSRAM enabled rendered almost 15% slower than the same sketch with PSRAM disabled, even if PSRAM wasn’t used at all. This is not necessarily related to the library; it could also be a general PSRAM issue. Luckily, since jpg images for small TFT displays will likely fit inside RAM (even on many ESP8266 boards), there’s no need to enable PSRAM, unless your program needs it for other purposes. Make sure that BUFFSIZE fits in RAM and is large enough to hold your jpg files.

As for grabbing other image formats (gif, bmp, png…) and resizing: most hobbyists will have access to a (local or hosted) web server with PHP. Thanks to very powerful PHP graphics libraries, only a few lines of PHP code can take care of converting any common image format to jpg and sending it (resized, rotated, cropped, sharpened, blurred or whatever) to your ESP for rendering. If that’s not an option, then this new library can still reduce the size of an image by a factor of 2, 4 or 8 by calling the function TJpgDec.setJpgScale() with the desired factor as a parameter.

Below is an adapted and stripped version of Bodmer’s example, using an array instead of SPIFFS. Make sure to select the right setup for the TFT_eSPI library before compiling. Some boards also require you to explicitly switch on the backlight in your sketch.
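Stripped to its essence, the flow looks like this. The network credentials, URL and BUFFSIZE are placeholders; the TJpg_Decoder calls follow Bodmer's examples, but treat the download loop as a sketch rather than production code:

```cpp
#include <WiFi.h>
#include <HTTPClient.h>
#include <TFT_eSPI.h>
#include <TJpg_Decoder.h>

const char* ssid     = "yourSSID";       // placeholders
const char* password = "yourPassword";
const char* url      = "http://example.com/image.jpg";   // hypothetical 320x240 jpg

#define BUFFSIZE 40000                   // must fit in RAM and hold the jpg
uint8_t jpgBuffer[BUFFSIZE];

TFT_eSPI tft = TFT_eSPI();

// TJpg_Decoder calls this for every decoded tile; push the tile to the display
bool tft_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t* bitmap) {
  if (y >= tft.height()) return false;
  tft.pushImage(x, y, w, h, bitmap);
  return true;
}

void setup() {
  Serial.begin(115200);
  tft.begin();
  tft.fillScreen(TFT_BLACK);

  TJpgDec.setJpgScale(1);
  TJpgDec.setSwapBytes(true);
  TJpgDec.setCallback(tft_output);

  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) delay(100);

  HTTPClient http;
  http.begin(url);
  if (http.GET() == HTTP_CODE_OK) {
    // download the complete jpg into RAM (instead of SPIFFS)
    WiFiClient* stream = http.getStreamPtr();
    uint32_t bytesRead = 0;
    uint32_t t = millis();
    while (http.connected() && bytesRead < BUFFSIZE) {
      size_t avail = stream->available();
      if (!avail) { delay(1); continue; }
      if (avail > BUFFSIZE - bytesRead) avail = BUFFSIZE - bytesRead;
      bytesRead += stream->readBytes(jpgBuffer + bytesRead, avail);
      if ((int)bytesRead >= http.getSize()) break;       // got the whole file
    }
    Serial.printf("Downloaded in %u ms.\n", millis() - t);
    t = millis();
    TJpgDec.drawJpg(0, 0, jpgBuffer, bytesRead);         // decode & render from RAM
    Serial.printf("Decoding and rendering: %u ms.\n", millis() - t);
  }
  http.end();
}

void loop() {}
```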


Output on the Serial monitor will be something like:

[HTTP] connection closed or file end.
Downloaded in 188 ms.
Decoding and rendering: 125 ms.

Too slow for your eyes? By storing decoded tiles in a display buffer in (PS)RAM instead of pushing them to the display one by one, the image can be pushed to the display in 44 ms.

On an ESP32, you can speed up the entire process by using both cores. The library has an example that lets core 0 do the decoding. Whenever it finishes a tile (usually 16×16 pixels), core 1 will push it to the display in parallel. However, this will not beat the above display buffer method with regard to the visual part of the process.

*TJpg_Decoder uses the Tiny JPEG Decompressor engine by ChaN



3 * (esp32 + tft) = ?


ESP32 boards with a mounted TFT display can be so hard to resist that, in a moment of shameless weakness, I purchased three of them in a single order, without consulting reviews…


So here’s a report of my first acquaintance with these new Chinese kids on my block.


  LilyGO TTGO T4 v1.3

This board with 8MB PSRAM, an SD card slot and a 2.2″ TFT display makes development of graphics oriented sketches very convenient.

I2C pins are available via a connector (cable included), but unfortunately they didn’t break out the standard SPI pins. On my board, contrary to what some internet pictures suggest, it’s GPIO26 rather than GPIO25 that is broken out (the labeling on the module is correct).

Compared with the LOLIN 2.4″ TFT shield, this display is a bit smaller (with the same resolution) and has no touch screen. Its 3 buttons allow for some basic feedback to the application. Build quality is excellent.

Driving the display was easy, but it took me some time to make it show images from an SD card. That’s because the board’s SD slot requires its own SPI bus on pins 2, 13, 14 & 15. The following code snippets show how I got it working.

In the main section:


In setup():
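Reconstructed, the two snippets boil down to the sketch below. The exact pin roles and the backlight pin are assumptions based on my board revision, and the sigmaDelta calls use the pre-3.0 ESP32 Arduino core API:

```cpp
#include <SPI.h>
#include <SD.h>

// Main section: the SD slot gets its own SPI bus on pins 2, 13, 14 & 15.
// The role of each pin is how my board appears to be wired; double-check yours.
#define SD_MISO  2
#define SD_CS   13
#define SD_SCLK 14
#define SD_MOSI 15
#define TFT_BL   4     // backlight pin from my TFT_eSPI setup

SPIClass sdSPI(HSPI);  // second hardware SPI bus, separate from the display's

// In setup():
void setup() {
  Serial.begin(115200);

  sdSPI.begin(SD_SCLK, SD_MISO, SD_MOSI, SD_CS);
  if (!SD.begin(SD_CS, sdSPI)) Serial.println("SD card init failed");

  // switch the backlight on; sigmaDelta allows brightness control
  sigmaDeltaSetup(0, 312500);       // channel 0
  sigmaDeltaAttachPin(TFT_BL, 0);
  sigmaDeltaWrite(0, 160);          // 0 = off … 255 = full brightness
}

void loop() {}
```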

The above code also shows that this board requires you to explicitly switch the backlight on. This can be done by writing HIGH to TFT_BL, but I prefer the sigmaDelta method as shown, because it lets you control the display’s brightness.

So far, my overall impression of this board and its manufacturer is positive. It would have been perfect if they had broken out the SPI pins. Price: €19.

  LilyGo TTGO T-Display v1.1.

The 1.14″ display has 240×135 pixels. To be honest, font size 1 starts to become a bit of a visual challenge here, but I don’t intend to use this board for text anyway. Actually, I currently have no specific application in mind for it at all, but I’m sure I’ll find one.

Bodmer’s indispensable TFT_eSPI library supports its ST7789 chip, so I expected no less than turbo speed, especially with ‘only’ 32400 pixels to push. Although this ESP32 WROOM module has no PSRAM and less memory than the T4, there’s still enough memory available for double buffering the display.

Information on GitHub is to the point and up to date, so in no time I ported my most recent Webcam·View·Master sketch to this mini display for testing the board.

Note that the real size of the display is only 2.5 x 1.4 cm!

As with the T4 board, TFT_BL has to be set HIGH in order to switch the backlight on.

Price: €9. Definitely a bargain for an ESP32 with such a cute display. And it even has a LiPo charging circuit and comes with header pins and a battery cable. Make sure you have a USB-C to USB-A cable, because one is not included.

M5Stack Core Gray

After my rather negative review of the M5Stack Fire, this purchase may be a bit surprising. Perhaps I secretly hoped that this Gray model wouldn’t have the same issues and annoyances, but that soon turned out to be false hope. Actually, it’s even more noisy than the Fire, and my dacWrite(25, 0) trick doesn’t change that. It is, however, less sensitive to USB power issues during sketch uploads. No ‘brown out detections’ so far.

But then, ESP32 development boards don’t come any better looking than the M5Stack family. Once you can work around its electronic design flaws, the Gray makes a perfect housing for ESP32 gadgets, especially when combined with a matching standing base power supply, equipped with a DHT12 temperature and humidity sensor.

I found a DHT12 library and an example sketch for reading the DHT12 sensor here, but the example didn’t compile. Here’s a corrected version that displays temperature and humidity when an M5Stack Core model is put on the standing base.
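Reading the sensor itself comes down to fetching five bytes over I2C and verifying a checksum. A minimal sketch along those lines (the 0x5C address and byte layout follow the DHT12 datasheet; the Core's I2C pins 21/22 are assumptions):

```cpp
#include <Wire.h>

#define DHT12_ADDR 0x5C   // DHT12's fixed I2C address

// Read temperature (°C) and relative humidity (%) from the DHT12.
// Returns false on a bus error or checksum mismatch.
bool readDHT12(float &temperature, float &humidity) {
  Wire.beginTransmission(DHT12_ADDR);
  Wire.write(0);                              // start reading at register 0
  if (Wire.endTransmission() != 0) return false;
  Wire.requestFrom(DHT12_ADDR, 5);
  if (Wire.available() < 5) return false;
  uint8_t b[5];
  for (int i = 0; i < 5; i++) b[i] = Wire.read();
  if (((b[0] + b[1] + b[2] + b[3]) & 0xFF) != b[4]) return false;  // checksum
  humidity    = b[0] + b[1] * 0.1f;           // integer + decimal part
  temperature = b[2] + (b[3] & 0x7F) * 0.1f;
  if (b[3] & 0x80) temperature = -temperature;   // bit 7 is the sign bit
  return true;
}

void setup() {
  Serial.begin(115200);
  Wire.begin(21, 22);   // SDA, SCL on the M5Stack Core
}

void loop() {
  float t, h;
  if (readDHT12(t, h)) Serial.printf("%.1f C  %.1f %%RH\n", t, h);
  delay(2000);
}
```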


Compared with the Fire, the Gray has no Neopixel led bars and no PSRAM, making three extra GPIO pins available for other purposes, including GPIO16 and GPIO17 for UART communication. Its battery is only 150 mAh, but I don’t mind, because I couldn’t resist that standing base either. Prices: €30 for the Gray and €7.50 for the standing base.


The two LilyGo boards were my first from this manufacturer (is Lily Zhong the Chinese LadyAda?). I like both displays, so they will not be the last TTGOs.

As for the M5Stack ecosystem: now that I have everything I need for making great looking gadgets, I don’t think I’ll invest in it any further. Maybe they should ask some seasoned analog nerd to improve their electronic design with a couple of dirt cheap components.




Currently ‘lockdown-ed’ at sea level, my thoughts often wander to the majestic heights of my beloved Alps. After visiting one of those Alpine webcam sites, I wondered if I could turn my M5Stack Fire into a handheld Alpenpanorama. Or, to reverse Francis Bacon’s words: if you can’t go to the mountain, then the mountain must come to you…

This project was a bit more challenging than expected, as showing JPEG-encoded web images on a 320×240 TFT display requires extracting, decoding and resizing binary data from a chunked-encoded http response. After two days of bit-wrestling, here’s the result:

The time interval between images was reduced for this video. Music: “Schwärzen-Polka”

At startup, the program reads a list of 60+ webcam links from my web server. This enables me to change the displayed webcams without updating the software. Once running, the middle button pauses/resumes the loop, the left button removes the currently displayed webcam from the loop and the right button restores the original webcam list.

Because these live photo webcams refresh their pictures every 10 minutes, this gadget continuously gives me an up-to-date view of places where I would so much rather be….

P.S. I recommend not using the official M5Stack library. In this project, decoding jpeg images with Bodmer’s fast JPEGDecoder library became more than 3 times faster after I replaced M5Stack (which has TFT_eSPI built in) with native TFT_eSPI. This could very well be caused by their library’s interrupt-based button functions, which are also notorious for causing WiFi crashes.



Inspiring Chaos

Wonder why I didn’t discover Jason Rampe’s Softology blog much earlier, as it deals with most of the math-related topics of my own blog (and many more). His expertise and visualizations, as well as his software application Visions of Chaos, live at the opposite end of the spectrum when compared to my modest tinkering. As a result, most of his visualizations are far beyond the specs of a microcontroller and a small display, but I discovered a few mathematical models that I simply had to try right away:


  • Diffusion-limited Aggregation

“Diffusion-limited aggregation creates branched and coral like structures by the process of randomly moving particles that touch and stick to existing stationary particles.” – [Softology blog].

After coding a 2D version for my M5Stack Fire (ESP32), I was surprised to see how this simple principle generates these nice structures. The process starts with a single stationary (green) particle in the center. Then, one by one starting from a randomly chosen point at the border, every next particle takes a random walk inside the circle until it hits a stationary particle to which it sticks, becoming a stationary (green) particle itself.
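Stripped of the display code, the core of the algorithm looks roughly like this (a plain C++ sketch, not my exact code; the circle is simplified to a clamped square grid here):

```cpp
#include <cstdlib>
#include <cmath>

const int W = 200, H = 200;
bool stuck[W][H];        // stationary (green) particles

// seed the process with one stationary particle in the center
void seed() { stuck[W / 2][H / 2] = true; }

// true if (x,y) is a stationary particle or touches one (8 neighbours)
bool touchesCluster(int x, int y) {
  for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++) {
      int nx = x + dx, ny = y + dy;
      if (nx >= 0 && nx < W && ny >= 0 && ny < H && stuck[nx][ny]) return true;
    }
  return false;
}

// release one particle at the border; random-walk it until it sticks
void addParticle() {
  float a = (rand() % 360) * 0.01745329f;           // random border angle
  int x = W / 2 + (int)(cosf(a) * (W / 2 - 1));
  int y = H / 2 + (int)(sinf(a) * (H / 2 - 1));
  while (!touchesCluster(x, y)) {
    x += rand() % 3 - 1;                            // random step: -1, 0 or +1
    y += rand() % 3 - 1;
    if (x < 0) x = 0; if (x >= W) x = W - 1;        // stay inside the grid
    if (y < 0) y = 0; if (y >= H) y = H - 1;
  }
  stuck[x][y] = true;                               // it becomes stationary
}
```

Each call to addParticle() grows the cluster by exactly one cell, which is what slowly builds the coral-like branches.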


  • Reaction-Diffusion model

It was Alan Turing who came up with a model to explain pattern formation in the skin of animals like zebras and tigers (morphogenesis). It’s based on the reaction of two chemical substances within cells and the diffusion of those chemicals across neighbour cells. Certain diffusion rates and combinations of other parameters can produce stable Turing Patterns.

The above (Wikipedia) picture shows the skin of a Giant pufferfish. The pictures below show some first results of my attempts with different settings on a 200×200 pixel grid.

From left to right: ‘curved stripes’ settings (C), ‘dots’ settings (D), 50% C + 50% D, 67% C + 33% D

There’s still plenty for me to experiment with here. The algorithm is strikingly simple and fully parameterized, but also quite heavy on the ESP32….
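Reaction-diffusion is commonly implemented as the Gray-Scott system; one update step of my take on it, on the same 200×200 grid, could look like this (the parameter values are illustrative, not my exact 'stripes' or 'dots' settings):

```cpp
#include <algorithm>

const int N = 200;            // 200×200 grid, as in the experiments above
float A[N][N], B[N][N];       // concentrations of the two chemicals

// Gray-Scott parameters (example values near the classic 'dots' regime)
float dA = 1.0f, dB = 0.5f, feed = 0.055f, kill = 0.062f;

// 3×3 Laplacian with wrap-around edges: diffusion across neighbour cells
float lap(float g[N][N], int x, int y) {
  int xm = (x + N - 1) % N, xp = (x + 1) % N;
  int ym = (y + N - 1) % N, yp = (y + 1) % N;
  return 0.05f * (g[xm][ym] + g[xp][ym] + g[xm][yp] + g[xp][yp])
       + 0.2f  * (g[xm][y]  + g[xp][y]  + g[x][ym]  + g[x][yp])
       - g[x][y];
}

void updateStep() {
  static float A2[N][N], B2[N][N];
  for (int x = 0; x < N; x++)
    for (int y = 0; y < N; y++) {
      float a = A[x][y], b = B[x][y];
      float abb = a * b * b;                     // the reaction term
      A2[x][y] = a + dA * lap(A, x, y) - abb + feed * (1.0f - a);
      B2[x][y] = b + dB * lap(B, x, y) + abb - (kill + feed) * b;
    }
  std::copy(&A2[0][0], &A2[0][0] + N * N, &A[0][0]);
  std::copy(&B2[0][0], &B2[0][0] + N * N, &B[0][0]);
}
```

Four float grids of this size are a tight fit in ESP32 RAM, which matches the 'quite heavy' remark; smaller grids or fixed-point storage would relieve that.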


  • Mitchell Gravity Set Fractals

A simulation of particles, following Newton’s law of gravity. This simple example has six equal masses placed at the corners of a regular hexagon. Particles (pixels) are colored according to the time they need to cross an escape range and disappear in ‘space’.
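A sketch of the per-pixel simulation (the constants are illustrative, not the exact values I used):

```cpp
#include <cmath>

// Six equal masses on the corners of a regular hexagon; one test particle
// per pixel. The colour index is the number of steps needed to escape.
const int   MASSES    = 6;
const float RADIUS    = 1.0f;      // hexagon radius
const float G         = 0.02f;     // gravitational constant (scaled)
const float ESCAPE    = 4.0f;      // escape range
const int   MAX_STEPS = 256;

int escapeTime(float x, float y) {
  float vx = 0, vy = 0;
  for (int step = 0; step < MAX_STEPS; step++) {
    for (int m = 0; m < MASSES; m++) {
      float mx = RADIUS * cosf(m * 1.0471976f);   // masses 60° apart
      float my = RADIUS * sinf(m * 1.0471976f);
      float dx = mx - x, dy = my - y;
      float d2 = dx * dx + dy * dy + 0.001f;      // softening term
      float f  = G / (d2 * sqrtf(d2));            // inverse-square attraction
      vx += f * dx;                               // accelerate towards the mass
      vy += f * dy;
    }
    x += vx; y += vy;
    if (x * x + y * y > ESCAPE * ESCAPE) return step;  // disappeared in 'space'
  }
  return MAX_STEPS;   // never escaped within the step budget
}
```

Mapping each pixel to a starting position and colouring by escapeTime() produces the fractal basin structure.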


  • Agent-based models

After coding my own variant of the Foraging Ant Colony example from the Softology blog, I’m currently working on a couple of obstacle escape mechanisms. Coming soon.



M5Stack Fire


It looked like the perfect hardware for a couple of gadgets that I have in mind, but my enthusiasm for the M5Stack Fire development board quickly cooled down after my first experiments.

Meanwhile, I’ve been able to tackle most software issues, but it’s the hardware flaws that worry me most. My verdict is at the bottom of this post.


M5Stack is a clever concept, built around an ESP32 core that can be expanded by stackable (magnetic) modules. Also, special purpose units can be connected via Grove connector cables. A growing number of modules and units are available and it all looks very appealing indeed. But then, this is definitely an expensive way of prototyping.

I chose the Fire variant because it houses an ESP32 WROVER module with 16 MB Flash and 4 MB PSRAM. This is what you get:

  • 320×240 ili9342 display (M5Stack library uses Bodmer’s unrivalled TFT_eSPI)
  • 3 buttons
  • microphone
  • speaker (1 Watt)
  • 10 Neopixels
  • accelerometer MPU6886 (not an MPU9250!)
  • magnetometer BMM150
  • SD card slot (max. 16 GB)
  • 600 mAh battery
  • stackable charger module
  • plastic storage box with USB-C cable, hexagonal wrench and some LEGO…

At first, it looked like I had received a defective device that did nothing more than producing a ticking noise. It turned out that this behaviour occurs when the battery is completely empty, even if the module is powered by USB. Fair enough (once you know it).

The device comes pre-loaded with UIFlow firmware, a graphical programming environment. Apart from the self-started demo, I didn’t test it. Instead, I added the board and its official library to the Arduino IDE and uploaded one of my most hardware demanding sketches (Perrier Loop). Everything worked fine, but all colors were inverted. In sketches that use the original TFT_eSPI library, adding the following lines in the setup() section of your sketch fixes the issue:
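With TFT_eSPI, the fix is likely along these lines, using the library's invertDisplay() call:

```cpp
#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();

void setup() {
  tft.init();
  tft.invertDisplay(1);   // the Fire's ILI9342 wants inverted colours
}

void loop() {}
```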


The module’s speaker is a bit noisy: it cracks on ESP32 resets and hisses while the power/reset button is being pressed. For its price, I would have expected a better hardware design. Besides, I personally would have liked a more ergonomic power button and a more consistent on/off/reset behaviour.

When I tried the official library’s example sketch for reading the IMU sensors, the ESP32 crashed. It turned out that my Fire did not have the MPU9250 sensor that the example had been written for, but an MPU6886 and a BMM150 instead. Wonder why they never updated their 2.5-year-old example.

After this crash, I was no longer able to upload new sketches to the Fire (A fatal error occurred: Timed out waiting for packet content…). I managed to fix it, but unfortunately this behaviour keeps returning. And there’s more strange behaviour, like on-board neopixels being lit while the running sketch doesn’t even use them, as well as spontaneous beeps.

Things got worse: the power/reset switch stopped working after I had uploaded the library example WiFiSetting.ino sketch (which worked OK, by the way). The module would no longer switch off, despite many double clicks, executed at different intervals….

Finally, after lots of randomly clicking the button, the device appeared to be off, but could no longer be switched on…. The Serial monitor then showed that the ESP32 was actually still running, but also continuously crashing. This also blocked uploading new sketches completely 🙁

In the end, I had to remove the battery module. After reconnecting it, I managed to erase ESP32’s flash memory. Then I used a tool called M5Burner for uploading the original UIFlow firmware.

The error messages when the device is crashing in a loop seem to suggest that not all ESP32 strapping pins are properly pulled up or down during boot:

flash read err, 1000
ets_main.c 371
ets Jun 8 2016 00:22:57

My provisional verdict, based on an ongoing struggle with (my copy of) the M5Stack Fire:



Pros:

  • clever concept
  • nice look and mostly decent physical build quality
  • high-quality 2″ display, supported by Bodmer’s TFT_eSPI library


Cons:

  • seriously flawed reset/power circuit!
  • noisy speaker (poor audio isolation; use dacWrite(25, 0); if you don’t need audio)
  • SD card slot not properly aligned
  • the official library’s interrupt-based button functions can crash your WiFi (and worse)
  • Grove port C appears to be useless by design (hard-wired to PSRAM pins 16 & 17)
  • 5V tolerance of Grove ports A and B (pins 21, 22, 26 & 36) remains unclear

This nice looking ESP32 toy may need a few electronic design changes to enhance its usefulness and stability. Until then, I’ll keep my distance from the Fire….



Multi-Fractal Explorer

One program to rule them all…

The obvious code similarity between most of my fractal sketches prompted me to write a generic fractal explorer, using a simplified version of my recently finished ESP32 framework for self-replicating cellular automata.

Unlike the more complicated functions for self-replicating automata, most fractal-specific functions only take a few lines of code, so rather than being a template for separate fractal sketches, it’s a single program that can browse and zoom in on multiple fractal types (Mandelbrot, Julia, Newton etc.) by calling their corresponding fractal functions.


The video shows the ‘renormalized’ Mandelbrot fractal on a 320×240 display, with a fixed zoom factor per click. The clicked position of the fractal stays mapped to the clicked pixel. If the accumulated zoom level exceeds a threshold value, the algorithm will switch to double precision calculations for better resolution.
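The 'renormalized' part refers to a smooth (fractional) iteration count per pixel, which removes colour banding. A minimal version of the per-pixel function, not my exact implementation:

```cpp
#include <cmath>

// Smooth ('renormalized') Mandelbrot iteration count for pixel coordinate c.
// Escaped points get a fractional count; interior points return maxIter.
float mandelbrot(double cx, double cy, int maxIter) {
  double x = 0, y = 0;
  for (int n = 0; n < maxIter; n++) {
    double x2 = x * x, y2 = y * y;
    if (x2 + y2 > 65536.0) {                  // large bailout for smoothness
      float logZn = logf((float)(x2 + y2)) / 2.0f;   // ln|z|
      return n + 1 - logf(logZn) / logf(2.0f);
    }
    y = 2 * x * y + cy;                       // z = z² + c
    x = x2 - y2 + cx;
  }
  return (float)maxIter;                      // inside the set
}
```

The coordinates are doubles here; in the sketch the same routine runs in single precision until the accumulated zoom level crosses the threshold mentioned above.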

I may add a simple web server for switching fractal type, setting zoom factor or changing color palette over WiFi, but the real fun here was to write a generic sketch that gets maximum speed from the ESP32 microcontroller by using both cores, as well as a couple of techniques and tricks developed for earlier projects.


2D arrays in PSRAM

This post shows a minimal example sketch for storing a 2D array in PSRAM memory of ESP32 WROVER modules. It’s mainly for my own reference, now that I’ve finally figured out how these notorious C++ pointers work*.

(An earlier post already showed a simple way to store 2D arrays in PSRAM, but that required some row/column bookkeeping.)

The following sketch creates a 2D array named grid for storing float values in PSRAM. Then it fills all ‘cells’ with values of the form <column>.<row>. Finally, it prints the value of grid[123][234] (which should be 123.234, of course).
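A reconstruction of that sketch (the array dimensions are chosen to match the example values; it requires a WROVER module with PSRAM enabled):

```cpp
#define COLS 320
#define ROWS 240

float **grid;   // grid[column] points to that column's array of row values

void setup() {
  Serial.begin(115200);

  // One PSRAM block: COLS column pointers, followed by the cells themselves
  size_t len = sizeof(float *) * COLS + sizeof(float) * COLS * ROWS;
  grid = (float **)ps_malloc(len);
  float *cells = (float *)(grid + COLS);
  for (int c = 0; c < COLS; c++) grid[c] = cells + c * ROWS;

  // Fill every cell with a value of the form <column>.<row>
  for (int c = 0; c < COLS; c++)
    for (int r = 0; r < ROWS; r++)
      grid[c][r] = c + r / 1000.0;

  Serial.println(grid[123][234], 3);   // prints 123.234
}

void loop() {}
```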


Storing, for example, uint16_t values instead of floats only requires replacement of all occurrences of float with uint16_t.

Note that you can store multiple 2D arrays (even of different types) this way. Just declare an additional pair of global pointers of the desired type and add a corresponding ‘create’ section for each array in setup().

Elements in the grid array from the above example can now simply be addressed like grid[x][y], just like in regular 2D arrays. This convenience comes at a price:

  • The presented method needs some extra memory for storing additional column pointers (the term sizeof(float *) * cols in the calculation formula of len).
  • My (and probably everybody’s) favorite ESP graphics library TFT_eSPI can push an entire buffer of pixel values to a TFT display (very fast!). However, that buffer needs to be referenced by a single pointer, whereas grid is declared as a double pointer. As far as I know, that means you’ll have to push the grid array column by column, where grid[i] is a pointer to the column with (zero-offset) index i.

I’m now using the described method in sketches with many single-pixel read/write operations, or with multiple arrays stored in PSRAM. For ‘speedy’ TFT projects, I’ll keep stitching my rows and columns together for storage in a single 1D PSRAM array.


* Finding this piece of information about pointers was very helpful:  “… type is required to declare a pointer. This is the type of data that will live at the memory address it points to.” Before that, I presumed that pointers were supposed to be unsigned integers, long enough to hold any address they could point to.