Self-Hosted GPS Tracker

With my T-Beam GPS Tracker operating fine during a car ride, I finally have a secure replacement for this very old GPS Tracker App on my nine-year-old HTC Desire HD*.

The TTGO T-Beam came in this box. I only had to drill a small hole for the antenna

 

The concept of the old Android app (meanwhile removed from both the App Store and github by its creator**) is very simple. It periodically sends your phone’s GPS readings as http GET parameters to a self-hosted endpoint of your choice. Since TheThingsNetwork‘s HTTP Integration works more or less the same, I could simply reuse the php backend that I once wrote for the app, now for showing my LoRaWan tracker’s location on a map.

 

Data flow is as follows:

  • Tracker periodically sends GPS readings to TheThingsNetwork. The sketch makes the interval between consecutive send commands depend on the last GPS speed reading (see the sketch after this list)
  • HTTP Integration service sends LoRaWan payload+metadata in an http POST variable to a php script (dump.php) on my server (see below)
  • dump.php appends the entire POST value to a log file (log.txt) and writes a formatted selection of relevant data to a text file (last_reading.txt, always overwritten)
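The speed-dependent interval from the first bullet can be as simple as a mapping from speed to delay; a minimal sketch of the idea (thresholds are examples, not my actual values):

// Choose the pause between consecutive send commands from the last
// GPS speed reading (km/h): the faster the tracker moves, the more
// frequent the position updates. Example thresholds only.
uint32_t txIntervalMs(float speedKmh) {
  if (speedKmh > 80.0) return  30000UL;   // highway: every 30 s
  if (speedKmh > 15.0) return  60000UL;   // city traffic: every minute
  if (speedKmh >  3.0) return 120000UL;   // walking: every 2 minutes
  return 300000UL;                        // stationary: every 5 minutes
}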

The map is an embedded OpenStreetMap ‘Leaflet’ inside an html file, with some simple javascript that periodically reads the formatted data from last_reading.txt in the background. The extracted GPS readings and timestamp are used to update the tracker’s location on the map and to refresh the textual info.

Data in the log file can, after conversion if necessary, be used for analyzing and drawing the tracker’s route. There are several standard solutions for this, even online.

 

*Running non-stop on its original battery and without any problems since March 2011. I ❤ HTC!

**Hervé Renault, who suggests a modern alternative on his personal website (in French)


My dump.php file (assumes the fields latitude, longitude, speed and sats in the TTN payload):

TTGO T-Beam

joining TheThingsNetwork

LoRaWan and TheThingsNetwork are quite popular in my country, although they may have lost some of their momentum lately*. With no particular use case in mind, I purchased a TTGO T-Beam board mainly out of curiosity, and joined TheThingsNetwork.

The board comes in a box with two antennas (LoRa and GPS) and header pins

My T-Beam (T22 V1.1) is basically an ESP32 WROVER module with an onboard SX1276 LoRa radio, a u-blox NEO-M8N GPS module, an 18650 battery holder and an AXP192 power management chip. The manufacturer’s documentation is a bit confusing, being divided over two apparently official github repositories: https://github.com/LilyGO/TTGO-T-Beam and https://github.com/Xinyuan-LilyGO/LilyGO-T-Beam. Also, board versions up to 0.7 are significantly different from later versions.

First, I successfully tested the GPS receiver of my version 1.1 board with this sketch. Its code shows the most important differences between version 1.x boards and previous versions (like 0.7). TX and RX pins for the GPS receiver are now 34 and 12 instead of 12 and 15. Furthermore, the newer boards have a power management chip (AXP192) that lets you control the power supply to individual board components. It requires an include of the axp20x library, as well as code for explicitly powering the components you use. I recommend taking a look at the examples from that library.
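For a first impression, powering the radio and GPS with the axp20x library comes down to something like this (adapted from the library’s examples; check the power channels for your board revision):

#include <Wire.h>
#include <axp20x.h>

AXP20X_Class axp;

void setup() {
  Serial.begin(115200);
  Wire.begin(21, 22);                            // the T-Beam's I2C pins
  if (axp.begin(Wire, AXP192_SLAVE_ADDRESS) != AXP_PASS) {
    Serial.println("AXP192 not found!");
  }
  axp.setPowerOutPut(AXP192_LDO2, AXP202_ON);    // LoRa radio
  axp.setPowerOutPut(AXP192_LDO3, AXP202_ON);    // GPS module
  Serial1.begin(9600, SERIAL_8N1, 34, 12);       // GPS TX -> 34, GPS RX -> 12
}

void loop() {
  while (Serial1.available()) Serial.write(Serial1.read());   // echo NMEA
}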

Testing the T-Beam’s LoRa radio either requires a second LoRa board (which I don’t have), or making ‘The Thing’ talk to TheThingsNetwork. I went for the TTN option, obviously. And with a GPS on board, a GPS tracker was a logical choice for my first LoRaWan sketch.

After creating an account on the TTN website, I had to register an ‘Application’ and a ‘Device’, as well as provide a Payload Format Decode function**. Along the way, the system generated some identification codes: Application EUI, Device EUI and Application Key, needed for the OTAA Activation Method that I selected.

Then I ran the sketch below, which I compiled from several sources. After a minute or so, the Serial monitor reported a GPS fix, continued with “EV_JOINING”, and … that was it. Apart from a faulty device or a software issue, I also had to consider the possibility that my T-Beam was not within range of a TTN gateway. Hard to debug, but I was lucky.

TheThingsNetwork Console pages show Application Key and EUIs in hex format. Clicking the <> icon in front of a value will show it in C-style format and then the icon next to it lets you toggle between msb and lsb. It turned out that my sketch expected both EUIs to be in lsb format and the Application Key in msb format. I had used msb for all three of them!
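In arduino-lmic terms (the library behind those EV_ events), the device_config.h declarations follow this pattern (zeros are placeholders for your own values):

#include <lmic.h>

// APPEUI and DEVEUI must be entered in lsb format (use the Console's
// <> icon and the msb/lsb toggle), the Application Key in msb format!
static const u1_t PROGMEM APPEUI[8]  = { 0, 0, 0, 0, 0, 0, 0, 0 };   // lsb
static const u1_t PROGMEM DEVEUI[8]  = { 0, 0, 0, 0, 0, 0, 0, 0 };   // lsb
static const u1_t PROGMEM APPKEY[16] = { 0, 0, 0, 0, 0, 0, 0, 0,
                                         0, 0, 0, 0, 0, 0, 0, 0 };   // msb

void os_getArtEui(u1_t* buf) { memcpy_P(buf, APPEUI, 8); }
void os_getDevEui(u1_t* buf) { memcpy_P(buf, DEVEUI, 8); }
void os_getDevKey(u1_t* buf) { memcpy_P(buf, APPKEY, 16); }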

After correcting the EUI values in device_config.h, the “EV_JOINING” in the Serial monitor was followed by “EV_JOINED” and “Good night…”, indicating that the device had been seen by a TTN gateway and gone into deep sleep mode. From that moment on, its payload messages, sent every two minutes (as set in my sketch), appeared at the Data tab of my application’s TTN Console. Looks like my T-Beam is a TTN node!

In order to do something useful with your uploaded data, the TTN Console offers several ‘Integrations’. For my GPS tracker I first tried TTN Mapper, making my GPS readings contribute to a coverage map of TTN gateways. It also lets you download your data in csv format. However, I haven’t seen my readings on their map so far, perhaps because my signal was picked up by a gateway with an unspecified location. So I switched to HTTP Integration in order to have all readings sent to an endpoint on my php/MySQL server.

My next steps will be testing coverage and reception of the T-Beam during a car trip, as well as trying some other integrations. Should that raise my enthusiasm for TheThingsNetwork enough, I might even consider running my own TTN gateway in order to improve LoRaWan coverage in my area.

In summary, making my Thing join the Network wasn’t just plug & play, so I hope this post and the below mixture of mainly other people’s code will be of help to TTN starters.

 

* based on (the lack of) recent activity of TTN communities in my area.


Code for a T-Beam v 1.x (with AXP192 power control chip), compiled from several sources. It sends latitude, longitude, altitude, hdop and number of satellites to TTN.

GPS-Mapper.ino

 

gps.h

 

gps.cpp

 

device_config.h (your OTAA secrets can be copied from the TTN Console; note msb/lsb!)

 

**Payload Format decoder (javascript to be entered at the Payload Formats tab of the TTN Console; reverses the encoding as performed by function buildPacket() in gps.cpp)

 

Where ISS…?

 

Tracking the International Space Station (ISS)

Borrowing most of its code from my What’s Up? air traffic monitor, this small project uses publicly available live data to show the current position, altitude and velocity of the International Space Station on a small TFT display. The original version draws the ISS and its trail over an ‘equirectangular’ map of the Earth, also showing the actual daylight curve and the current positions of the Sun and the Moon.

The video below shows the ESP32 variant, but with a couple of small adaptations, the sketch will also run on an ESP8266. As usual, my camera does little justice to the TFT display…

ISS tracker on the 2.2″ display of a TTGO T4 (ESP32)

Position and data are refreshed every 5 seconds, during which time the ISS has travelled almost 40 km! The background image also needs to be updated every 270 seconds – the time in which the daylight curve will have moved one pixel to the left over a 320-pixel wide equirectangular Earth map. I used the station’s previous and current position to calculate the icon’s rotation angle. This only indicates the station’s heading, and doesn’t refer to its actual orientation in space, of course.
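For the curious, both numbers in that paragraph translate directly into code; a simplified fragment (ignoring the wrap at the date line):

// On an equirectangular map, x is proportional to longitude and y to
// latitude, so atan2 over the position deltas gives the on-screen course.
float headingDeg(float prevLon, float prevLat, float lon, float lat) {
  return atan2(lon - prevLon, lat - prevLat) * RAD_TO_DEG;   // 0 = north/up
}

// One pixel of daylight-curve shift on a 320 px wide world map:
// 86400 s per Earth rotation / 320 px = 270 s per pixel.
const uint32_t MAP_REFRESH_MS = 270UL * 1000UL;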

The newer version below takes a different approach by keeping the ISS in the center over a rotating globe. In reality, the display looks much better than in this video.

 

I also made a third version that keeps the ISS icon in the center of a moving Google map with adjustable zoom level, but that requires a Google Maps API key.

This project seems very suitable for educational purposes. With just an ESP board and a TFT display, students can quickly become familiar with C++ coding and some essential maker techniques like Internet communication, JSON parsing and image processing. As a side effect, they will also learn something about Earth movement, cartography, stereometry, Newton’s laws and space research.


Here’s the code for the equirectangular map version:

 

Content of included sprite.h file:

 

Virtual Radar Server

“What’s Up?” revisited

The discovery that you can run Virtual Radar Server (VRS) on a Raspberry Pi prompted me to revise my What’s Up? flight radar for ESP32/8266 once again. Mainly interested in aircraft within range of my ADS-B receivers, I already preferred querying those receivers over using the adsbexchange API, especially after they removed most static metadata from their json response. That change had even forced me to set up and maintain my own database for looking up fields like aircraft model, type, operator, country etc.

However, querying Virtual Radar Server on one of my PiAware receivers lets VRS do these lookups from its own up-to-date web databases! Its enriched json response, based on my own receivers’ ADS-B and MLAT reception, contains more details than the current adsbexchange API. Besides, unlike adsbexchange, a local VRS doesn’t mind being queried once every second! With VRS as the primary source, I can always choose to find possibly filtered-out ‘sensitive’ aircraft by querying adsbexchange as well.
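Querying VRS from an ESP32 is plain http; a minimal fetch could look like this (host, port and filter values are examples):

#include <WiFi.h>
#include <HTTPClient.h>

// Fetch the aircraft list from a local Virtual Radar Server instance.
// The lat/lng parameters let VRS calculate distances; fDstU filters
// out aircraft beyond 40 km. All values here are examples.
String fetchAircraftList() {
  HTTPClient http;
  http.begin("http://192.168.1.50:8080/VirtualRadar/AircraftList.json"
             "?lat=52.0&lng=5.0&fDstU=40");
  String json = (http.GET() == HTTP_CODE_OK) ? http.getString() : "";
  http.end();
  return json;                  // enriched json, ready for parsing
}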

A quick prototype on a D32 Pro and a 2.4″ TFT display finally shows flicker-free icon movement, after applying some display techniques from earlier projects.

My smartphone doesn’t like filming TFT displays…

While simultaneously running the old adsbexchange version and this new one, I noticed that the new version showed more aircraft, especially nearby aircraft at low flight levels. This is surprising, since adsbexchange.com claims to provide unfiltered data, and they should be aware of those missing aircraft because I feed them!

Anticipating a future project (“Automatic Plane Spotter“), the new version also keeps track of the nearest aircraft’s position, expressed in azimuth and elevation. This will be used for driving the stepper motors of a pan-tilt camera mount that I’m currently building.
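At these distances, a flat-earth approximation is accurate enough for azimuth and elevation; the math in sketch form (my actual implementation may differ):

#include <math.h>

// Azimuth and elevation of an aircraft as seen from the observer,
// using a flat-earth approximation (fine within receiver range).
// Latitudes/longitudes in degrees, altitudes in metres.
void azEl(double obsLat, double obsLon, double obsAlt,
          double acLat,  double acLon,  double acAlt,
          double &azDeg, double &elDeg) {
  const double R = 6371000.0;                              // Earth radius [m]
  double north = (acLat - obsLat) * DEG_TO_RAD * R;
  double east  = (acLon - obsLon) * DEG_TO_RAD * R * cos(obsLat * DEG_TO_RAD);
  double dist  = sqrt(north * north + east * east);        // ground distance
  azDeg = fmod(atan2(east, north) * RAD_TO_DEG + 360.0, 360.0);
  elDeg = atan2(acAlt - obsAlt, dist) * RAD_TO_DEG;
}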

 

Stereoscopy on ESP32?

double trouble…?

I knew in advance that trying to make an ESP32 show moving stereoscopic images on a TFT display would be hard. But sometimes you just want to know how close you can get. Besides, it fits my lifelong interest in stereoscopy*.

I found suitable test material in the form of this animated gif (author unknown). Now I ‘only’ had to find a technique for hiding the left frames from the right eye, and vice versa.

 

The first technique that I tried used two of these Adafruit “LCD Light Valves“:

Source: Adafruit website

Together, they formed active shutter glasses for looking at a single TFT display. Looping over the original frames, the display alternately showed the left and right half of a frame, always closing the opposite eye’s valve. The speed of these shutters surprised me, but their accuracy proved insufficient. In order to make the left frame completely invisible to the right eye and vice versa, I had to build in short delays that led to flickering. Without those delays, there was too much ‘leakage’ from the opposite side, a recipe for headaches.

 

The only solution for the above problem was a physical separation of the left and right eye’s view (i.e. the classical View·Master© approach). Luckily, I had this 3D Virtual Reality Viewer for smartphones lying around. Instead of a smartphone, it can easily hold an ESP32, a battery and two TFT displays. The technical challenge of this method was to keep both displays in sync.

It’s easy for a single ESP32 to drive two identical TFT displays over SPI. The state of a display’s CS line decides whether it responds to the master or not. With CS for left pulled LOW and CS for right pulled HIGH, only the left display will respond, and vice versa. And when data needs to be sent to both displays, you simply pull both CS lines LOW.
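In code, that is hardly more than this (pin numbers are examples):

const int CS_LEFT  = 5;     // example pin numbers
const int CS_RIGHT = 17;

// LOW selects a display, HIGH deselects it. Pulling both CS lines LOW
// makes both displays listen to the same SPI data.
void selectDisplays(bool left, bool right) {
  digitalWrite(CS_LEFT,  left  ? LOW : HIGH);
  digitalWrite(CS_RIGHT, right ? LOW : HIGH);
}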

The results of this approach were disappointing. The time the SPI bus needs for switching between clients proved to be too long, resulting in even more flickering than with the shutter glasses technique.

 

As for now, I can see only one option left: using two ESP32 boards, each driving its own display. This will likely solve the flickering (this post shows that the ESP32 is fast enough), but now I’ll have to keep both ESP32 boards in sync.

Both boards are in the same housing and share a power supply, so the simplest, fastest and most accurate way to sync them seemed to be two cross-wired IO pins. Unfortunately, and for reasons still unknown to me, when I run the sketch below on both boards, they keep waiting for each other forever: each board always reads 1 on its inPin, even when the other board writes 0 to its outPin.
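In outline, the handshake works like this (a reconstruction of the approach with example pin numbers, not my exact listing):

const int outPin = 25;    // wired to the other board's inPin
const int inPin  = 26;    // wired to the other board's outPin

void setup() {
  pinMode(outPin, OUTPUT);
  digitalWrite(outPin, HIGH);            // HIGH = not ready yet
  pinMode(inPin, INPUT);
}

void loop() {
  digitalWrite(outPin, LOW);             // signal: ready for the next frame
  while (digitalRead(inPin) == HIGH) {}  // wait until the other board is too
  digitalWrite(outPin, HIGH);            // re-arm for the next round
  // both boards should arrive here (almost) simultaneously
  // and push their next frame to their own display
}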

 

Now that I’ve come closer to a 3D movie viewer than expected, it would be a shame if this apparently simple synchronization problem stopped me.

To be continued (tips are welcome)

 


 

*The first 3D image that I saw as a kid was a “color anaglyph” map of a Dutch coal mine. It came with cardboard glasses holding a red and a cyan colored lens. The map looked something like this:

 

There were also these picture pairs, forcing your eyes to combine short-distance focus with long-distance alignment. It gave me some headaches until I invented a simple trick: swap left and right and watch them cross-eyed…

 

My aunt once gave me an antique wooden predecessor of the View·Master, complete with dozens of black & white images of European capitals.

A newer ‘antique’ that I like for its ingenuity is Nintendo’s Virtual Boy with its rotating mirror technique. Instant headache, but so cool!

 

Somewhat eccentric newcomers were the “stereograms”. Fascinating how fast our brain manages to make 3D spaghetti out of their messy images, even if they move!

 

 

 

View·Master goes 360°

Working on a couple of new display techniques, I turned my Webcam·View·Master into this endlessly looping 360° Panorama viewer. The results were smoother than expected.

 

The video shows the Seiser Alm – Punta d’Oro “Panocam” on my M5Stack Fire. The sketch will run on any ESP32 board with PSRAM, and the TFT display can have any size, since the downloaded webcam footage will be resized to make its vertical dimension match the display’s shortest side (‘landscape mode’, which makes sense here 😉 )

The reason I used the M5Stack (with a 2″ display) for this video was its photogenic quality, but the Dolomite landscape definitely looks much better on my 2.2″ TTGO T4 v1.3 display. My smartphone’s camera is also partly responsible for the video’s pale colors.

Now that I’ve finished the prototype, the next step will be to use the buttons for selecting other panorama webcams and for pausing the pan movement.

Web images on TFT

“Bodmer vs Bodmer…”

 

Looking for a fast way to have an ESP32 download and show web images on a TFT display, I came across the very fast TJpg_Decoder library* on Github. It’s by Bodmer (the guy deserves a statue) and integrates nicely with his TFT_eSPI library.

One of the library’s examples shows how to download a jpg image to a file in SPIFFS, and then decode and render it to a 320×240 TFT display. Rendering time is impressive, but I managed to make it 10% faster by rendering from RAM instead of SPIFFS.

The sketch at the end of this post shows the essential steps. It pushes a 320×240 web image to the display 2x faster than my older sketches, which use the JPEGDecoder library (also by Bodmer!). Another difference from my earlier sketches is the use of the HTTPClient library, which takes away the hassle of dealing with http headers and chunked-encoded responses.
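Compressed into their bare essence, those steps look like this (WiFi credentials and image URL are placeholders; BUFFSIZE must be able to hold the complete jpg):

#include <WiFi.h>
#include <HTTPClient.h>
#include <TFT_eSPI.h>
#include <TJpg_Decoder.h>

#define BUFFSIZE 40000                 // must fit in RAM and hold the jpg
uint8_t jpgBuf[BUFFSIZE];

TFT_eSPI tft = TFT_eSPI();

// TJpg_Decoder calls this callback for every decoded tile.
bool tft_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t* bitmap) {
  tft.pushImage(x, y, w, h, bitmap);
  return true;
}

void setup() {
  Serial.begin(115200);
  WiFi.begin("yourSSID", "yourPassword");              // placeholders
  while (WiFi.status() != WL_CONNECTED) delay(100);

  tft.begin();
  tft.setRotation(1);
  TJpgDec.setJpgScale(1);              // 2, 4 or 8 would shrink the image
  TJpgDec.setSwapBytes(true);
  TJpgDec.setCallback(tft_output);

  HTTPClient http;
  http.begin("http://example.com/image320x240.jpg");   // placeholder URL
  uint32_t t = millis();
  int len = 0;
  if (http.GET() == HTTP_CODE_OK) {
    len = http.getSize();                              // needs Content-Length
    if (len <= 0 || len > BUFFSIZE) len = 0;           // jpg must fit in buffer
    WiFiClient* stream = http.getStreamPtr();
    int got = 0;
    while (got < len)
      got += stream->readBytes(jpgBuf + got, len - got);
  }
  http.end();
  Serial.println("[HTTP] connection closed or file end.");
  Serial.printf("Downloaded in %lu ms.\n", millis() - t);

  t = millis();
  TJpgDec.drawJpg(0, 0, jpgBuf, len);                  // decode + render from RAM
  Serial.printf("Decoding and rendering: %lu ms.\n", millis() - t);
}

void loop() {}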

By the way: after rendering the jpg file to the display, you can still copy it from the array to a file in SPIFFS or on SD card for later use. This only takes a single line of extra code.

There is something that I noticed while measuring this library’s speed on ESP32 boards with PSRAM. Sketches that were compiled with PSRAM Enabled rendered almost 15% slower than the same sketch with PSRAM disabled, even if PSRAM wasn’t used at all. This is not necessarily related to the library; it could also be a general PSRAM issue. Luckily, since jpg images for small TFT displays will likely fit inside RAM (even for many ESP8266 boards), there’s no need to enable PSRAM, unless your program needs it for other purposes. Make sure that BUFFSIZE fits in RAM and is large enough to hold your jpg files.

As for grabbing other image formats (gif, bmp, png…) and resizing: most hobbyists will have access to a (local or hosted) webserver with php. Thanks to very powerful php graphic libraries, only a few lines of php code can take care of converting any common image format to jpg and sending it (resized, rotated, cropped, sharpened, blurred or whatever) to your ESP for rendering. If that’s not an option, then this new library can still reduce the size of an image by a factor of 2, 4 or 8 by calling TJpgDec.setJpgScale() with the desired factor as parameter.

Below is an adapted and stripped version of Bodmer’s example, using an array instead of SPIFFS. Make sure to select the right setup for the TFT_eSPI library before compiling. Some boards also require you to explicitly switch on the backlight in your sketch.

 

Output on the Serial monitor will be something like:

[HTTP] connection closed or file end.
Downloaded in 188 ms.
Decoding and rendering: 125 ms.

Too slow for your eyes? By storing the decoded tiles in a (PS)RAM display buffer instead of pushing them to the display one by one, the entire image can then be pushed in 44 ms.
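With TFT_eSPI, such a buffer can be a full-screen sprite (320×240 at 16 bpp takes some 150 KB of heap); the idea in short:

TFT_eSPI tft = TFT_eSPI();
TFT_eSprite frame = TFT_eSprite(&tft);   // full-screen buffer in RAM

// Let the decoder render its tiles into the sprite instead of the display.
bool sprite_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t* bitmap) {
  frame.pushImage(x, y, w, h, bitmap);
  return true;
}

// In setup(): frame.createSprite(320, 240); TJpgDec.setCallback(sprite_output);
// After TJpgDec.drawJpg(...): frame.pushSprite(0, 0);   // one fast blit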

On an ESP32, you can speed up the entire process by using both cores. The library has an example that lets core 0 do the decoding. Whenever it finishes a tile (usually 16×16 pixels), core 1 will push it to the display in parallel. However, this will not beat the above display buffer method with regard to the visual part of the process.

*TJpg_Decoder uses the Tiny JPEG Decompressor engine by ChaN

 

 

3 * (esp32 + tft) = ?

 

ESP32 boards with a mounted TFT display can be so hard to resist that, in a moment of shameless weakness, I purchased three of them in a single order, without consulting any reviews…

 

So here’s a report of my first acquaintances with these new Chinese kids on my block.

 

  LilyGO TTGO T4 v1.3

This board with 8MB PSRAM, an SD card slot and a 2.2″ TFT display makes development of graphics oriented sketches very convenient.

I2C pins are available via a connector (cable included), but unfortunately the standard SPI pins are not broken out. On my board, and different from what some internet pictures suggest, it’s not GPIO25 but GPIO26 that is broken out (the labeling on the module is correct).

Compared with the LOLIN 2.4″ TFT shield, this display is a bit smaller (with the same resolution) and has no touch screen. Its 3 buttons allow for some basic feedback to the application. Build quality is excellent.

Driving the display was easy, but it took me some time to make it show images from an SD card. That’s because the board’s SD slot requires its own SPI bus on pins 2, 13, 14 & 15. The following code snippets show how I got it working.

In the main section:

 

In setup():

The above code also shows that this board requires you to explicitly switch the backlight on. This can be done by writing HIGH to TFT_BL, but I prefer the sigmaDelta method as shown, because it lets you control the display’s brightness.
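In essence, the two snippets above boil down to this (a compact reconstruction; the sigmaDelta calls are part of the ESP32 Arduino core):

#include <SPI.h>
#include <SD.h>
#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();
SPIClass sdSPI(HSPI);                 // separate SPI bus for the SD slot

void setup() {
  tft.begin();

  sdSPI.begin(14, 2, 15, 13);         // SCK, MISO, MOSI, SS
  if (!SD.begin(13, sdSPI)) {
    // no SD card found
  }

  sigmaDeltaSetup(0, 312500);         // channel 0, ~312 kHz
  sigmaDeltaAttachPin(TFT_BL, 0);     // TFT_BL comes from the TFT_eSPI setup
  sigmaDeltaWrite(0, 160);            // brightness 0..255
}

void loop() {}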

So far, my overall impression of this board and its manufacturer is positive. It would have been perfect if they had broken out the SPI pins. Price: €19.


  LilyGO TTGO T-Display v1.1

The 1.14″ display has 240×135 pixels. To be honest, font size 1 starts to become a bit of a visual challenge here, but I don’t intend to use this board for text anyway. Actually, I currently have no specific application in mind for it at all, but I’m sure I’ll find one.

Bodmer’s indispensable TFT_eSPI library supports its ST7789 chip, so I expected no less than turbo speed, especially with ‘only’ 32400 pixels to push. Although this ESP32 WROOM module has no PSRAM and less memory than the T4, there’s still enough memory available for double buffering the display.

Information on github is to the point and up to date, so in no time I ported my most recent Webcam·View·Master sketch to this mini display for testing the board.

Note that the real size of the display is only 2.5 x 1.4 cm!

As with the T4 board, TFT_BL has to be set HIGH in order to switch the backlight on.

Price: €9. Definitely a bargain for an ESP32 with such a cute display. And it even has a LiPo charging circuit and comes with header pins and a battery cable. Make sure you have a USB-C to USB-A cable, because one is not included.


M5Stack Core Gray

After my rather negative review of the M5Stack Fire, this purchase may be a bit surprising. Perhaps I secretly hoped that this Gray model wouldn’t have the same issues and annoyances, but that soon turned out to be false hope. Actually, it’s even noisier than the Fire, and my dacWrite(25, 0) trick doesn’t change that. It is, however, less sensitive to usb power issues during sketch uploads. No ‘brown out detections’ so far.

But then, ESP32 development boards don’t come any better looking than the M5Stack family. Once you can work around its electronic design flaws, the Gray makes a perfect housing for ESP32 gadgets, especially when combined with a matching standing base power supply, equipped with a DHT12 temperature and humidity sensor.

I found a DHT12 library and an example sketch for reading the DHT12 sensor here, but the example didn’t compile. Here’s a corrected version that displays temperature and humidity when an M5Stack Core model is put on the standing base.
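For reference, the sensor is simple enough to read with plain Wire calls as well; a minimal version, based on the DHT12 datasheet (I2C address 0x5C, five data bytes):

#include <Wire.h>

// Read temperature and humidity from the DHT12 at I2C address 0x5C.
// Byte layout: humidity int, humidity frac, temperature int,
// temperature frac (bit 7 = sign), checksum. Call Wire.begin() first.
bool readDHT12(float &temperature, float &humidity) {
  Wire.beginTransmission(0x5C);
  Wire.write(0);                                  // start at register 0
  if (Wire.endTransmission() != 0) return false;
  if (Wire.requestFrom(0x5C, 5) != 5) return false;
  uint8_t b[5];
  for (int i = 0; i < 5; i++) b[i] = Wire.read();
  if (((b[0] + b[1] + b[2] + b[3]) & 0xFF) != b[4]) return false;   // checksum
  humidity    = b[0] + b[1] * 0.1;
  temperature = b[2] + (b[3] & 0x7F) * 0.1;
  if (b[3] & 0x80) temperature = -temperature;    // negative temperatures
  return true;
}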

 

Compared with the Fire, the Gray has no Neopixel led bars and no PSRAM, making three extra GPIO pins available for other purposes, including GPIO16 and GPIO17 for UART communication. Its battery is only 150 mAh, but I don’t mind, because I couldn’t resist that standing base either. Prices: €30 for the Gray and €7.50 for the standing base.


Epilogue

The two LilyGo boards were my first from this manufacturer (is Lily Zhong the Chinese LadyAda?). I like both displays, so they will not be the last TTGOs.

As for the M5Stack ecosystem: now that I have everything I need for making great looking gadgets, I don’t think I’ll invest in it any further. Maybe they should ask some seasoned analog nerd to improve their electronic design with a couple of dirt cheap components.

 

“Alpenblick”

 

Currently ‘lockdown-ed’ at sea level, my thoughts often wander to the majestic heights of my beloved Alps. After visiting one of those Alpine webcam sites, I wondered if I could turn my M5Stack Fire into a handheld Alpenpanorama. Or, to reverse Francis Bacon’s words: if you can’t go to the mountain, then the mountain must come to you…

This project was a bit more challenging than expected, as showing JPEG-encoded web images on a 320×240 TFT display requires extracting, decoding and resizing binary data from a chunked-encoded http response. After two days of bit-wrestling, here’s the result:

The time interval between images was reduced for this video. Music: “Schwärzen-Polka”

At startup, the program reads a list of 60+ webcam links from my web server. This enables me to change the displayed webcams without updating the software. Once running, the middle button pauses/resumes the loop, the left button removes the currently displayed webcam from the loop and the right button restores the original webcam list.

Because these live photo webcams refresh their pictures every 10 minutes, this gadget continuously gives me an up-to-date view of places where I would so much rather be…

P.S. I recommend not using the official M5Stack library. In this project, decoding jpeg images with Bodmer’s fast JPEGDecoder library became more than 3 times faster after I replaced M5Stack (which has TFT_eSPI built in) with native TFT_eSPI. This could very well be caused by their library’s interrupt-based button functions, which are also notorious for causing WiFi crashes.

 

 

Inspiring Chaos

I wonder why I didn’t discover Jason Rampe’s Softology blog much earlier, as it deals with most of the math-related topics of my own blog (and many more). His expertise and visualizations, as well as his software application Visions of Chaos, live at the opposite end of the spectrum compared to my modest tinkering. As a result, most of his visualizations are far beyond the specs of a microcontroller and a small display, but I discovered a few mathematical models that I simply had to try right away:

 

  • Diffusion-limited Aggregation

“Diffusion-limited aggregation creates branched and coral like structures by the process of randomly moving particles that touch and stick to existing stationary particles.” – [Softology blog].

After coding a 2D version for my M5Stack Fire (ESP32), I was surprised to see how this simple principle generates such nice structures. The process starts with a single stationary (green) particle in the center. Then, one by one, starting from a randomly chosen point on the border, every next particle takes a random walk inside the circle until it hits a stationary particle, to which it sticks, becoming a stationary (green) particle itself.
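Stripped of all display code, the core of the algorithm fits in a few lines (grid size and launch radius are examples; seed the center cell before adding particles):

const int W = 200, H = 200, R = 95;   // grid size and launch-circle radius
uint8_t grid[W * H];                  // 1 = stationary (green) particle
                                      // seed once: grid[(H/2)*W + W/2] = 1;

bool nextToStuck(int x, int y) {      // stationary particle in the 3x3 area?
  for (int dy = -1; dy <= 1; dy++)
    for (int dx = -1; dx <= 1; dx++)
      if (grid[(y + dy) * W + (x + dx)]) return true;
  return false;
}

void addParticle() {
  float a = random(6283) / 1000.0;    // random angle on the launch circle
  int x = W / 2 + R * cos(a), y = H / 2 + R * sin(a);
  while (true) {                      // random walk until it sticks
    x += random(3) - 1;  y += random(3) - 1;
    int dx = x - W / 2,  dy = y - H / 2;
    if (dx * dx + dy * dy >= R * R) return;   // walked out: discard particle
    if (nextToStuck(x, y)) { grid[y * W + x] = 1; return; }
  }
}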

 

  • Reaction-Diffusion model

It was Alan Turing who came up with a model to explain pattern formation in the skin of animals like zebras and tigers (morphogenesis). It’s based on the reaction of two chemical substances within cells and the diffusion of those chemicals across neighbouring cells. Certain diffusion rates and combinations of other parameters can produce stable Turing Patterns.

The above (Wikipedia) picture shows the skin of a Giant pufferfish. The pictures below show some first results of my attempts with different settings on a 200×200 pixel grid.

From left to right: ‘curved stripes’ settings (C), ‘dots’ settings (D), 50% C + 50% D, 67% C + 33% D

There’s still plenty for me to experiment with here. The algorithm that I wrote is strikingly simple and fully parameterized, but also quite heavy on the ESP32…
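For the record: the usual formulation behind these patterns is the Gray-Scott model, a two-grid update; a sketch of one step (my own parameterization may differ):

// One update step of a Gray-Scott style reaction-diffusion model on a
// W x H grid of chemicals A and B. The four grids take ~640 KB, so
// allocate them with ps_malloc() on a PSRAM board like the M5Stack Fire,
// e.g. in setup(): A = (float*)ps_malloc(W * H * sizeof(float)); etc.
const int W = 200, H = 200;
float *A, *B, *An, *Bn;

float lap(float *g, int x, int y) {   // 3x3 Laplacian kernel
  return -g[y*W+x]
         + 0.2  * (g[y*W+x-1] + g[y*W+x+1] + g[(y-1)*W+x] + g[(y+1)*W+x])
         + 0.05 * (g[(y-1)*W+x-1] + g[(y-1)*W+x+1]
                 + g[(y+1)*W+x-1] + g[(y+1)*W+x+1]);
}

// dA, dB: diffusion rates; f: feed rate; k: kill rate.
// Typical 'dots' parameters: dA=1.0, dB=0.5, f=0.035, k=0.065.
void step(float dA, float dB, float f, float k) {
  for (int y = 1; y < H - 1; y++)
    for (int x = 1; x < W - 1; x++) {
      float a = A[y*W+x], b = B[y*W+x], abb = a * b * b;
      An[y*W+x] = a + dA * lap(A, x, y) - abb + f * (1.0 - a);
      Bn[y*W+x] = b + dB * lap(B, x, y) + abb - (k + f) * b;
    }
  memcpy(A, An, W * H * sizeof(float));
  memcpy(B, Bn, W * H * sizeof(float));
}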

 

  • Mitchell Gravity Set Fractals

A simulation of particles following Newton’s law of gravity. This simple example has six equal masses placed at the corners of a regular hexagon. Particles (pixels) are colored according to the time they need to cross an escape range and disappear into ‘space’.
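The per-pixel escape-time loop, in sketch form (constants are illustrative):

#include <math.h>

// Six equal masses on the corners of a regular hexagon. Each pixel is a
// particle starting at rest; its color index is the number of steps it
// needs to leave the escape radius.
const int   N = 6;
const float G = 1.0, DT = 0.01, ESCAPE = 4.0;
float mx[N], my[N];                        // hexagon corner positions

void initMasses() {
  for (int i = 0; i < N; i++) {
    mx[i] = cos(i * PI / 3.0);
    my[i] = sin(i * PI / 3.0);
  }
}

int escapeTime(float x, float y, int maxSteps) {
  float vx = 0, vy = 0;
  for (int s = 0; s < maxSteps; s++) {
    for (int i = 0; i < N; i++) {          // sum gravitational accelerations
      float dx = mx[i] - x, dy = my[i] - y;
      float d2 = dx * dx + dy * dy + 0.001;     // softened to avoid div by 0
      float a  = G / (d2 * sqrt(d2));           // G*m/d^2 along (dx,dy)/d
      vx += a * dx * DT;  vy += a * dy * DT;
    }
    x += vx * DT;  y += vy * DT;
    if (x * x + y * y > ESCAPE * ESCAPE) return s;   // escaped: color by s
  }
  return maxSteps;                                    // never escaped
}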

 

  • Agent-based models

After coding my own variant of the Foraging Ant Colony example from the Softology blog, I’m currently working on a couple of obstacle escape mechanisms. Coming soon.