Stereoscopic video camera

After I had turned my antique stereoscope into a 3D animated GIF viewer, I began looking for ways to create my own 3D footage, preferably not limited to animated GIFs. I ended up building a stereoscopic camera from two ESP32 Camera boards. It records 3D videos (or stereoscopic pictures) on SD cards in a format that can be (dis)played by the stereoscope’s two TTGO T4 boards. The viewer can download recordings from the cameras over WiFi.

DIY stereoscopic (3D) video & photo camera with two ESP32-CAM boards

Building a prototype was quite simple because I could use code and techniques from my previous projects. Experiments with a TTGO T-Camera Plus board showed that 176×144 px frames (FRAMESIZE_QCIF) could be written to the on-board (SPI) SD slot at about 8 fps. This looked like the maximum size for this board, basically limited by its SD write speed. But then I realized that AI Thinker ESP32-CAM boards have a faster SD_MMC SD slot…

Over the SD_MMC bus in 4-bit mode* (about 2x faster than SPI), ESP32-CAM boards can write 240×176 frames (HQVGA) to the SD card at an acceptable frame rate > 9 fps. My benchmark tests showed that the (more consistent) 1-bit mode is only slightly slower. Moreover, this mode lets you control the flashlight via GPIO 4.
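For reference, selecting 1-bit mode is just a matter of the second argument to SD_MMC.begin(). A minimal sketch (the mount point is the library's default; GPIO 4 drives the flash LED on AI Thinker boards):

```cpp
#include "SD_MMC.h"

#define FLASH_LED_PIN 4  // free for GPIO use in 1-bit mode

void setup() {
  Serial.begin(115200);
  // The second argument selects 1-bit mode, so GPIO 4 is no longer
  // claimed as SD data line D1 and can drive the flashlight instead.
  if (!SD_MMC.begin("/sdcard", true)) {
    Serial.println("SD_MMC mount failed");
    return;
  }
  pinMode(FLASH_LED_PIN, OUTPUT);
  digitalWrite(FLASH_LED_PIN, LOW);  // keep the flashlight off while recording
}

void loop() {}
```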

The exposed GPIO pins of ESP32-CAM boards can be used for synchronizing the frame shots taken by both cameras, using the technique that I had developed for the 3D viewer.

Fritzing schematic: the cross-wired Rx and Tx pins are used for frame synchronization

When it comes to reading recorded frames from the SD card, the TTGO T4 boards in the viewer can easily achieve higher frame rates than the cameras, so in order to play my own recordings in real time, I had to include short delays after each frame.
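The pacing itself is trivial. A minimal sketch, assuming a frame period of about 110 ms to match the ~9 fps recording rate (drawFrame() is a hypothetical placeholder for reading and displaying one frame):

```cpp
const uint32_t FRAME_MS = 110;  // ~9 fps, the assumed recording rate

void drawFrame() {
  // hypothetical: read the next frame from SD and push it to the TFT
}

void setup() {}

void loop() {
  uint32_t start = millis();
  drawFrame();
  uint32_t spent = millis() - start;
  if (spent < FRAME_MS) delay(FRAME_MS - spent);  // pad to the target frame period
}
```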

So for now, my viewer shows 3D movies in 240×176 format (maximum duration is determined by the SD card’s free space), while pictures are displayed full screen.

 

*A well-known consequence of accessing the SD slot of ESP32-CAM modules in 4-bit mode is that the flashlight will flash during SD transactions, as the LED is hard-wired to GPIO 4, which serves as one of the four SD data lines. That’s why I prefer the only slightly slower 1-bit mode, which is still considerably faster than SPI.

 

Display 3D animated GIFs

Stereoscopy without a headache…

My earlier Stereoscopy project had been put on hold for a while because the two ESP32 boards for driving its TFT displays refused to properly synchronize. Simplifying the sync technique (dropping interrupts) and shortening some wires finally did the trick.

TTGO T4 boards looping in sync over left and right frame halves of a 3D animated GIF*

The video also shows a manual restart of the boards to prove that synchronization doesn’t depend on a simultaneous start. After the first one has finished downloading its frames, it will wait until the second one has finished downloading the opposite side’s frames as well. Then they will start looping over corresponding left and right frames that are stored in PSRAM, staying perfectly synchronized as long as both sketches run.

While synthesizing a moving 3D image from a sequence of frame pairs, our brain responds to even the slightest synchronization error with a headache. That’s why one cannot rely on the ESP32’s internal clocks or NTP syncs. So I used a simple ‘per-frame handshake’ mechanism over two cross-wired GPIO pins. Despite the poor video quality and imperfect alignment of the displays, you can actually see the 3D effect in the above video (using the cross-eyed technique). In reality it looks much better!
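In code, the handshake boils down to something like this minimal sketch (the pin numbers are just for illustration, and the actual sketch may differ in details):

```cpp
const int outPin = 26;  // wired to the other board's inPin (example pins)
const int inPin  = 27;  // wired to the other board's outPin

void setup() {
  pinMode(outPin, OUTPUT);
  digitalWrite(outPin, LOW);
  pinMode(inPin, INPUT);
}

// Two-phase handshake: signal 'ready', wait for the partner, then
// acknowledge, so neither board can run a frame ahead of the other.
void waitForPartner() {
  digitalWrite(outPin, HIGH);            // I'm ready for the next frame
  while (digitalRead(inPin) == LOW) {}   // wait until you are too
  digitalWrite(outPin, LOW);             // seen it
  while (digitalRead(inPin) == HIGH) {}  // wait for your confirmation
}

void loop() {
  // drawNextFrame();  // hypothetical: show this side's next frame from PSRAM
  waitForPartner();    // both boards leave this call (nearly) simultaneously
}
```

Since both boards run the same code, the slower one automatically sets the pace.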

The TTGO T4 boards fit nicely inside an old 3D viewer, using the buttons for selecting animated GIFs from my webserver. And although this is mainly a gadget for stereoscopy aficionados**, I’m sure that its synchronization method can be useful for future projects.

* GIF found on http://www.f-lohmueller.de/pov_tut/stereo/stereo_400e.htm

** I’m in famous company: Queen guitarist Brian May runs the London Stereoscopic Company, where you can buy retro-style stereo cards and viewers. Obviously a labor of love!

Self-Hosted GPS Tracker

With my T-Beam GPS Tracker operating fine during car trips, I finally have a secure replacement for this very old GPS Tracker App on my nine-year-old HTC Desire HD*.

The TTGO T-Beam came in this box. I only had to drill a small hole for the antenna

 

The concept of the old Android app (meanwhile removed from both the app store and GitHub by its creator**) is very simple: it periodically sends your phone’s GPS readings as HTTP GET parameters to a self-hosted endpoint of your choice. Since TheThingsNetwork‘s HTTP Integration works more or less the same, I could simply reuse the PHP backend that I once wrote for the app, now for showing my LoRaWAN tracker’s location on a map.

(zoomed out for privacy reasons)

 

Data flow is as follows:

  • The tracker periodically sends GPS readings to TheThingsNetwork. The sketch makes the interval between consecutive send commands depend on the last GPS speed reading (see the sketch after this list)
  • The HTTP Integration service sends the LoRaWAN payload + metadata in an HTTP POST variable to a PHP script (dump.php) on my server (see below)
  • dump.php appends the entire POST value to a log file (log.txt) and writes a formatted selection of relevant data to a text file (last_reading.txt, always overwritten)
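The speed-dependent interval from the first bullet could look like this sketch (the thresholds and intervals are illustrative, not the exact values I use):

```cpp
// Pause between LoRaWAN uplinks, based on the last GPS speed reading.
// Faster movement means more frequent position updates.
uint32_t txIntervalSeconds(float speedKmh) {
  if (speedKmh > 80.0f) return 30;   // highway: twice a minute
  if (speedKmh > 20.0f) return 60;   // city traffic
  if (speedKmh > 5.0f)  return 120;  // walking pace
  return 300;                        // (almost) stationary
}
```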

The map is an embedded OpenStreetMap ‘Leaflet’ inside an HTML file, with some simple JavaScript periodically reading the formatted data from last_reading.txt. After extracting the individual GPS readings and a timestamp in the background, these values are used to update the tracker’s location on the map and to refresh the textual info.

Data in the logfile can, after conversion if necessary, be used for analyzing and drawing the tracker’s route. There are several standard solutions for this, even online.

 

*Running non-stop on its original battery and without any problems since March 2011. I ❤ HTC!

**Hervé Renault, who suggests a modern alternative on his personal website (in French)


My dump.php file (it presumes the fields latitude, longitude, speed and sats in the TTN payload):

TTGO T-Beam

joining TheThingsNetwork

LoRaWAN and TheThingsNetwork are quite popular in my country, although they may have lost part of their momentum lately*. With no particular use case in mind, it was mainly curiosity that made me purchase a TTGO T-Beam board and join TheThingsNetwork.

The board comes in a box with two antennas (LoRa and GPS) and header pins

My T-Beam (T22 V 1.1) is basically an ESP32 WROVER module with an onboard SX1276 LoRa radio, a u-blox NEO-M8N GPS module, an 18650 battery holder and an AXP192 power management chip. The manufacturer’s documentation is a bit confusing, being divided over two apparently official GitHub repositories: https://github.com/LilyGO/TTGO-T-Beam and https://github.com/Xinyuan-LilyGO/LilyGO-T-Beam. Also, board versions up to 0.7 are significantly different from later versions.

First, I successfully tested the GPS receiver of my version 1.1 board with this sketch. Its code shows the most important differences between version 1.x boards and previous versions (like 0.7). The TX and RX pins for the GPS receiver are now 34 and 12 instead of 12 and 15. Furthermore, the newer boards have a power management chip (AXP192) that lets you switch power to individual board components. This requires including the axp20x library, as well as code that explicitly powers the components you use. I recommend taking a look at the examples from that library.
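On a v1.x board, the bare minimum to get the GPS talking looks something like this sketch (the rail-to-component mapping follows the axp20x examples for the T-Beam):

```cpp
#include <Wire.h>
#include <axp20x.h>

AXP20X_Class axp;

void setup() {
  Serial.begin(115200);
  Wire.begin(21, 22);  // I2C pins of the AXP192
  if (axp.begin(Wire, AXP192_SLAVE_ADDRESS) != AXP_PASS) {
    Serial.println("AXP192 not found");
  }
  axp.setPowerOutPut(AXP192_LDO2, AXP202_ON);   // LoRa radio
  axp.setPowerOutPut(AXP192_LDO3, AXP202_ON);   // GPS module
  axp.setPowerOutPut(AXP192_DCDC1, AXP202_ON);  // 3.3V 'Vcc' pins / OLED

  // GPS serial on a v1.x board: RX = GPIO 34, TX = GPIO 12
  Serial1.begin(9600, SERIAL_8N1, 34, 12);
}

void loop() {
  while (Serial1.available()) Serial.write(Serial1.read());  // echo raw NMEA
}
```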

Testing the T-Beam’s LoRa radio either requires a second LoRa board (which I don’t have), or making ‘The Thing’ talk to TheThingsNetwork. I went for the TTN option, obviously. And with a GPS on board, a GPS tracker was a logical choice for my first LoRaWAN sketch.

After creating an account on the TTN website, I had to register an ‘Application’ and a ‘Device’, as well as provide a Payload Format Decode function**. Along the way, the system generated some identification codes: Application EUI, Device EUI and Application Key, needed for the OTAA Activation Method that I selected.

Then I ran the sketch below, which I compiled from several sources. After a minute or so, the Serial monitor reported a GPS fix, continued with “EV_JOINING”, and … that was it. Apart from a faulty device or a software issue, I also had to consider the possibility that my T-Beam was not within range of a TTN gateway. Hard to debug, but I was lucky.

TheThingsNetwork Console pages show Application Key and EUIs in hex format. Clicking the <> icon in front of a value will show it in C-style format and then the icon next to it lets you toggle between msb and lsb. It turned out that my sketch expected both EUIs to be in lsb format and the Application Key in msb format. I had used msb for all three of them!
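For LMIC-based sketches like mine, the convention is: both EUIs little-endian (lsb), the Application Key big-endian (msb). In device_config.h terms (dummy values shown):

```cpp
#include <lmic.h>

// OTAA secrets, copied from the TTN Console.
// APPEUI and DEVEUI in lsb format (use the Console's byte-order toggle!),
// APPKEY in msb format: exactly the mix that tripped me up.
static const u1_t PROGMEM APPEUI[8]  = { 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00 };
static const u1_t PROGMEM DEVEUI[8]  = { 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00 };
static const u1_t PROGMEM APPKEY[16] = { 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
                                         0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00 };

// LMIC fetches the keys through these callbacks:
void os_getArtEui(u1_t* buf) { memcpy_P(buf, APPEUI, 8); }
void os_getDevEui(u1_t* buf) { memcpy_P(buf, DEVEUI, 8); }
void os_getDevKey(u1_t* buf) { memcpy_P(buf, APPKEY, 16); }
```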

After correcting the EUI values in device_config.h, the “EV_JOINING” in the Serial monitor was followed by “EV_JOINED” and “Good night…”, indicating that the device had been seen by a TTN gateway and gone into deep sleep mode. From that moment on, its payload messages, sent every two minutes (as set in my sketch), appeared at the Data tab of my application’s TTN Console. Looks like my T-Beam is a TTN node!

In order to do something useful with your uploaded data, the TTN Console offers several ‘Integrations’. For my GPS tracker I first tried TTN Mapper, making my GPS readings contribute to a coverage map of TTN gateways. It also lets you download your data in CSV format. However, I haven’t seen my readings on their map so far, perhaps because my signal was picked up by a gateway with an unspecified location. So I switched to HTTP Integration in order to have all readings sent to an endpoint on my PHP/MySQL server.

My next steps will be testing coverage and reception of the T-Beam during car trips, as well as trying some other integrations. Should that raise my enthusiasm for TheThingsNetwork enough, I might even consider running my own TTN gateway in order to improve LoRaWAN coverage in my area.

In summary, making my Thing join the Network wasn’t just plug & play, so I hope this post and the below mixture of mainly other people’s code will be of help to TTN starters.

 

* based on (the lack of) recent activity of TTN communities in my area.


Code for a T-Beam v 1.x (with AXP192 power control chip), compiled from several sources. It sends latitude, longitude, altitude, hdop and number of satellites to TTN.

GPS-Mapper.ino

 

gps.h

 

gps.cpp

 

device_config.h (your OTAA secrets can be copied from the TTN Console; note msb/lsb!)

 

**Payload Format decoder (JavaScript to be entered at the Payload Formats tab of the TTN Console; it reverses the encoding performed by the function buildPacket() in gps.cpp)

 

Where ISS…?

 

Tracking the International Space Station (ISS)

Borrowing most of its code from my What’s Up? air traffic monitor, this small project uses publicly available live data for showing current position, altitude and velocity of the International Space Station on a small TFT display. The original version draws the ISS and its trail over an ‘equirectangular’ map of the Earth, also showing the actual daylight curve and current positions of the Sun and the Moon.

The video below shows the ESP32 variant, but with a couple of small adaptations, the sketch will also run on an ESP8266. As usual, my camera does little justice to the TFT display…

ISS tracker on the 2.2″ display of a TTGO T4 (ESP32)

Position and data are refreshed every 5 seconds, during which time the ISS has travelled almost 40 km (it moves at roughly 7.7 km/s)! The background image also needs to be updated every 270 seconds – the time in which the daylight curve will have moved one pixel to the left over a 320-pixel-wide equirectangular Earth map (86,400 seconds per day divided by 320 pixels). I used the station’s previous and current position for calculating the icon’s rotation angle. This is just to indicate the station’s heading, and doesn’t refer to its actual orientation in space, of course.
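The heading calculation is essentially one atan2 call. A minimal sketch (using Arduino’s radians()/degrees() macros; the actual code may differ):

```cpp
// Icon rotation in degrees, clockwise from north, derived from the
// previous and current position. The cosine corrects for the fact that
// a degree of longitude shrinks towards the poles.
float headingDegrees(float lat1, float lon1, float lat2, float lon2) {
  float dEast  = (lon2 - lon1) * cosf(radians((lat1 + lat2) / 2));
  float dNorth = lat2 - lat1;
  return degrees(atan2f(dEast, dNorth));
}
```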

The newer version below takes a different approach by keeping the ISS in the center over a rotating globe. In reality, the display looks much better than in this video.

 

I also made a third version that keeps the ISS icon in the center of a moving Google map with adjustable zoom level, but that requires a Google Maps API key.

This project seems very suitable for educational purposes. With just an ESP board and a TFT display, students can quickly become familiar with C++ coding and some essential maker techniques like Internet communication, JSON parsing and image processing. As a side effect, they will also learn something about Earth movement, cartography, stereometry, Newton’s laws and space research.


Here’s the code for the equirectangular map version:

 

Content of included sprite.h file:

 

Stereoscopy on ESP32?

double trouble…?

I knew in advance that trying to make an ESP32 show moving stereoscopic images on a TFT display would be hard. But sometimes you just want to know how close you can get. Besides, it fits my lifelong interest in stereoscopy*.

I found suitable test material in the form of this animated GIF (author unknown). Now I ‘only’ had to find a technique for hiding the left frames from the right eye, and vice versa.

 

The first technique that I tried used two of these Adafruit “LCD Light Valves“:

Source: Adafruit website

Together, they formed active shutter glasses for looking at a single TFT display. Looping over the original frames, the display alternately showed the left and right half of a frame, always closing the opposite eye’s valve. The speed of these shutters surprised me, but their accuracy proved insufficient. In order to make the left frame completely invisible to the right eye and vice versa, I had to build in short delays that led to flickering. Without those delays, there was too much ‘leakage’ from the opposite side, a recipe for headaches.

 

The only solution for the above problem was a physical separation of the left and right eye’s field of view (i.e. the classic View·Master© approach). Luckily, I had this 3D Virtual Reality Viewer for smartphones lying around. Instead of a smartphone, it can easily hold an ESP32, a battery and two TFT displays. The technical challenge of this method was to keep both displays in sync.

It’s easy for a single ESP32 to drive two identical TFT displays over SPI. The state of a display’s CS line decides whether it responds to the master or not. With CS for left pulled LOW and CS for right pulled HIGH, only the left display will respond, and vice versa. And when data needs to be sent to both displays, you simply pull both CS lines LOW.
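In code, the whole trick looks something like this (pin numbers are examples):

```cpp
const int CS_LEFT  = 5;   // chip select pins (example numbers)
const int CS_RIGHT = 17;

// CS is active LOW: a display only listens while its CS line is LOW.
void selectDisplays(bool left, bool right) {
  digitalWrite(CS_LEFT,  left  ? LOW : HIGH);
  digitalWrite(CS_RIGHT, right ? LOW : HIGH);
}

// selectDisplays(true, false);  // following SPI writes go to the left display
// selectDisplays(true, true);   // both displays receive the same data
```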

The results of this approach were disappointing. Unfortunately, the time the SPI bus needs for switching between clients proved to be too long, resulting in even more flickering than with the shutter glasses technique.

 

For now, I can see only one option left: using two ESP32 boards, each driving its own display. This will likely solve the flickering (this post shows that the ESP32 is fast enough), but now I’ll have to keep both ESP32 boards in sync.

Both boards are in the same housing and share a power supply, so the simplest, fastest and most accurate way to sync them seemed to be over two cross-wired IO pins. Unfortunately, and for reasons still unknown to me, when I run the sketch below on both boards, they keep waiting for each other forever: each always reads 1 on its inPin, even when the other board writes 0 to its outPin.

 

Now that I’ve come closer to a 3D movie viewer than expected, it would be a shame if this seemingly simple synchronization problem stopped me.

To be continued (tips are welcome)… [UPDATE] Problem solved, see this post.

 


 

*The first 3D image that I saw as a kid was a “color anaglyph” map of a Dutch coal mine. It came with cardboard glasses holding a red and a cyan colored lens. The map looked something like this:

 

There were also these picture pairs, forcing your eyes to combine short-distance focus with long-distance alignment. It gave me some headaches until I invented a simple trick: swap left and right and watch them cross-eyed…

 

My aunt once gave me an antique wooden predecessor of the View·Master, complete with dozens of black & white images of European capitals.

A newer ‘antique’ that I like for its ingenuity is Nintendo’s Virtual Boy with its rotating mirror technique. Instant headache, but so cool!

 

Somewhat eccentric newcomers were the “stereograms”. Fascinating how fast our brain manages to make 3D spaghetti out of their messy images, even if they move!

 

 

 

Chips from the lathe

This first post of 2020 just shows some videos of mini projects from the past 3 months.

Here’s an improvement (I hope) of my previous attempt to simulate fire on a small TFT display. I’ve added a glowing-particles effect, did some parameter fine-tuning and changed the color palette. The simulated area forms a layer over a JPG image, read from the SD card.

Alas, my phone’s camera misses most of the improvements…

 

The next video shows the tessellation process of a 2D area according to the Majority Voting principle. In the initial state, every pixel randomly gets one out of three* possible colors. Then, in each (discrete and simultaneous) next step, every pixel takes the color of the majority of its ‘neighbours’. It’s no surprise that the chosen definition of neighbourhood has great influence on this self-organizing process and its final state.

* The classic example uses 2 colors, but I chose to have 3 for a more fashionable (?) look.
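The update rule itself is tiny. Here’s a minimal sketch of one synchronous step; the 8-cell Moore neighbourhood and the wrap-around edges are just one possible choice, and as said, other definitions give quite different results:

```cpp
#include <stdint.h>
#include <string.h>

const int W = 64, H = 48, COLORS = 3;
uint8_t grid[H][W], nextGrid[H][W];

// One discrete, simultaneous step: every cell takes the color that is
// in the majority among its 8 neighbours; ties involving its current
// color keep that color.
void step() {
  for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
      int count[COLORS] = {0};
      for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
          if (!dx && !dy) continue;                         // neighbours only
          int ny = (y + dy + H) % H, nx = (x + dx + W) % W; // wrap around
          count[grid[ny][nx]]++;
        }
      uint8_t best = grid[y][x];
      for (uint8_t c = 0; c < COLORS; c++)
        if (count[c] > count[best]) best = c;
      nextGrid[y][x] = best;
    }
  }
  memcpy(grid, nextGrid, sizeof(grid));
}
```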
 

 

Finally, meet Perrier Loop, the most complex self-replicating cellular automaton that I managed to squeeze into my cellular automata framework for ESP32 (so far). Grid cells can be in one of 64 states (colors). Their state transitions are governed by 637 rules that use 16 variables (placeholders for up to 7 states, so the real number of rules is much larger). Each cell complies with this exact same set of rules to determine its next state, based on its current state and that of its 4 orthogonal neighbours (von Neumann neighbourhood).

In mathematical terminology, Perrier Loop is a self-replicating universal Turing machine.

And it looks so simple on this 320×240 display….

Unripe (Ada)fruit?

I’ve always been an Adafruit fan, gladly supporting their open source contributions by buying their products. But recently, I was very disappointed by the bad performance of their TCA9548A 1-to-8 I2C Expander.

I had bought it to be able to drive two identical SH1106 OLED displays without having to solder on one of the displays in order to give it a different I2C address.

First I ran an I2C scanner, and the result looked promising: all connected I2C devices were detected with their correct I2C address. But then, driving a single display only worked if it was connected to channel 0 of the expander. And even then, the display would often show errors.

Then I tried a DS3231 Real Time Clock, because there’s very little I2C traffic involved in reading the time from this simple and reliable chip. Even when connected to channel 0, setting the clock didn’t work well and readings of a correctly set clock were mostly messed up. Since this expander seemed unable to reliably drive a single device, there was no sense in trying to connect multiple devices.

[UPDATE 15-01-2020] Quite unexpectedly, I found the *solution* on an Adafruit forum! The multiple-sensor example sketch in Adafruit’s tutorial for this product misses an essential line. After putting the line Wire.begin() in setup(), that is: before the first call of tcaselect(), the module works fine! I wonder why they don’t correct this in their tutorial.
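For reference, here’s the channel-select helper from that tutorial with the missing line added in setup():

```cpp
#include <Wire.h>

#define TCAADDR 0x70  // default I2C address of the TCA9548A

// Route the I2C bus to one of the expander's 8 downstream channels
void tcaselect(uint8_t channel) {
  if (channel > 7) return;
  Wire.beginTransmission(TCAADDR);
  Wire.write(1 << channel);
  Wire.endTransmission();
}

void setup() {
  Wire.begin();  // the essential line: must precede the first tcaselect() call
  tcaselect(0);
  // ... initialize the device on channel 0 here ...
}

void loop() {}
```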

 

Tic-Tac-Toe trainer

Turning an Arduino into an invincible Tic-Tac-Toe master is hardly a challenge, but I wrote a sketch for it anyway because I need it for a more ambitious project. For that project, I also had to design a couple of flawed Tic-Tac-Toe strategies with different levels of ‘vincibility’.
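The classic route to invincibility is plain minimax, which a 3×3 board makes cheap enough for an Arduino. A minimal sketch of the core idea (not my exact implementation):

```cpp
#include <stdint.h>

int8_t board[9];  // 0 = empty, 1 = computer, 2 = human

const uint8_t LINES[8][3] = {
  {0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}
};

int8_t winner() {  // 1 or 2 if someone has three in a row, else 0
  for (auto &l : LINES)
    if (board[l[0]] && board[l[0]] == board[l[1]] && board[l[1]] == board[l[2]])
      return board[l[0]];
  return 0;
}

// Negamax: +1 if 'player' (to move) can force a win, -1 if it loses, 0 for a draw.
int8_t minimax(int8_t player) {
  int8_t w = winner();
  if (w) return (w == player) ? 1 : -1;  // the previous move may have ended the game
  int8_t best = -2;
  bool moved = false;
  for (uint8_t i = 0; i < 9; i++) {
    if (board[i]) continue;
    moved = true;
    board[i] = player;
    int8_t score = -minimax(3 - player);  // the opponent's best is our worst
    board[i] = 0;
    if (score > best) best = score;
  }
  return moved ? best : 0;  // board full: draw
}

uint8_t bestMove(int8_t player) {  // index of an optimal move for 'player'
  int8_t best = -2;
  uint8_t move = 0;
  for (uint8_t i = 0; i < 9; i++) {
    if (board[i]) continue;
    board[i] = player;
    int8_t score = -minimax(3 - player);
    board[i] = 0;
    if (score > best) { best = score; move = i; }
  }
  return move;
}
```

The flawed strategies can then be derived from this one, for example by occasionally replacing the optimal move with a random legal one.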

Next, I’ll have two Arduinos play matches against each other and analyse the outcome for all combinations of competing strategies. The strategies can then be ordered based on their success rate.

The final goal is to make the Arduino act as a Tic-Tac-Toe trainer for beginners. While guiding their learning process by letting them play against increasingly better strategies, I can monitor their progression and compare it with the learning process of an Artificial Neural Network (ANN), for which I’m currently writing a sketch. Will 2,000 bytes be able to keep up with 100,000,000,000 human neurons?

This is a work in progress. For now, the video shows the Arduino in invincible mode, with me playing at the level of a befriended nation’s president. I even managed to put myself in a lost position after my very first move. It definitely takes some skills to lose this game…

 

 

 

Real Virtuality (RV)

Just a thought that crossed my mind: in today’s Virtual Reality, computers are used to simulate reality in a virtual setting, but when the first computers became available in the 1940s, they were used by mathematicians for exactly the opposite.

For centuries, they had been creating all kinds of virtual worlds inside their minds, supported by little more than paper or a blackboard. Mind games, where everything was allowed, as long as you could prove it from self-postulated axioms. The principle of fractals or the concept behind cellular automata, for instance, had already been developed long before their fascinating complexity could be visualized.

[Kurt Gödel*, wearing 2D glasses]

The arrival of computers offered previously unthinkable possibilities to visualize these virtual worlds ‘for real’ (Real Virtuality…?). By studying the results, scientists developed many new insights and ideas. Nevertheless, true geniuses obviously don’t need VR or RV. Kurt didn’t even use his blackboard…

 

* Kurt Gödel was an Austrian mathematician, most famous for his Incompleteness Theorem.