Stereoscopic video camera

After I had turned my antique stereoscope into a 3D animated GIF viewer, I began looking for ways to create my own 3D footage, preferably not limited to animated GIFs. I ended up building a stereoscopic camera from two ESP32 Camera boards. It records 3D videos (or stereoscopic pictures) on SD cards in a format that can be (dis)played by the stereoscope’s two TTGO T4 boards. The viewer can download recordings from the cameras over WiFi.

DIY stereoscopic (3D) video & photo camera with two ESP32-CAM boards

Building a prototype was quite simple because I could use code and techniques from my previous projects. Experiments with a TTGO T-Camera Plus board showed that 176×144 px frames (FRAMESIZE_QCIF) could be written to the on-board (SPI) SD slot at about 8 fps. This looked like the maximum size for this board, basically limited by its SD write speed. But then I realized that AI Thinker ESP32-CAM boards have a faster SD_MMC SD slot…

Over the SD_MMC bus in 4-bit mode* (about 2x faster than SPI), ESP32-CAM boards can write 240×176 frames (HQVGA) to the SD card at an acceptable frame rate > 9 fps. My benchmark tests showed that the (more consistent) 1-bit mode is only slightly slower. Moreover, this mode lets you control the flashlight via GPIO 4.
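Some back-of-the-envelope arithmetic shows why this frame size sits close to the SD write limit. A sketch (the helper names are mine, and it assumes uncompressed RGB565 frames at 2 bytes per pixel):

```cpp
#include <cstdint>

// A 240x176 (HQVGA) frame in RGB565 takes 2 bytes per pixel.
constexpr uint32_t frameBytes(uint16_t w, uint16_t h) {
  return uint32_t(w) * h * 2;
}

// Sustained write speed the SD card must deliver for a given frame rate.
double requiredMBps(uint16_t w, uint16_t h, double fps) {
  return frameBytes(w, h) * fps / (1024.0 * 1024.0);
}

// Maximum recording duration for a given amount of free card space.
double maxDurationSec(uint64_t freeBytes, uint16_t w, uint16_t h, double fps) {
  return double(freeBytes) / (frameBytes(w, h) * fps);
}
```

At 240×176 and 9.5 fps this works out to roughly 0.77 MB/s, about twice what 176×144 at 8 fps requires, which is consistent with the "about 2x faster than SPI" figure.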

The exposed GPIO pins of ESP32-CAM boards can be used for synchronizing the frame shots taken by both cameras, using the technique that I had developed for the 3D viewer.

Fritzing schematic: the cross-wired Rx and Tx pins are used for frame synchronization

When it comes to reading recorded frames from SD card, the TTGO T4 boards in the viewer can easily achieve higher frame rates than the cameras, so in order to play my own recordings in real time, I had to include short delays after each frame.
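The pacing logic is simple. A sketch of the idea (not the actual viewer code; the constant assumes recordings made at roughly 9.5 fps):

```cpp
#include <cstdint>

constexpr uint32_t FRAME_INTERVAL_MS = 105;  // assumption: ~9.5 fps recordings

// Delay to insert after a frame, given when the frame was started and the
// current time. Unsigned subtraction keeps this correct across a millis()
// rollover.
uint32_t pacingDelayMs(uint32_t frameStartMs, uint32_t nowMs) {
  uint32_t elapsed = nowMs - frameStartMs;
  return elapsed >= FRAME_INTERVAL_MS ? 0 : FRAME_INTERVAL_MS - elapsed;
}
```

On the TTGO T4 this would be used as `delay(pacingDelayMs(t0, millis()));` right after pushing a frame to the display.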

So for now, my viewer shows 3D movies in 240×176 format (maximum duration is determined by the SD card’s free space), while pictures are displayed full screen.


*A well-known consequence of accessing the SD slot of ESP32-CAM modules in 4-bit mode is that the flashlight will flash during SD transactions, as it is hard-wired to SD_MMC pin 4. That’s why I prefer the only slightly slower 1-bit mode, which is still considerably faster than SPI.


Display 3D animated GIFs

Stereoscopy without a headache…

My earlier Stereoscopy project had been put on hold for a while because the two ESP32 boards for driving its TFT displays refused to properly synchronize. Simplifying the sync technique (dropping interrupts) and shortening some wires finally did the trick.

TTGO T4 boards looping in sync over left and right frame halves of a 3D animated GIF*

The video also shows a manual restart of the boards to prove that synchronization doesn’t depend on a simultaneous start. After the first one has finished downloading its frames, it will wait until the second one has finished downloading the opposite side’s frames as well. Then they will start looping over corresponding left and right frames that are stored in PSRAM, staying perfectly synchronized as long as both sketches run.

While synthesizing a moving 3D image from a sequence of frame pairs, our brain responds to even the slightest synchronization error with a headache. That’s why one cannot rely on the ESP32’s internal clocks or NTP syncs. So I used a simple ‘per frame handshake’ mechanism over two cross-wired GPIO pins. Despite the poor video quality and imperfect alignment of the displays, you can actually see the 3D effect in the above video (using the cross-eyed technique). In reality it looks much better!
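The handshake itself boils down to a tiny protocol: raise your out pin when your next frame is ready, wait until the other board’s pin is high too, display, lower the pin, repeat. A toy model of that logic (names and structure are mine, not the actual sketch):

```cpp
#include <cstdint>

// Each board's out pin doubles as a "frame ready" flag.
struct Board {
  bool ready = false;
  uint32_t framesShown = 0;
};

// One scheduling step: either board may finish decoding its next frame,
// but a frame pair is displayed only when both pins are high.
void step(Board &a, Board &b, bool aFrameDone, bool bFrameDone) {
  if (aFrameDone) a.ready = true;
  if (bFrameDone) b.ready = true;
  if (a.ready && b.ready) {
    a.framesShown++;
    b.framesShown++;
    a.ready = b.ready = false;  // lower both pins, start on the next pair
  }
}
```

However unevenly the two boards progress, their frame counters can never drift apart, which is exactly what the eyes require.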

The TTGO T4 boards fit nicely inside an old 3D viewer, using the buttons for selecting animated GIFs from my webserver. And although this is mainly a gadget for stereoscopy aficionados**, I’m sure that its synchronization method can be useful for future projects.

* GIF found on

** I’m in famous company: Queen guitarist Brian May runs the London Stereoscopic Company where you can buy retro style stereo cards and viewers. Obviously a labor of love!

M5Stack Core2

at first glance

The product description of M5Stack’s new Core2 suggested that all hardware flaws of the Fire (see this earlier post) had been fixed, so I decided to give it a try and get one, also because the Core2 has a capacitive touch screen.

The device arrived well packed and, like all M5Stack products, looks quite nice and solidly built. The included USB-C cable, although a bit short, lets you get started right away.


The Core2 comes preloaded with a Factory-test application, showing some of its features. A first impression already revealed the following improvements compared to the Fire:

  • Separate (properly working) buttons for On/Off and Reset
  • No more unwanted noises from the speaker
  • Useless Grove port (exposing hard-wired PSRAM pins 16 & 17) has been dropped

And the most notable new features:

  • Capacitive touch screen (works great)
  • Real Time Clock (RTC)
  • Vibration motor
  • AXP192 Power Management Chip
  • I2S amplifier

There’s one design choice that I don’t understand. The broken-out GPIO pins of the so-called M-Bus are hidden behind what looks like a simple plastic cover, but in fact this cover holds a circuit board that contains the microphone and the IMU sensor. Removing the cover can easily damage the circuit or its components, so I decided to leave it in place. That still gives access to the external I2C bus (pins 32 & 33) via the red Grove Port A (the standard I2C pins 21 & 22 are used internally by the touch screen, the RTC, the IMU sensor and the power management chip).

In order to program the Core2 from the Arduino IDE, you have to:

Add the board via the Boards Manager. Make sure to add the following link to the ‘Additional Boards Manager URLs’ field (File->Preferences->Settings):

The board will then appear in the boards list as M5Stack-Core2.

Install the M5Core2 library. At the time of writing, I recommend installing it by importing a zip file from Github, because installing it via the Arduino IDE’s Library Manager will give you an older version with the same version number (0.0.1). The older version has a very basic touch library, whereas the Github version supports multi-touch, drag, swipe etc. and uses an easier and more powerful API. Moreover, the newer touch.h file opens with an extended comment section that explains its usage in great detail. Very well done!

You may also have to install a driver for the CP2104 USB to serial chip, if your PC doesn’t already have it. It can be found here.

Although I’m only just starting to explore the Core2 myself, here are some tips that may be useful for other beginners.

Setting the RTC

The preloaded Factory-test application shows a button for setting the Real Time Clock, but that function is not implemented. I found a very well written sketch on Github that not only lets you sync the RTC with an NTP server, but also demonstrates basic touch functions (using the older touch library) in very readable code. It can be found here.

Use all features of the touch screen

As mentioned earlier, the comment section in touch.h of the newer library version (on Github) explains everything very clearly and even gives some examples. Note that the touch screen is larger than the LCD display: 320×280 pixels, so it can be used for emulating the three buttons of earlier models (note the red circles).
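The bottom 40-pixel strip (y ≥ 240) is touch-only and can act as those three buttons. A minimal mapping sketch (the zone boundaries are my assumption; the newer library ships ready-made button zones for this):

```cpp
#include <cstdint>

// Map a touch point to a virtual button: 0, 1 or 2 in the bottom strip,
// -1 anywhere on the visible 320x240 LCD area.
int virtualButton(uint16_t x, uint16_t y) {
  if (y < 240 || y >= 280 || x >= 320) return -1;  // outside the strip
  int button = x / 107;                            // split into thirds
  return button > 2 ? 2 : button;
}
```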

Control the green LED and the vibration motor

Both the green power LED and the vibration motor are controlled directly by the AXP192 Power Management chip. This small sketch continuously switches them on and off.

(the internal speaker can be enabled/disabled by similar commands, e.g. M5.Axp.SetSpkEnable (true); )

Using the TFT display

Under the hood, M5Stack uses Bodmer’s TFT_eSPI library. Once you have included <M5Core2.h>, the display object can be referred to as M5.Lcd. All TFT_eSPI methods will work fine on this object. Sprites can also be associated with it:

This means that existing ESP32 sketches that include the TFT_eSPI library can easily be ported to M5Stack versions that use the M5Stack library instead. Don’t forget to initialize the display by calling M5.begin() in setup(); its four boolean parameters enable the display, SD slot, UART and I2C, respectively.

Using the AXP192 Power Management chip

A look at the file AXP192.h from the M5Core2 library shows an impressive list of status values that can be retrieved from this chip. There are also a lot of useful functions, most of them with descriptive names. The ScreenBreath() function is supposed to set the display’s brightness, but it doesn’t work in my sketches…







8-cylinder TFT display

Sports car in disguise…

With its 8-bit parallel interface under the hood, this cheap MCUFriend 2.4″ TFT Arduino shield is something like a V8-powered Aston Martin disguised as a Volkswagen Beetle.
I wondered how fast an ESP32 could drive it, especially when using the TFT_eSPI library.

These popular 320×240 ili9341 TFT displays are usually equipped with an SPI interface, but the TFT_eSPI library supports their 8-bit parallel version as well. I adapted the corresponding user_setup file to wire the shield to a LOLIN D32 Pro board (see below), connected the required 15 wires (including two power lines*) and ran the same benchmark sketch that I had previously used in this post. The result was quite impressive.

The complete benchmark (without the final text loop) finishes within 0.8 seconds!


The following table quantifies the considerable speed gain in comparison with the SPI version at 27 MHz and 40 MHz (no DMA).


Here’s my user_setup file for a LOLIN D32 Pro board. It leaves GPIO pins 16 & 17 (used by PSRAM on WROVER boards), standard SPI pins 18, 19 & 23 and UART0 pins 1 & 3 free.  Adding the line #define DMA_BUSY_CHECK fixed a compile error.


This is a great display for projects that require ultra fast screen updates and don’t mind spending 13 GPIO pins. Now I’ll go after the 3.5″ version with 480×320 pixels, hoping it will be less sensitive to solderless wiring than my HX8357D 3.5″ display in 8-bit mode.

* For my shield to work, I had to power its 5V pin from the ESP32’s 5V output pin.

ESP32-CAM demystified

I’ve always felt a bit intimidated by the many different ESP32 Camera boards, as well as by the abundance of information and example programs. Uncertain whether I’d be capable of writing my own application, I took the plunge anyway and purchased a TTGO-T-Camera Plus. No, not the selfie version, as you may have guessed from the picture 🙂

After examining some examples, I soon noticed that most of their code was based on the same source. Espressif’s well documented library and some of lewisxhe’s github repositories then helped me find the right ‘abstraction level’ for dealing with these boards, treating low level libraries and some standard code snippets as black boxes.

Maybe my first basic steps below can help other beginners.


The mission
In order to grasp the basic concept behind all ESP32-Camera applications, I just wanted to make the ST7789 TFT display of my board show the live camera image. No streaming over WiFi yet, just a continuous transfer of pixel color values from the camera’s sensor to the display, as fast as possible for the chosen resolution. If I could achieve this, everything else would require just general programming skills rather than camera-specific knowledge. And I wanted to use the good old-fashioned Arduino IDE, of course.

Which board to select?
Since my TTGO board wasn’t listed in the Arduino IDE (and couldn’t be added via Board Manager), I could have selected ESP32 Dev Module (with PSRAM enabled) or ESP32 Wrover Module. However, I always prefer to manually add unlisted boards so they can have their own pins_arduino.h file. This requires adding a section in the boards.txt file and creating a pins_arduino.h file in a subdirectory for the added board under variants. On a Windows machine, the location of boards.txt is something like:
But as I said, you can also select a standard board and hope there will be no conflicts.
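For reference, such a boards.txt section is usually made by copying an existing ESP32 entry and renaming its prefix. An illustrative (and deliberately incomplete) fragment, where build.variant must match the name of the new subdirectory under variants:

```
ttgo-t-camera-plus.name=TTGO T-Camera Plus
ttgo-t-camera-plus.build.mcu=esp32
ttgo-t-camera-plus.build.board=TTGO_T_CAMERA_PLUS
ttgo-t-camera-plus.build.variant=ttgo_t_camera_plus
```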

Basic sketch


Explanation of the most important lines:

Set resolution
The following line in the above code tells the camera to use HQVGA resolution (240×176), which will make the image frames fit on the 240×240 display of my TTGO board.

Names of resolutions that are supported by the mounted OV2640 camera are:


Set format
This line defines the format in which a frame is stored in the frame buffer(s).

All supported format options are:



Espressif’s esp_camera library is what powers most of the sketch. This is an ESP32 core library, so if your Arduino IDE is ‘esp aware’, no installation is required. Once included, it lets you create objects from structs (hope I use the correct C++ terminology) like gpio_config_t, sensor_t, camera_config_t and camera_fb_t.

The #include “pins_arduino.h” line will look for a pins_arduino.h file in your sketch folder or, if not present (as in my case), in the board’s subdirectory under variants (see the earlier  “Which board to select?” section). These are the board specific pin definitions for my board:


As you can see, I included the TFT_eSPI library to drive my board’s ST7789 display. Make sure to change the library’s User_Setup_Select.h file accordingly.

Grabbing a frame

This crucial line creates a pointer fb to the most recently filled frame buffer (the latest snapshot). In a streaming situation, it will typically live inside a loop. The frame buffer is a struct that contains a pixel data array buf, as well as metadata like frame size, timestamp etc.

Now we need to process the information in the frame buffer, usually as fast as possible. Regardless of the chosen CAMERA_PIXEL_FORMAT, the frame buffer will always store frame values as 8-bit unsigned integers (uint8_t). Since the example sketch asks for RGB565  color values, the following line reconstructs 16-bit RGB565 color values from pairs of 8-bit values in the fb->buf array and stores them in the 16-bit rgbBitmap array*.
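In essence that reconstruction looks like this (a standalone sketch; the high-byte-first order is an assumption for this sensor, so swap the two indices if the colors come out wrong):

```cpp
#include <cstdint>
#include <cstddef>

// Rebuild 16-bit RGB565 pixels from consecutive byte pairs in fb->buf.
void bufToRgbBitmap(const uint8_t *buf, size_t pixelCount, uint16_t *rgbBitmap) {
  for (size_t i = 0; i < pixelCount; i++) {
    rgbBitmap[i] = (uint16_t(buf[2 * i]) << 8) | buf[2 * i + 1];
  }
}
```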

The rgbBitmap array is then pushed to the display by the following line:

After the entire frame buffer has been processed, the resources need to be released asap:


Result and tuning
The above sketch immediately worked and the visual result was smoother than expected. Then I added a couple of lines to measure the achieved frame rate. Running on my TTGO board, the above example sketch attains a stable 22 fps.

*I achieved 25 fps in a dual core & double buffer sketch that pushes fb->buf directly to the display. That’s the theoretical maximum for this sensor and fast enough for a flicker-free display.


Trying to improve the 22 fps of the above sketch, I wrote a version that accepts and processes a JPEG decoded frame buffer. Here it is:

This version also works fine, but its frame rate tops out at 17 fps. A JPEG-encoded frame buffer can be useful, though, in cases where the (much smaller) buffer needs to be stored on an SD card or sent over WiFi.

Two sketch parameters in particular are of interest when trying to get higher frame rates: xclk_freq_hz and fb_count.

The maximum frame rate you can theoretically get is limited by the camera sensor itself. It is shown in this table (for OV2640):

Note the counterintuitive effect of higher frame rates for half the frequency. These figures were confirmed by a dummy sketch that only grabs a buffer and immediately releases it. For ‘real’ applications that actually process the frame buffer, you simply have to find out what gives the best result.

Although a comment in some example mentions that the value of fb_count should only be larger than 1 for JPEG buffers, the RGB565 example in this post works best (on PSRAM boards) with fb_count = 4 , and the JPEG example with fb_count = 2. Again, you simply have to play with these parameters to find the best values for your application and board.

What’s next?
Now that I understand the basic concept, the next step will be to stream the frames over http and control the camera via a browser. All examples that I could find use the same concept: an http daemon (webserver) that serves an index.html file showing camera controls. Actually, they all use the same files. I’m not sure what to do yet: adapt these files or write something myself. As my camera will be mounted on my DIY 2-axis gimbal, I also want to control its movement via the same web page.

Other ToDos:

  • ☑ Save frame captures to SD card or web storage (e.g. for time lapse videos)
  • ☐ Stream a continuous 360° panorama view using my new DIY gimbal
  • ☐ Experiment with the onboard I2S MEMS microphone
  • ☐ Experiment with face/object/color/motion recognition

To be continued…

Self-Hosted GPS Tracker

With my T-Beam GPS Tracker operating fine during car trips, I finally have a secure replacement for this very old GPS Tracker app on my nine-year-old HTC Desire HD*.

The TTGO T-Beam came in this box. I only had to drill a small hole for the antenna


The concept of the old Android app (meanwhile removed from both the App Store and github by its creator**) is very simple. It periodically sends your phone’s GPS readings as http GET parameters to a self-hosted endpoint of your choice. Since TheThingsNetwork‘s HTTP Integration works more or less the same, I could simply reuse the php backend that I once wrote for the app, now for showing my LoRaWan tracker’s location on a map.

(zoomed out for privacy reasons)


Data flow is as follows:

  • Tracker periodically sends GPS readings to TheThingsNetwork. The sketch makes the interval between consecutive send commands depend on the last GPS speed reading
  • HTTP Integration service sends LoRaWan payload+metadata in an http POST variable to a php script (dump.php) on my server (see below)
  • dump.php appends the entire POST value to a log file (log.txt) and writes a formatted selection of relevant data to a text file (last_reading.txt, always overwritten)

The map is an embedded OpenStreetMap ‘Leaflet’ inside an html file, with some simple javascript periodically reading the formatted data from last_reading.txt. After extracting the individual GPS readings and a timestamp in the background, these values are used to update the tracker’s location on the map and to refresh textual info.

Data in the logfile can, after conversion if necessary, be used for analyzing and drawing the tracker’s route. There are several standard solutions for this, even online.


*Running non-stop on its original battery and without any problems since March 2011. I ❤ HTC!

**Hervé Renault, who suggests a modern alternative on his personal website (in French)

My dump.php file (presumes fields latitude, longitude, speed and sats in TTN payload):


joining TheThingsNetwork

LoRaWan and TheThingsNetwork are quite popular in my country, although they may have lost part of their momentum lately*. With no particular use case in mind, it was mainly curiosity that made me purchase a TTGO T-Beam board and join TheThingsNetwork.

The board comes in a box with two antennas (LoRa and GPS) and header pins

My T-Beam (T22 V1.1) is basically an ESP32 WROVER module with an onboard SX1276 LoRa radio, a u-blox NEO-M8N GPS module, an 18650-type battery holder and an AXP192 power management chip. The manufacturer’s documentation is a bit confusing, being divided over two apparently official github repositories. Also, board versions up to 0.7 are significantly different from later versions.

First, I successfully tested the GPS receiver of my version 1.1 board with this sketch. Its code shows the most important differences between version 1.x boards and previous versions (like 0.7). TX and RX pins for the GPS receiver are now 34 and 12 instead of 12 and 15. Furthermore, the newer boards have a power management chip (AXP192) that lets you control the power supply to individual board components. This requires including the axp20x library, as well as code for explicitly powering the components you use. I recommend taking a look at the examples from that library.

Testing the T-Beam’s LoRa radio either requires a second LoRa board (which I don’t have), or making ‘The Thing’ talk to TheThingsNetwork. I went for the TTN option, obviously. And with a GPS on board, a GPS tracker was a logical choice for my first LoRaWan sketch.

After creating an account on the TTN website, I had to register an ‘Application’ and a ‘Device’, as well as provide a Payload Format Decode function**. Along the way, the system generated some identification codes: Application EUI, Device EUI and Application Key, needed for the OTAA Activation Method that I selected.

Then I ran the sketch below, which I compiled from several sources. After a minute or so, the Serial monitor reported a GPS fix, continued with “EV_JOINING”, and … that was it. Apart from a faulty device or a software issue, I also had to consider the possibility that my T-Beam was not within range of a TTN gateway. Hard to debug, but I was lucky.

TheThingsNetwork Console pages show Application Key and EUIs in hex format. Clicking the <> icon in front of a value will show it in C-style format and then the icon next to it lets you toggle between msb and lsb. It turned out that my sketch expected both EUIs to be in lsb format and the Application Key in msb format. I had used msb for all three of them!

After correcting the EUI values in device_config.h, the “EV_JOINING” in the Serial monitor was followed by “EV_JOINED” and “Good night…”, indicating that the device had been seen by a TTN gateway and gone into deep sleep mode. From that moment on, its payload messages, sent every two minutes (as set in my sketch), appeared at the Data tab of my application’s TTN Console. Looks like my T-Beam is a TTN node!

In order to do something useful with your uploaded data, the TTN Console offers several ‘Integrations’. For my GPS tracker I first tried TTN Mapper, making my GPS readings contribute to a coverage map of TTN gateways. It also lets you download your data in csv format. However, I didn’t see my readings on their map so far, perhaps because my signal was picked up by a gateway with unspecified location. So I switched to HTTP Integration in order to have all readings sent to an endpoint on my php/MySQL server.

My next steps will be testing coverage and reception of the T-Beam during car trips, as well as trying some other integrations. Should that raise my enthusiasm for TheThingsNetwork enough, I might even consider running my own TTN gateway in order to improve LoRaWan coverage in my area.

In summary, making my Thing join the Network wasn’t just plug & play, so I hope this post and the below mixture of mainly other people’s code will be of help to TTN starters.


* based on (the lack of) recent activity of TTN communities in my area.

Code for a T-Beam v 1.x (with AXP192 power control chip), compiled from several sources. It sends latitude, longitude, altitude, hdop and number of satellites to TTN.







device_config.h (your OTAA secrets can be copied from the TTN Console; note msb/lsb!)


**Payload Format decoder (javascript to be entered at the Payload Formats tab of the TTN Console; reverses the encoding as performed by function buildPacket() in gps.cpp)


Where ISS…?


Tracking the International Space Station (ISS)

Borrowing most of its code from my What’s Up? air traffic monitor, this small project uses publicly available live data for showing current position, altitude and velocity of the International Space Station on a small TFT display. The original version draws the ISS and its trail over an ‘equirectangular’ map of the Earth, also showing the actual daylight curve and current positions of the Sun and the Moon.

The video below shows the ESP32 variant, but with a couple of small adaptations, the sketch will also run on an ESP8266. As usual, my camera does little justice to the TFT display…

ISS tracker on the 2.2″ display of a TTGO T4 (ESP32)

Position and data are refreshed every 5 seconds, during which time the ISS has travelled almost 40 km! The background image also needs to be updated every 270 seconds – the time in which the daylight curve will have moved one pixel to the left over a 320 pixel wide equirectangular Earth map. I used the station’s previous and current position for calculating the icon’s rotation angle. This only indicates the station’s heading, and doesn’t refer to its actual orientation in space, of course.
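The map arithmetic is straightforward. A sketch (function names are mine, and the 320×160 map size is an assumption; any 2:1 equirectangular bitmap works the same way):

```cpp
#include <cmath>

// One pixel column of a 320 px wide equirectangular map corresponds to
// 86400 / 320 = 270 seconds of Earth rotation, hence the refresh interval.
constexpr int SECONDS_PER_PIXEL = 86400 / 320;

// Equirectangular projection: longitude maps linearly to x, latitude to y.
void latLonToPixel(double lat, double lon, int &x, int &y) {
  x = int((lon + 180.0) / 360.0 * 320.0);
  y = int((90.0 - lat) / 180.0 * 160.0);
}

// Icon rotation from two consecutive positions (0 = north, 90 = east).
// Plain atan2 on the deltas is good enough at this map scale.
double headingDeg(double lat1, double lon1, double lat2, double lon2) {
  double deg = atan2(lon2 - lon1, lat2 - lat1) * 180.0 / 3.14159265358979323846;
  return deg < 0 ? deg + 360.0 : deg;
}
```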

The newer version below takes a different approach by keeping the ISS in the center over a rotating globe. In reality, the display looks much better than in this video.


I also made a third version that keeps the ISS icon in the center of a moving Google map with adjustable zoom level, but that requires a Google Maps API key.

This project seems very suitable for educational purposes. With just an ESP board and a TFT display, students can quickly become familiar with C++ coding and some essential maker techniques like Internet communication, JSON parsing and image processing. As a side effect, they will also learn something about Earth movement, cartography, stereometry, Newton’s laws and space research.

Here’s the code for the equirectangular map version:


Content of included sprite.h file:


Virtual Radar Server

“What’s Up?” revisited

The discovery that you can run Virtual Radar Server (VRS) on a Raspberry Pi triggered me to revise my What’s Up flight radar for ESP32/8266 once again. Mainly interested in aircraft within the range of my ADS-B receivers, I already preferred querying those receivers over using the adsbexchange API, especially after they removed most static metadata from their json response. This had even forced me to setup and maintain my own database for looking up fields like aircraft model, type, operator, country etc.

However, querying Virtual Radar Server on one of my PiAware receivers lets VRS do these lookups from its own up-to-date web databases! Its enriched json response, based on my own receivers’ ADS-B and MLAT reception, contains more details than the current adsbexchange API. Besides, unlike adsbexchange, a local VRS doesn’t mind being queried once every second! Using VRS as the primary source, I can always choose to find possibly filtered-out ‘sensitive’ aircraft by querying adsbexchange as well.

A quick prototype with an ESP32 and a 2.4″ TFT display finally shows flicker-free icon movement after applying some display techniques from earlier projects.

My smartphone doesn’t like filming TFT displays…

While simultaneously running the old adsbexchange version and this new one, I noticed that the new version showed more aircraft, especially nearby aircraft at low flight levels. This is surprising, since adsbexchange claims to provide unfiltered data, and they should be aware of those missing aircraft because I feed them!

Anticipating a future project (“Automatic Plane Spotter“), the new version also keeps track of the nearest aircraft’s position, expressed in azimuth and elevation. This will be used for driving the stepper motors of a pan-tilt camera mount that I’m currently building.


Stereoscopy on ESP32?

double trouble…?

I knew in advance that trying to make an ESP32 show moving stereoscopic images on a TFT display would be hard. But sometimes you just want to know how close you can get. Besides, it fits my lifelong interest in stereoscopy*.

I found suitable test material in the form of this animated gif (author unknown). Now I ‘only’ had to find a technique for hiding the left frames for the right eye, and vice versa.


The first technique that I tried used two of these Adafruit “LCD Light Valves“:

Source: Adafruit website

Together, they formed active shutter glasses for looking at a single TFT display. Looping over the original frames, the display alternately showed the left and right half of a frame, always closing the opposite eye’s valve. The speed of these shutters surprised me, but their accuracy proved insufficient. In order to make the left frame completely invisible to the right eye and vice versa, I had to build in short delays that led to flickering. Without those delays, there was too much ‘leakage’ from the opposite side, a recipe for headaches.


The only solution for the above problem was a physical separation of the left and right fields of view (i.e. the classical View·Master© approach). Luckily, I had this 3D Virtual Reality Viewer for smartphones lying around. Instead of a smartphone, it can easily hold an ESP32, a battery and two TFT displays. The technical challenge of this method was keeping both displays in sync.

It’s easy for a single ESP32 to drive two identical TFT displays over SPI. The state of a display’s CS line decides whether it responds to the master or not. With CS for left pulled LOW and CS for right pulled HIGH, only the left display will respond, and vice versa. And when data needs to be sent to both displays, you simply pull both CS lines LOW.
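In code that’s just a matter of pulling the right CS pins LOW before each transfer. A toy model of the active-low selection logic (pin handling abstracted away; the names are mine):

```cpp
// CS is active low: a display listens on the shared SPI bus only while
// its CS line is pulled LOW.
enum Level { PIN_LOW = 0, PIN_HIGH = 1 };

struct CsLines { Level left, right; };

// Decide the CS levels for the next transfer.
CsLines selectTarget(bool toLeft, bool toRight) {
  return { toLeft ? PIN_LOW : PIN_HIGH, toRight ? PIN_LOW : PIN_HIGH };
}

bool leftListens(const CsLines &cs)  { return cs.left == PIN_LOW; }
bool rightListens(const CsLines &cs) { return cs.right == PIN_LOW; }
```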

The results of this approach were disappointing. Unfortunately, the time the SPI bus needs for switching between clients proved to be too long, resulting in even more flickering than with the shutter glasses technique.


For now, I can see only one option left: using two ESP32 boards, each driving its own display. This will likely solve the flickering (this post shows that the ESP32 is fast enough), but now I’ll have to keep both ESP32 boards in sync.

Both boards are in the same housing and share a power supply, so the simplest, fastest and most accurate way to sync them seemed to be over two cross-wired IO pins. Unfortunately, and for reasons still unknown to me, when I run the sketch below on both boards, they keep waiting for each other forever because they always read 1 on their inPin, even when the other board writes 0 to its outPin.


Now that I’ve come closer to a 3D movie viewer than expected, it would be a shame if this apparently simple synchronization problem would stop me.

To be continued (tips are welcome). Update: problem solved, see this post.



*The first 3D image that I saw as a kid was a “color anaglyph” map of a Dutch coal mine. It came with cardboard glasses holding a red and a cyan colored lens. The map looked something like this:


There were also these picture pairs, forcing your eyes to combine short-distance focus with long-distance alignment. It gave me some headaches until I invented a simple trick: swap left and right and watch them cross-eyed…


My aunt once gave me an antique wooden predecessor of the View·Master, complete with dozens of black & white images of European capitals.

A newer ‘antique’ that I like for its ingenuity is Nintendo’s Virtual Boy with its rotating mirror technique. Instant headache, but so cool!


Somewhat eccentric newcomers were the “stereograms”. Fascinating how fast our brain manages to make 3D spaghetti out of their messy images, even if they move!