M5Stack Core2

at first glance

The product description of M5Stack’s new Core2 suggested that all hardware flaws of the Fire (see this earlier post) had been corrected, so I decided to give it a try and get myself one, also because the Core2 has a capacitive touch screen.

The device arrived nicely packed and, like all M5Stack products, looks quite nice and well built. The included USB-C cable, although a bit short, lets you get started right away.

 

The Core2 comes preloaded with a Factory-test application, showing some of its features. A first impression already revealed the following improvements compared to the Fire:

  • Separate (properly working) buttons for On/Off and Reset
  • No more unwanted noises from the speaker
  • Useless Grove port (exposing hard-wired PSRAM pins 16 & 17) has been dropped

And the most notable new features:

  • Capacitive touch screen (works great)
  • Real Time Clock (RTC)
  • Vibration motor
  • AXP192 Power Management Chip
  • I2S amplifier

There’s one design choice that I don’t understand. The broken-out GPIO pins of the so-called M-Bus are hidden behind what looks like a simple plastic cover, but in fact this cover holds a circuit board that contains the microphone and the IMU sensor. Removing the cover can easily damage the circuit or its components, so I decided to leave it in place. That still gives access to the external I2C bus (pins 32 & 33) via the red Grove Port A (standard I2C pins 21 & 22 are internally used by the touch screen, the RTC, the IMU sensor and the power management chip).

In order to program the Core2 from the Arduino IDE, you have to:

Add the board via the Boards Manager. Make sure to add the following link to the ‘Additional Boards Manager URLs’ field (File->Preferences->Settings):

https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/arduino/package_m5stack_index.json

The board will then appear in the boards list as M5Stack-Core2.

Install the M5Core2 library. At the time of writing, I recommend installing it by importing a zip file from https://github.com/m5stack/M5Core2. Installing it via Arduino IDE’s Library Manager will give you an older version with the same version number (0.0.1)! The older version has a very basic touch library, whereas the Github version supports multi-touch, drag, swipe etc. and uses an easier and more powerful API. Moreover, the newer touch.h file contains an extended comment section that explains its usage in great detail. Very well done!

You may also have to install a driver for the CP2104 USB to serial chip, if your PC doesn’t already have it. It can be found here.


Although I’m only just starting to explore the Core2 myself, here are some tips that may be useful for other starters.

Setting the RTC

The preloaded Factory-test application shows a button for setting the Real Time Clock, but that function is not implemented. I found a very well-written sketch on Github that not only lets you sync the RTC with an NTP server, but also demonstrates basic touch functions (using the older touch library) in very readable code. It can be found here.
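
The bare mechanism of such a sync only takes a few lines. Below is a minimal hedged sketch (not the Github sketch itself): it assumes the SetTime()/SetData() methods from the M5Core2 RTC class and the ESP32 core’s configTime()/getLocalTime(); the WiFi credentials and time zone offsets are placeholders.

#include <M5Core2.h>
#include <WiFi.h>

void setup() {
  M5.begin();
  WiFi.begin("yourSSID", "yourPassword");          // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(250);
  configTime(3600, 3600, "pool.ntp.org");          // GMT and DST offsets for your time zone

  struct tm t;
  if (getLocalTime(&t)) {                          // wait for an NTP-synced time
    RTC_TimeTypeDef rtcTime;
    rtcTime.Hours   = t.tm_hour;
    rtcTime.Minutes = t.tm_min;
    rtcTime.Seconds = t.tm_sec;
    RTC_DateTypeDef rtcDate;
    rtcDate.WeekDay = t.tm_wday;
    rtcDate.Month   = t.tm_mon + 1;
    rtcDate.Date    = t.tm_mday;
    rtcDate.Year    = t.tm_year + 1900;
    M5.Rtc.SetTime(&rtcTime);                      // write the time to the RTC
    M5.Rtc.SetData(&rtcDate);                      // yes, the library calls the date setter SetData
  }
}

void loop() {}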

Use all features of the touch screen

As mentioned earlier, the comment section in touch.h of the newer library version (on Github) explains everything very clearly and even gives some examples. Note that the touch screen is larger than the LCD display: 320×280 pixels, so it can be used for emulating the three buttons on earlier models (note the red circles).
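
The library already maps the extra 40-pixel strip below the LCD to three virtual buttons. A minimal sketch of that (using the BtnA/BtnB/BtnC objects the library provides):

#include <M5Core2.h>

void setup() {
  M5.begin();                       // initializes display, touch, power chip etc.
  M5.Lcd.setTextSize(2);
}

void loop() {
  M5.update();                      // polls the touch controller
  // The strip below the 240-pixel high LCD acts as three touch buttons:
  if (M5.BtnA.wasPressed()) M5.Lcd.println("A");
  if (M5.BtnB.wasPressed()) M5.Lcd.println("B");
  if (M5.BtnC.wasPressed()) M5.Lcd.println("C");
}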

Control the green LED and the vibration motor

Both the green power LED and the vibration motor are controlled directly by the AXP192 Power Management chip. This small sketch continuously switches them on and off.
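
A minimal sketch of that idea, assuming the SetLed() and SetLDOEnable() functions from the M5Core2 AXP192 class (on the Core2, LDO3 powers the vibration motor):

#include <M5Core2.h>

void setup() {
  M5.begin();
}

void loop() {
  M5.Axp.SetLed(true);              // green power LED on
  M5.Axp.SetLDOEnable(3, true);     // vibration motor on (LDO3)
  delay(500);
  M5.Axp.SetLed(false);             // and off again
  M5.Axp.SetLDOEnable(3, false);
  delay(500);
}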

(the internal speaker can be enabled/disabled by similar commands, e.g. M5.Axp.SetSpkEnable(true); )

Using the TFT display

Under the hood, M5Stack uses Bodmer’s TFT_eSPI library. Once you have included <M5Core2.h>, the display object can be referred to as M5.Lcd. All TFT_eSPI methods will work fine on this object. Sprites can also be associated with it:
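
For example (a hedged sketch of the idea, not taken from any particular example):

#include <M5Core2.h>

TFT_eSprite spr = TFT_eSprite(&M5.Lcd);    // a sprite bound to the Core2 display

void setup() {
  M5.begin(true, true, true, true);
  spr.createSprite(120, 60);               // off-screen buffer in RAM
  spr.fillSprite(TFT_NAVY);
  spr.setTextColor(TFT_WHITE);
  spr.drawString("Hello Core2", 10, 20, 2);
  spr.pushSprite(40, 40);                  // copy the sprite to the LCD
}

void loop() {}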

This means that existing ESP32 sketches that include the TFT_eSPI library can easily be ported to M5Stack versions that use the M5Stack library instead. Don’t forget to initialize the display by including the following line in setup():
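
Given the parameter description below, that line is presumably:

M5.begin(true, true, true, true);   // display, SD, UART, I2C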

(the four parameters enable the display, SD slot, UART and I2C, respectively)

Using the AXP192 Power Management chip

A look at the file AXP192.h from the M5Core2 library shows an impressive list of status values that can be retrieved from this chip. There are also a lot of useful functions, most of them with descriptive names. The ScreenBreath() function is supposed to set the display’s brightness, but it doesn’t work in my sketches…
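
For example, reading a few battery-related values (a hedged example; function names as they appear in AXP192.h at the time of writing):

#include <M5Core2.h>

void setup() {
  M5.begin();
}

void loop() {
  // A few of the many AXP192 status readings:
  Serial.printf("Battery: %.2f V, %.0f mA, AXP temp: %.1f C\n",
                M5.Axp.GetBatVoltage(),
                M5.Axp.GetBatCurrent(),
                M5.Axp.GetTempInAXP192());
  delay(2000);
}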

8-cylinder TFT display

Sports car in disguise…


With its 8-bit parallel interface under the hood, this cheap MCUFriend 2.4″ TFT Arduino shield is like a V8-powered Aston Martin, disguised as a Volkswagen Beetle.
I wondered how fast an ESP32 could drive it, especially when using the TFT_eSPI library.

These popular 320×240 ili9341 TFT displays are usually equipped with an SPI interface, but the TFT_eSPI library supports their 8-bit parallel version as well. I adapted the corresponding user_setup file for wiring the shield to a LOLIN D32 Pro board (see below), connected the required 15 wires (including two power lines*) and ran the same benchmark sketch that I had previously used in this post. The result was quite impressive.

The complete benchmark (without the final text loop) finishes within 0.8 seconds!

 

The following table quantifies the considerable speed gain in comparison with the SPI version at 27 MHz and 40 MHz (no DMA).

 

Here’s my user_setup file for a LOLIN D32 Pro board. It leaves GPIO pins 16 & 17 (used by PSRAM on WROVER boards), standard SPI pins 18, 19 & 23 and UART0 pins 1 & 3 free. Adding the line #define DMA_BUSY_CHECK fixed a compile error.
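
The essential defines look roughly like this. The driver and parallel-mode options are standard TFT_eSPI settings, but the pin numbers shown here are illustrative only (they respect the free pins mentioned above), not necessarily the ones I wired:

// TFT_eSPI user setup: ILI9341 in 8-bit parallel mode on an ESP32
#define ILI9341_DRIVER
#define TFT_PARALLEL_8_BIT

#define TFT_CS   33     // chip select
#define TFT_DC   15     // data/command
#define TFT_RST  32     // reset
#define TFT_WR    4     // write strobe
#define TFT_RD    2     // read strobe

#define TFT_D0   12     // 8-bit data bus
#define TFT_D1   13
#define TFT_D2   26
#define TFT_D3   25
#define TFT_D4   14
#define TFT_D5   27
#define TFT_D6   21
#define TFT_D7   22

#define DMA_BUSY_CHECK  // fixes a compile error (see text)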

 

This is a great display for projects that require ultra fast screen updates and don’t mind spending 13 GPIO pins. Now I’ll go after the 3.5″ version with 480×320 pixels, hoping it will be less sensitive to solderless wiring than my HX8357D 3.5″ display in 8-bit mode.

* For my shield to work, I had to power its 5V pin from the ESP32’s 5V output pin.

ESP32-CAM demystified


I’ve always felt a bit intimidated by the many different ESP32 Camera boards, as well as by the abundance of information and example programs. Uncertain whether I’d be capable of writing my own application, I took the plunge anyway and purchased a TTGO-T-Camera Plus. No, not the selfie version, as you may have guessed from the picture 🙂

After examining some examples, I soon noticed that most of their code was based on the same source. Espressif’s well documented library and some of lewisxhe’s github repositories then helped me find the right ‘abstraction level’ for dealing with these boards, treating low level libraries and some standard code snippets as black boxes.

Maybe my first basic steps below can help other beginners.

 

The mission
In order to grasp the basic concept behind all ESP32-Camera applications, I just wanted to make the ST7789 TFT display of my board show the live camera image. No streaming over WiFi yet, just a continuous transfer of pixel color values from the camera’s sensor to the display, as fast as possible for the chosen resolution. If I could achieve this, everything else would require just general programming skills rather than camera-specific knowledge. And I wanted to use the good old-fashioned Arduino IDE, of course.

Which board to select?
Since my TTGO board wasn’t listed in the Arduino IDE (and couldn’t be added via Board Manager), I could have selected ESP32 Dev Module (with PSRAM enabled) or ESP32 Wrover Module. However, I always prefer to manually add unlisted boards so they can have their own pins_arduino.h file. This requires adding a section in the boards.txt file and creating a pins_arduino.h file in a subdirectory for the added board under variants. On a Windows machine, the location of boards.txt is something like:
C:\Users\<username>\AppData\Local\Arduino15\packages\esp32\hardware\esp32\1.0.4
But as I said, you can also select a standard board and hope there will be no conflicts.

Basic sketch
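
Reconstructed in outline, it looks something like this. This is a sketch of the approach, not the original code; the *_GPIO_NUM macros stand in for the pin definitions from pins_arduino.h, whose names and values differ per board.

#include "esp_camera.h"
#include "pins_arduino.h"                  // board-specific camera pins
#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();
uint16_t *rgbBitmap;                       // one HQVGA frame in RGB565

void setup() {
  Serial.begin(115200);
  tft.init();
  tft.setRotation(1);
  tft.fillScreen(TFT_BLACK);
  rgbBitmap = (uint16_t *)malloc(240 * 176 * sizeof(uint16_t));

  camera_config_t config;
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer   = LEDC_TIMER_0;
  config.pin_d0 = Y2_GPIO_NUM;  config.pin_d1 = Y3_GPIO_NUM;
  config.pin_d2 = Y4_GPIO_NUM;  config.pin_d3 = Y5_GPIO_NUM;
  config.pin_d4 = Y6_GPIO_NUM;  config.pin_d5 = Y7_GPIO_NUM;
  config.pin_d6 = Y8_GPIO_NUM;  config.pin_d7 = Y9_GPIO_NUM;
  config.pin_xclk  = XCLK_GPIO_NUM;
  config.pin_pclk  = PCLK_GPIO_NUM;
  config.pin_vsync = VSYNC_GPIO_NUM;
  config.pin_href  = HREF_GPIO_NUM;
  config.pin_sscb_sda = SIOD_GPIO_NUM;     // camera I2C (SCCB)
  config.pin_sscb_scl = SIOC_GPIO_NUM;
  config.pin_pwdn  = PWDN_GPIO_NUM;
  config.pin_reset = RESET_GPIO_NUM;
  config.xclk_freq_hz = 20000000;
  config.pixel_format = PIXFORMAT_RGB565;  // see "Set format"
  config.frame_size   = FRAMESIZE_HQVGA;   // see "Set resolution"
  config.jpeg_quality = 12;                // only used for JPEG format
  config.fb_count     = 4;                 // multiple frame buffers need PSRAM

  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed: 0x%x\n", err);
    while (true) delay(1000);
  }
}

void loop() {
  camera_fb_t *fb = esp_camera_fb_get();   // grab the most recent frame
  if (!fb) return;
  for (size_t i = 0; i < fb->len; i += 2)  // rebuild 16-bit RGB565 values
    rgbBitmap[i / 2] = (fb->buf[i] << 8) | fb->buf[i + 1];
  tft.pushImage(0, 0, fb->width, fb->height, rgbBitmap);
  esp_camera_fb_return(fb);                // release the buffer asap
}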

 

Explanation of the most important lines:

Set resolution
The following line in the above code tells the camera to use HQVGA resolution (240×176), which will make the image frames fit on the 240×240 display of my TTGO board.
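
In terms of the camera_config_t struct, that is:

config.frame_size = FRAMESIZE_HQVGA;   // 240 x 176 pixels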

Names of resolutions that are supported by the mounted OV2640 camera are:

 

Set format
This line defines the format in which a frame is stored in the frame buffer(s).
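
For this sketch, that is:

config.pixel_format = PIXFORMAT_RGB565;   // 16-bit color, ready for the TFT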

All supported format options are:

 

Includes

Espressif’s esp_camera library is what powers most of the sketch. This is an ESP32 core library, so if your Arduino IDE is ‘esp aware’, no installation is required. Once included, it lets you create objects from structs (hope I use the correct C++ terminology) like gpio_config_t, sensor_t, camera_config_t and camera_fb_t.

The #include “pins_arduino.h” line will look for a pins_arduino.h file in your sketch folder or, if not present (as in my case), in the board’s subdirectory under variants (see the earlier “Which board to select?” section). These are the board-specific pin definitions for my board:

 

As you can see, I included the TFT_eSPI library to drive my board’s ST7789 display. Make sure to change the library’s User_Setup_Select.h file accordingly.

Grabbing a frame
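
In terms of the esp_camera API, the line in question is:

camera_fb_t *fb = esp_camera_fb_get();   // pointer to the most recently filled frame buffer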

This crucial line creates a pointer *fb to the last filled frame buffer (most recent snapshot). In a streaming situation, it will typically live inside a loop. The frame buffer is a struct that contains a pixel data array buf, as well as metadata like frame size, timestamp etc.

Now we need to process the information in the frame buffer, usually as fast as possible. Regardless of the chosen CAMERA_PIXEL_FORMAT, the frame buffer will always store frame values as 8-bit unsigned integers (uint8_t). Since the example sketch asks for RGB565 color values, the following line reconstructs 16-bit RGB565 color values from pairs of 8-bit values in the fb->buf array and stores them in the 16-bit rgbBitmap array.
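
In outline (depending on your display setup, tft.setSwapBytes(true) can take over the byte reordering):

for (size_t i = 0; i < fb->len; i += 2) {
  rgbBitmap[i / 2] = (fb->buf[i] << 8) | fb->buf[i + 1];   // high byte first
}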

The rgbBitmap array is then pushed to the display by the following line:
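
With TFT_eSPI, that is:

tft.pushImage(0, 0, fb->width, fb->height, rgbBitmap);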

After the entire frame buffer has been processed, the resources need to be released asap:
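
That is:

esp_camera_fb_return(fb);   // hand the frame buffer back to the driver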

 

Result and tuning
The above sketch immediately worked and the visual result was smoother than expected. Then I added a couple of lines to measure the achieved frame rate. Running on my TTGO board, the above example sketch attains a stable 22 fps. Not really impressive, but fast enough for a flicker-free display.

 

Trying to improve the frame rate, I wrote a version that grabs a JPEG-encoded frame buffer and decodes it for the display. Here it is:
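
A minimal sketch of the idea, building on the sketch above and using Bodmer’s TJpg_Decoder library (discussed later in this post) as the decoder; the actual version may differ in details:

// Same camera_config_t as above, but with:
//   config.pixel_format = PIXFORMAT_JPEG;
//   config.jpeg_quality = 12;
//   config.fb_count     = 2;
#include <TJpg_Decoder.h>

bool tft_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t *bitmap) {
  tft.pushImage(x, y, w, h, bitmap);      // called for every decoded tile
  return true;
}

void setupDecoder() {                     // call once from setup()
  TJpgDec.setSwapBytes(true);
  TJpgDec.setCallback(tft_output);
}

void loop() {
  camera_fb_t *fb = esp_camera_fb_get();
  if (!fb) return;
  TJpgDec.drawJpg(0, 0, fb->buf, fb->len);  // decode the JPEG frame straight to the display
  esp_camera_fb_return(fb);
}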

This version also works fine, but its frame rate tops out at 17 fps. A JPEG-encoded frame buffer can be useful, though, in cases where the (much smaller) buffer needs to travel over WiFi.

Two sketch parameters in particular are of interest when trying to get higher frame rates: xclk_freq_hz and fb_count.

The maximum frame rate you can theoretically get is limited by the camera sensor itself. It is shown in this table (for OV2640):

Note the counterintuitive effect: higher frame rates at half the clock frequency. These figures were confirmed by a dummy sketch that only grabs a buffer and immediately releases it. For ‘real’ applications that actually process the frame buffer, you simply have to find out what gives the best result.

Although a comment in some example mentions that the value of fb_count should only be larger than 1 for JPEG buffers, the RGB565 example in this post works best (on PSRAM boards) with fb_count = 4, and the JPEG example with fb_count = 2. Again, you simply have to play with these parameters to find the best values for your application and board.

What’s next?
Now that I understand the basic concept, the next step will be to stream the frames over http and control the camera via a browser. All examples that I could find use the same concept: an http daemon (webserver) that serves an index.html file showing camera controls. Actually, they all use the same files. I’m not sure what to do yet: adapt these files or write something myself. As my camera will be mounted on my DIY 2-axis gimbal, I also want to control its movement via the same web page.

Other ToDos:

  • Save frame captures to SD card or web storage (e.g. for time lapse videos)
  • Stream a continuous 360° panorama view using my new DIY gimbal
  • Experiment with the onboard I2S MEMS microphone
  • Experiment with face/object/color/motion recognition

To be continued…

Self-Hosted GPS Tracker

With my T-Beam GPS Tracker operating fine during car trips, I finally have a secure replacement for this very old GPS Tracker App on my nine-year-old HTC Desire HD*.

The TTGO T-Beam came in this box. I only had to drill a small hole for the antenna

 

The concept of the old Android app (meanwhile removed from both the App Store and github by its creator**) is very simple. It periodically sends your phone’s GPS readings as http GET parameters to a self-hosted endpoint of your choice. Since TheThingsNetwork‘s HTTP Integration works more or less the same, I could simply reuse the php backend that I once wrote for the app, now for showing my LoRaWan tracker’s location on a map.

(zoomed out for privacy reasons)

 

Data flow is as follows:

  • Tracker periodically sends GPS readings to TheThingsNetwork. The sketch makes the interval between consecutive send commands depend on the last GPS speed reading
  • HTTP Integration service sends LoRaWan payload+metadata in an http POST variable to a php script (dump.php) on my server (see below)
  • dump.php appends the entire POST value to a log file (log.txt) and writes a formatted selection of relevant data to a text file (last_reading.txt, always overwritten)

The map is an embedded OpenStreetMap ‘Leaflet’ inside an html file, with some simple JavaScript periodically reading the formatted data from last_reading.txt. After extracting the individual GPS readings and a timestamp in the background, these values are used to update the tracker’s location on the map and to refresh textual info.

Data in the logfile can, after conversion if necessary, be used for analyzing and drawing the tracker’s route. There are several standard solutions for this, even online.

 

*Running non-stop on its original battery and without any problems since March 2011. I ❤ HTC!

**Hervé Renault, who suggests a modern alternative on his personal website (in French)


My dump.php file (assumes fields latitude, longitude, speed and sats in the TTN payload):

TTGO T-Beam

joining TheThingsNetwork

LoRaWan and TheThingsNetwork are quite popular in my country, although they may have lost part of their momentum lately*. With no particular use case in mind, it was mainly curiosity that made me purchase a TTGO T-Beam board and join TheThingsNetwork.

The board comes in a box with two antennas (LoRa and GPS) and header pins

My T-Beam (T22 V 1.1) is basically an ESP32 WROVER module with an onboard SX1276 LoRa radio and a u-blox NEO-M8N GPS module, an 18650-type battery holder and an AXP192 power management chip. The manufacturer’s documentation is a bit confusing, being divided over two apparently official github repositories: https://github.com/LilyGO/TTGO-T-Beam and https://github.com/Xinyuan-LilyGO/LilyGO-T-Beam. Also, board versions up to 0.7 are significantly different from later versions.

First, I successfully tested the GPS receiver of my version 1.1 board with this sketch. Its code shows the most important differences between version 1.x boards and previous versions (like 0.7). TX and RX pins for the GPS receiver are now 34 and 12 instead of 12 and 15. Furthermore, the newer boards have a power management chip (AXP192) that lets you control the power supply to individual board components. It requires an include of the axp20x library as well as code for explicitly powering the components you use. I recommend taking a look at the examples from that library.
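
In short, the version 1.x specific part looks something like this (a minimal sketch based on the axp20x examples, not the exact code of the sketch linked above):

#include <Wire.h>
#include <axp20x.h>

AXP20X_Class axp;
HardwareSerial GPS(1);

void setup() {
  Serial.begin(115200);
  Wire.begin(21, 22);                              // AXP192 sits on the I2C bus
  if (axp.begin(Wire, AXP192_SLAVE_ADDRESS) != AXP_PASS) {
    Serial.println("AXP192 not found");
  }
  axp.setPowerOutPut(AXP192_LDO2, AXP202_ON);      // power the LoRa radio
  axp.setPowerOutPut(AXP192_LDO3, AXP202_ON);      // power the GPS module
  GPS.begin(9600, SERIAL_8N1, 34, 12);             // GPS: RX=34, TX=12 on v1.x boards
}

void loop() {
  while (GPS.available()) Serial.write(GPS.read()); // echo raw NMEA sentences
}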

Testing the T-Beam’s LoRa radio either requires a second LoRa board (which I don’t have), or making ‘The Thing’ talk to TheThingsNetwork. I went for the TTN option, obviously. And with a GPS on board, a GPS tracker was a logical choice for my first LoRaWan sketch.

After creating an account on the TTN website, I had to register an ‘Application’ and a ‘Device’, as well as provide a Payload Format Decode function**. Along the way, the system generated some identification codes: Application EUI, Device EUI and Application Key, needed for the OTAA Activation Method that I selected.

Then I ran the sketch below, which I compiled from several sources. After a minute or so, the Serial monitor reported a GPS fix, continued with “EV_JOINING”, and … that was it. Apart from a faulty device or a software issue, I also had to consider the possibility that my T-Beam was not within range of a TTN gateway. Hard to debug, but I was lucky.

TheThingsNetwork Console pages show Application Key and EUIs in hex format. Clicking the <> icon in front of a value will show it in C-style format and then the icon next to it lets you toggle between msb and lsb. It turned out that my sketch expected both EUIs to be in lsb format and the Application Key in msb format. I had used msb for all three of them!

After correcting the EUI values in device_config.h, the “EV_JOINING” in the Serial monitor was followed by “EV_JOINED” and “Good night…”, indicating that the device had been seen by a TTN gateway and gone into deep sleep mode. From that moment on, its payload messages, sent every two minutes (as set in my sketch), appeared at the Data tab of my application’s TTN Console. Looks like my T-Beam is a TTN node!

In order to do something useful with your uploaded data, the TTN Console offers several ‘Integrations’. For my GPS tracker I first tried TTN Mapper, making my GPS readings contribute to a coverage map of TTN gateways. It also lets you download your data in csv format. However, I didn’t see my readings on their map so far, perhaps because my signal was picked up by a gateway with unspecified location. So I switched to HTTP Integration in order to have all readings sent to an endpoint on my php/MySQL server.

My next steps will be testing coverage and reception of the T-Beam when used during car travel, as well as trying some other integrations. Should that raise my enthusiasm for TheThingsNetwork enough, I might even consider running my own TTN gateway in order to improve LoRaWan coverage in my area.

In summary, making my Thing join the Network wasn’t just plug & play, so I hope this post and the below mixture of mainly other people’s code will be of some help to TTN starters.

 

* based on (the lack of) recent activity of TTN communities in my area.


Code for a T-Beam v 1.x (with AXP192 power control chip), compiled from several sources. It sends latitude, longitude, altitude, hdop and number of satellites to TTN.

GPS-Mapper.ino

 

gps.h

 

gps.cpp

 

device_config.h (your OTAA secrets can be copied from the TTN Console; note msb/lsb!)
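
For reference, the usual LMIC layout of such a file, with placeholder zeros (APPEUI and DEVEUI entered lsb-first, APPKEY msb-first, as noted above):

#include <lmic.h>

// Copy the values from the TTN Console. EUIs in lsb order, the key in msb order.
static const u1_t PROGMEM APPEUI[8]  = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
static const u1_t PROGMEM DEVEUI[8]  = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
static const u1_t PROGMEM APPKEY[16] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                                         0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };

void os_getArtEui(u1_t *buf) { memcpy_P(buf, APPEUI, 8); }
void os_getDevEui(u1_t *buf) { memcpy_P(buf, DEVEUI, 8); }
void os_getDevKey(u1_t *buf) { memcpy_P(buf, APPKEY, 16); }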

 

**Payload Format decoder (javascript to be entered at the Payload Formats tab of the TTN Console; reverses the encoding as performed by function buildPacket() in gps.cpp)

 

Where ISS…?

 

Tracking the International Space Station (ISS)

Borrowing most of its code from my What’s Up? air traffic monitor, this small project uses publicly available live data for showing current position, altitude and velocity of the International Space Station on a small TFT display. The original version draws the ISS and its trail over an ‘equirectangular’ map of the Earth, also showing the actual daylight curve and current positions of the Sun and the Moon.

The video below shows the ESP32 variant, but with a couple of small adaptations, the sketch will also run on an ESP8266. As usual, my camera does little justice to the TFT display…

ISS tracker on the 2.2″ display of a TTGO T4 (ESP32)

Position and data are refreshed every 5 seconds, during which time the ISS has travelled almost 40 km! The background image also needs to be updated every 270 seconds – the time in which the daylight curve will have moved one pixel to the left over a 320-pixel-wide equirectangular Earth map. I used the station’s previous and current position for calculating the icon’s rotation angle. This is just to indicate the station’s heading, and doesn’t refer to its actual orientation in space, of course.
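
The pixel mapping and heading calculation boil down to something like this (hedged helper functions of my own naming, assuming a 320×160 equirectangular map; 86400 s / 320 px is where the 270 seconds come from):

// longitude -180..180 -> x = 0..319, latitude 90..-90 -> y = 0..159
void latLonToPixel(float lat, float lon, int &x, int &y) {
  x = (int)((lon + 180.0f) * 320.0f / 360.0f);
  y = (int)((90.0f - lat) * 160.0f / 180.0f);
}

// Icon rotation: heading derived from the previous and current map position (0 = north, clockwise)
float headingDeg(int xPrev, int yPrev, int x, int y) {
  return fmodf(degrees(atan2f((float)(x - xPrev), (float)(yPrev - y))) + 360.0f, 360.0f);
}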

The newer version below takes a different approach by keeping the ISS in the center over a rotating globe. In reality, the display looks much better than in this video.

 

I also made a third version that keeps the ISS icon in the center of a moving Google map with adjustable zoom level, but that requires a Google Maps API key.

This project seems very suitable for educational purposes. With just an ESP board and a TFT display, students can quickly become familiar with C++ coding and some essential maker techniques like Internet communication, JSON parsing and image processing. As a side effect, they will also learn something about Earth movement, cartography, stereometry, Newton’s laws and space research.


Here’s the code for the equirectangular map version:

 

Content of included sprite.h file:

 

Virtual Radar Server

“What’s Up?” revisited

The discovery that you can run Virtual Radar Server (VRS) on a Raspberry Pi triggered me to revise my What’s Up flight radar for ESP32/8266 once again. Mainly interested in aircraft within the range of my ADS-B receivers, I already preferred querying those receivers over using the adsbexchange API, especially after they removed most static metadata from their json response. This had even forced me to set up and maintain my own database for looking up fields like aircraft model, type, operator, country etc.

However, querying Virtual Radar Server on one of my PiAware receivers lets VRS do these lookups from its own up-to-date web databases! Its enriched json response, based on my own receivers’ ADS-B and MLAT reception, contains more details than the current adsbexchange API. Besides, unlike adsbexchange, a local VRS doesn’t mind being queried once every second! Using VRS as the primary source, I can always choose to find possibly filtered-out ‘sensitive’ aircraft by querying adsbexchange as well.

A quick prototype on an ESP32 and a 2.4″ TFT display finally shows flicker-free icon movement after applying some display techniques from earlier projects.

My smartphone doesn’t like filming TFT displays…

While simultaneously running the old adsbexchange version and this new one, I noticed that the new version showed more aircraft, especially nearby aircraft at low flight levels. This is surprising, since adsbexchange.com claims to provide unfiltered data and they should be aware of those missing aircraft because I feed them!

Anticipating a future project (“Automatic Plane Spotter“), the new version also keeps track of the nearest aircraft’s position, expressed in azimuth and elevation. This will be used for driving the stepper motors of a pan-tilt camera mount that I’m currently building.
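
The azimuth/elevation part is straightforward; here is a hedged sketch using a flat-earth approximation (fine for nearby traffic), not necessarily the formulas used in the actual sketch:

#include <math.h>

const float EARTH_R = 6371000.0f;          // metres

// Azimuth/elevation of an aircraft as seen from the receiver location
void azimuthElevation(float obsLat, float obsLon, float obsAlt,
                      float acLat, float acLon, float acAlt,
                      float &azDeg, float &elDeg) {
  float north = radians(acLat - obsLat) * EARTH_R;                         // metres north
  float east  = radians(acLon - obsLon) * cosf(radians(obsLat)) * EARTH_R; // metres east
  float dist  = sqrtf(north * north + east * east);
  azDeg = fmodf(degrees(atan2f(east, north)) + 360.0f, 360.0f);            // 0 = north, clockwise
  elDeg = degrees(atan2f(acAlt - obsAlt, dist));                           // altitudes in metres
}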

 

Stereoscopy on ESP32?

double trouble…?

I knew in advance that trying to make an ESP32 show moving stereoscopic images on a TFT display would be hard. But sometimes you just want to know how close you can get. Besides, it fits my lifelong interest in stereoscopy*.

I found suitable test material in the form of this animated gif (author unknown). Now I ‘only’ had to find a technique for hiding the left frames from the right eye, and vice versa.

 

The first technique that I tried used two of these Adafruit “LCD Light Valves“:

Source: Adafruit website

Together, they formed active shutter glasses for looking at a single TFT display. Looping over the original frames, the display alternately showed the left and right half of a frame, always closing the opposite eye’s valve. The speed of these shutters surprised me, but their accuracy proved insufficient. In order to make the left frame completely invisible to the right eye and vice versa, I had to build in short delays that led to flickering. Without those delays, there was too much ‘leakage’ from the opposite side, a recipe for headaches.

 

The only solution for the above problem was a physical separation of left and right eye-range (i.e. the classical View·Master© approach). Luckily, I had this 3D Virtual Reality Viewer for smartphones lying around. Instead of a smartphone, it can easily hold an ESP32, a battery and two TFT displays. The technical challenge of this method was to keep both displays in sync.

It’s easy for a single ESP32 to drive two identical TFT displays over SPI. The state of a display’s CS line decides whether it responds to the master or not. With CS for left pulled LOW and CS for right pulled HIGH, only the left display will respond, and vice versa. And when data needs to be sent to both displays, you simply pull both CS lines LOW.
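
In TFT_eSPI terms, that amounts to configuring the library without a CS pin (TFT_CS set to -1 in the user setup) and driving the two CS lines yourself. A minimal sketch of the idea (pin numbers are hypothetical):

#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();                 // user setup has TFT_CS -1
const int CS_LEFT = 5, CS_RIGHT = 27;      // hypothetical CS pins

void selectDisplays(bool left, bool right) {
  digitalWrite(CS_LEFT,  left  ? LOW : HIGH);   // LOW = this display listens to SPI
  digitalWrite(CS_RIGHT, right ? LOW : HIGH);
}

void setup() {
  pinMode(CS_LEFT, OUTPUT);
  pinMode(CS_RIGHT, OUTPUT);
  selectDisplays(true, true);     // both displays listen during init
  tft.init();
  selectDisplays(true, false);    // left eye only
  tft.fillScreen(TFT_RED);
  selectDisplays(false, true);    // right eye only
  tft.fillScreen(TFT_BLUE);
}

void loop() {}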

The results of this approach were disappointing. Unfortunately, the time the SPI bus needs for switching between clients proved to be too long, resulting in even more flickering than with the shutter glasses technique.

 

As for now, I can see only one option left: using two ESP32 boards, each driving their own display. This will likely solve the flickering (this post shows that the ESP32 is fast enough), but now I’ll have to keep both ESP32 boards in sync.

Both boards are in the same housing and share a power supply, so the simplest, fastest and most accurate way to sync them seemed to be two cross-wired IO pins. Unfortunately, and for reasons still unknown to me, when I run the sketch below on both boards, they keep waiting for each other forever because they always read 1 on their inPin, even when the other board writes 0 to its outPin.
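
The idea of that sketch, in outline (a reconstruction with hypothetical pin numbers, not the exact code):

const int outPin = 25, inPin = 26;         // each board's outPin goes to the other board's inPin

void setup() {
  pinMode(outPin, OUTPUT);
  digitalWrite(outPin, LOW);
  pinMode(inPin, INPUT);                   // driven by the other board, so no pull-up
}

void waitForOther() {
  digitalWrite(outPin, HIGH);              // tell the other board we're ready
  while (digitalRead(inPin) == LOW) {}     // wait until the other board is ready as well
  digitalWrite(outPin, LOW);               // reset for the next frame
}

void loop() {
  waitForOther();                          // both boards should leave this call almost simultaneously
  // draw the next left/right frame here
}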

 

Now that I’ve come closer to a 3D movie viewer than expected, it would be a shame if this apparently simple synchronization problem were to stop me.

To be continued (tips are welcome)

 


 

*The first 3D image that I saw as a kid was a “color anaglyph” map of a Dutch coal mine. It came with cardboard glasses holding a red and a cyan colored lens. The map looked something like this:

 

There were also these picture pairs, forcing your eyes to combine short-distance focus with long-distance alignment. They gave me some headaches until I invented a simple trick: swap left and right and watch them cross-eyed…

 

My aunt once gave me an antique wooden predecessor of the viewmaster, complete with dozens of black & white images of European capitals.

A newer ‘antique’ that I like for its ingenuity is Nintendo’s Virtual Boy with its rotating mirror technique. Instant headache, but so cool!

 

Somewhat eccentric newcomers were the “stereograms”. Fascinating how fast our brain manages to make 3D spaghetti out of their messy images, even if they move!

 

 

 

View·Master goes 360°

Working on a couple of new display techniques, I turned my Webcam·View·Master into this endlessly looping 360° Panorama viewer. The results were smoother than expected.

 

The video shows the Seiser Alm – Punta d’Oro “Panocam” on my M5Stack Fire. The sketch will run on any ESP32 board with PSRAM, and the TFT display can have any size, since the downloaded webcam footage will be resized to make its vertical dimension match the display’s shortest side (‘landscape mode’, which makes sense here 😉 )
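
The pan itself is simple once the resized panorama sits in PSRAM as one wide RGB565 bitmap; here is a hedged sketch of the loop (sizes and names are illustrative, and the download/decode/resize part is left out):

#include <TFT_eSPI.h>

TFT_eSPI tft = TFT_eSPI();
const int PANO_W = 2048, PANO_H = 240;          // hypothetical panorama size
uint16_t *pano;                                 // panorama bitmap in PSRAM

void setup() {
  tft.init();
  tft.setRotation(1);
  pano = (uint16_t *)ps_malloc(PANO_W * PANO_H * sizeof(uint16_t));
  // ... download the webcam jpg, decode and resize it into pano[] here ...
}

void loop() {
  static int offset = 0;
  uint16_t line[320];
  for (int y = 0; y < PANO_H; y++) {            // push a wrapped 320-pixel-wide window
    for (int x = 0; x < 320; x++) line[x] = pano[y * PANO_W + (offset + x) % PANO_W];
    tft.pushImage(0, y, 320, 1, line);
  }
  offset = (offset + 1) % PANO_W;               // endless pan, wrapping around at the seam
}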

The reason I used the M5Stack (with a 2″ display) for this video was its photogenic quality, but the Dolomite landscape definitely looks much better on my 2.2″ TTGO T4 v1.3 display. My smartphone’s camera is also partly responsible for the video’s pale colors.

Now that I’ve finished the prototype, the next step will be to use the buttons for selecting other panorama webcams and for pausing the pan movement.

Web images on TFT

“Bodmer vs Bodmer…”

 

Looking for a fast way to have an ESP32 download and show web images on a TFT display, I came across the very fast TJpg_Decoder library* on Github. It’s by Bodmer (the guy deserves a statue) and integrates nicely with his TFT_eSPI library.

One of the library’s examples shows how to download a jpg image to a file in SPIFFS, and then decode and render it to a 320×240 TFT display. Rendering time is impressive, but I managed to make it 10% faster by rendering from RAM instead of SPIFFS.

The sketch at the end of this post shows the essential steps. It pushes a 320×240 web image to the display 2x faster than my older sketches, which use the JPEGDecoder library (also by Bodmer!). Another difference from my earlier sketches is the use of the HTTPClient library. It takes away the hassle of dealing with http headers and chunked-encoded responses.

By the way: after rendering the jpg file to the display, you can still copy it from the array to a file in SPIFFS or on SD card for later use. This only takes a single line of extra code.

There is something that I noticed while measuring this library’s speed on ESP32 boards with PSRAM. Sketches that were compiled with PSRAM Enabled rendered almost 15% slower than the same sketch with PSRAM disabled, even if PSRAM wasn’t used at all. This is not necessarily related to the library; it could also be a general PSRAM issue. Luckily, since jpg images for small TFT displays will likely fit inside RAM (even for many ESP8266 boards), there’s no need to enable PSRAM, unless your program needs it for other purposes. Make sure that BUFFSIZE fits in RAM and is large enough to hold your jpg files.

As for grabbing other image formats (gif, bmp, png…) and resizing: most hobbyists will have access to a (local or hosted) webserver with php. Thanks to very powerful php graphic libraries, only a few lines of php code can take care of converting any common image format to jpg, and send it (resized, rotated, cropped, sharpened, blurred or whatever) to your ESP for rendering. If that’s not an option, then this new library can still reduce the size of an image by a factor of 2, 4 or 8 by calling the function TJpgDec.setJpgScale() with the desired factor as parameter.

Below is an adapted and stripped version of Bodmer’s example, using an array instead of SPIFFS. Make sure to select the right setup for the TFT_eSPI library before compiling. Some boards also require you to explicitly switch on the backlight in your sketch.
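
A reconstruction of those essential steps (WiFi credentials and image URL are placeholders; the real sketch may differ in details):

#include <WiFi.h>
#include <HTTPClient.h>
#include <TFT_eSPI.h>
#include <TJpg_Decoder.h>

#define BUFFSIZE 40000                      // must fit in RAM and hold the jpg

const char* ssid     = "yourSSID";          // placeholder credentials
const char* password = "yourPassword";
const char* imageUrl = "http://example.com/image_320x240.jpg";   // placeholder URL

TFT_eSPI tft = TFT_eSPI();
uint8_t jpgBuffer[BUFFSIZE];

// TJpg_Decoder hands over decoded tiles; push each one to the display
bool tft_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t* bitmap) {
  if (y >= tft.height()) return false;
  tft.pushImage(x, y, w, h, bitmap);
  return true;
}

void setup() {
  Serial.begin(115200);
  tft.init();
  tft.setRotation(1);
  tft.fillScreen(TFT_BLACK);

  TJpgDec.setJpgScale(1);
  TJpgDec.setSwapBytes(true);
  TJpgDec.setCallback(tft_output);

  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) delay(250);

  HTTPClient http;
  http.begin(imageUrl);
  uint32_t t = millis();
  if (http.GET() == HTTP_CODE_OK) {
    size_t len = http.getStream().readBytes(jpgBuffer, BUFFSIZE);   // download to RAM
    Serial.printf("Downloaded in %lu ms.\n", millis() - t);
    t = millis();
    TJpgDec.drawJpg(0, 0, jpgBuffer, len);                          // decode from RAM and render
    Serial.printf("Decoding and rendering: %lu ms.\n", millis() - t);
  }
  http.end();
}

void loop() {}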

 

Output on the Serial monitor will be something like:

[HTTP] connection closed or file end.
Downloaded in 188 ms.
Decoding and rendering: 125 ms.

Too slow for your eyes? By storing decoded tiles in a display buffer in (PS)RAM instead of pushing them to the display one by one, the image can be pushed to the display in 44 ms.

On an ESP32, you can speed up the entire process by using both cores. The library has an example that lets core 0 do the decoding. Whenever it finishes a tile (usually 16×16 pixels), core 1 will push it to the display in parallel. However, this will not beat the above display buffer method with regard to the visual part of the process.

*TJpg_Decoder uses the Tiny JPEG Decompressor engine by ChaN