usbxbm consists of two parts:

  1. an ATmega328 device (using RUDY here) that receives image data via USB
  2. a host-side Python script that creates the image data from a source file and sends it to the device

ATmega328 Device

The device mostly just waits for commands from the Python script and either writes the received image data as-is to the display, or handles the initial communication setup.

While the USB communication part is identical regardless of the display, the display handling itself naturally isn't. In its current state, the device works either with an 84x48 Nokia LCD connected via SPI, or with a 128x64 SSD1306-based OLED connected via I2C. While both displays could technically be supported at the same time, I figured it wouldn't make much sense in practice, so different build targets will choose the display implementation accordingly.

Each build target will also store display information such as its name and resolution, and the Python script is going to request that information to process the source images accordingly.
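I don't want to go into the exact wire format here, so take the response layout below as an assumption (width and height as little-endian 16-bit values followed by a NUL-terminated name) - but the idea of parsing such a configuration reply on the host side could look like this:

```python
import struct

def parse_display_config(response: bytes) -> dict:
    """Parse a hypothetical display-configuration reply from the device.

    Assumed layout (NOT the actual usbxbm protocol, just a sketch):
    width (uint16 LE), height (uint16 LE), NUL-terminated display name.
    """
    width, height = struct.unpack_from("<HH", response, 0)
    name = response[4:].split(b"\x00", 1)[0].decode("ascii")
    return {"name": name, "width": width, "height": height}
```

With pyusb, the script would issue a control transfer to the device and feed the raw reply bytes into a parser like this before touching any source image.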

Note that the higher resolution and the (significantly) lower speed of the I2C communication result in a very noticeable difference in maximum frame rate compared to the Nokia LCD.
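Some back-of-the-envelope numbers illustrate the gap. The bus speeds here are assumptions from the datasheets (~4 MHz SPI clock for the Nokia LCD's PCD8544, 400 kHz fast-mode I2C for the SSD1306), and USB transfer time and addressing overhead are ignored entirely:

```python
# Theoretical bus-limited frame rates - bus speeds are datasheet maximums,
# not measurements, and USB / addressing overhead is ignored.
nokia_bits = 84 * 48    # 1 bit per pixel -> 504 bytes per frame
oled_bits = 128 * 64    # 1024 bytes per frame

spi_hz = 4_000_000      # PCD8544 maximum SPI clock
i2c_hz = 400_000        # SSD1306 fast-mode I2C

nokia_fps = spi_hz / nokia_bits             # 1 clock per bit on SPI
oled_fps = i2c_hz / (oled_bits // 8 * 9)    # 9 clocks per byte on I2C (ACK bit)
print(round(nokia_fps), round(oled_fps))
```

Even before USB enters the picture, the I2C path comes out more than twenty times slower per frame - which matches what the videos show.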

Video on Nokia LCD
Video on OLED

The Nokia LCD at the top displays pretty much the entire intro in the same time the OLED below displays only the initial text. Neither of them is necessarily ideal, and while the OLED can't be sped up, the Python script has a frame delay option that adds a pause between processing and sending each frame, which can slow the Nokia LCD down.

Host-side Python Script

The Python script takes a bunch of command line options, and depending on them either handles a video file, a webcam stream, a directory containing a series of images, or a single image as source data.
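The actual command line options are documented in the repository, so the dispatch logic below is only an illustration of the idea - the file extensions and the "digit means webcam index" convention are my assumptions, not necessarily what the script does:

```python
import os

def classify_source(arg: str) -> str:
    """Decide how to treat a source argument (illustrative sketch only -
    the real script's options and heuristics likely differ)."""
    if arg.isdigit():
        return "webcam"     # e.g. "0" for the first camera device
    if os.path.isdir(arg):
        return "directory"  # a series of images
    ext = os.path.splitext(arg)[1].lower()
    if ext in {".mp4", ".avi", ".mkv", ".webm"}:
        return "video"
    return "image"          # fall back to single-image handling
```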

For a detailed list and explanation of each command line parameter, along with a series of examples, check the GitHub repository for now.

Once the script has established the connection with the device, it asks for its display configuration, and loops (if it's not just a single image) through each video frame or image inside a given directory to process it:

  • scaling the image down to the display size based on the received configuration data
  • transforming it to a black-and-white XBM image based on a given threshold value
  • transposing the data for the display's memory arrangements
  • sending it to the device via USB
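The scaling, thresholding, and transposing steps can be sketched as follows. The real script uses OpenCV and Pillow for the first two, while this sketch keeps everything in NumPy so it stays self-contained; the byte layout shown is the SSD1306's page arrangement (8 vertical pixels per byte, least significant bit on top), which I'm assuming here for illustration:

```python
import numpy as np

def to_display_frame(gray: np.ndarray, width: int, height: int,
                     threshold: int = 128) -> bytes:
    """Turn a greyscale image into a 1-bit, page-ordered display frame."""
    # Nearest-neighbour resize via index sampling (stand-in for cv2.resize)
    h, w = gray.shape
    ys = np.arange(height) * h // height
    xs = np.arange(width) * w // width
    resized = gray[ys[:, None], xs]

    # Threshold to 1-bit: pixels at or above `threshold` become white (1)
    bits = (resized >= threshold).astype(np.uint8)

    # Transpose into page layout: each byte holds 8 vertically stacked
    # pixels, LSB = topmost pixel of the page (SSD1306-style, assumed)
    pages = bits.reshape(height // 8, 8, width)
    frame = np.zeros((height // 8, width), dtype=np.uint8)
    for bit in range(8):
        frame |= pages[:, bit, :] << bit
    return frame.tobytes()
```

The resulting 1024 bytes (for the 128x64 OLED) can then be handed to the USB transfer as-is, since the device writes them to the display without further processing.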

To achieve all that, the script depends on OpenCV, Pillow, NumPy, and libusb - but there's a requirements.txt for pip available in the GitHub repository.

As both displays are black-and-white, the threshold value defines at what greyscale value a source pixel becomes black or white. It's going to take a bit of trial and error to get the best possible result, but taking the famous Lenna picture, a few different threshold values will result in something like this:
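As a quick illustration of what the threshold actually changes, here's the effect of the three values on a synthetic 0-255 gradient (one pixel per greyscale value) instead of the actual test image - lower thresholds simply let more pixels end up white:

```python
import numpy as np

gradient = np.arange(256, dtype=np.uint8)  # one pixel per greyscale value

# Pixels at or above the threshold become white, everything else black
for t in (100, 128, 140):
    white = int((gradient >= t).sum())
    print(f"threshold {t}: {white} of 256 pixels end up white")
```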

threshold value 100
threshold value 128 (default)
threshold value 140

...to be continued (I guess?)