The Timecorder is a basic IoT hub/sensor: a small, battery-powered Raspberry Pi Zero package that provides local sensing, networking, and hub functionality. There is a touch screen display for configuration and data viewing. Data can be streamed over wifi using MQTT. The whole hub runs on battery, hopefully for at least half a day.

I will be using a Raspberry Pi case and display, but with the latest Raspberry Pi Zero W board. The very small board, combined with a full-size Raspberry Pi case/display, leaves lots of extra room for a battery and other components. The first couple of build logs will cover the overall packaging: simple, off-the-shelf parts.

Software is written in Python, one of the main advantages of using the Raspberry Pi. OpenCV and TensorFlow will both be used. I'd like a nice, touch-controlled graphical display for data and sensor configuration. Every device should have a screen, and since the hub has a camera, it can be used with QR codes for setup.
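As a rough idea of the QR setup flow, here is a minimal sketch using OpenCV's QR detector. The capture device index and the JSON payload format (wifi credentials, broker address) are assumptions for illustration, not a fixed scheme.

```python
# Sketch: read a setup QR code from the camera with OpenCV.
# Assumes the camera is video device 0 and the QR payload is a small
# JSON blob like {"ssid": "...", "psk": "...", "mqtt": "..."} (assumed format).
import json
import cv2

def read_setup_qr(device=0, timeout_frames=300):
    cap = cv2.VideoCapture(device)
    detector = cv2.QRCodeDetector()
    try:
        for _ in range(timeout_frames):
            ok, frame = cap.read()
            if not ok:
                continue
            data, points, _ = detector.detectAndDecode(frame)
            if data:
                return json.loads(data)   # setup parameters from the QR code
    finally:
        cap.release()
    return None

if __name__ == "__main__":
    print(read_setup_qr())
```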

The basic application I have in mind is just leaving the device in a location to observe. This could be a bird feeder, a fish tank, a busy desk, the sky or a street intersection. For anything interesting, the Timecorder would record and broadcast details and deltas about events, measurements and objects in the scene. ML and external ML APIs provide object identification, such as a bird species, parts on a workbench, or a pedestrian or family member. An example higher-level recognizer might identify aquarium states: being fed, water change, cleaning, activity.

By design, timecording uses all available sensors; there is no separate configuration to, for example, enable temperature readings. If a sensor is on the device or connected, it will be recorded. The key is that the Timecorder only records 'interesting' events, such as the temperature rising more than 1 degree, so the volume of data recorded or transmitted is greatly reduced.
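A minimal sketch of that 'only record interesting events' idea: keep a reading only when it moves more than a threshold away from the last recorded value. The read_temperature() helper and the publish step are placeholders; on the real device the publish would go out over MQTT.

```python
# Sketch: record/publish a reading only when it changes by more than a threshold.
import time
import json
import random

THRESHOLD = 1.0   # degrees; smaller changes are ignored

def read_temperature():
    # Placeholder for whatever temperature sensor is attached.
    return 20.0 + random.uniform(-2.0, 2.0)

def publish(event):
    # Stand-in for an MQTT publish to the hub's broker.
    print(json.dumps(event))

last_recorded = None
while True:
    value = read_temperature()
    if last_recorded is None or abs(value - last_recorded) >= THRESHOLD:
        publish({"t": time.time(), "sensor": "temperature", "value": value})
        last_recorded = value
    time.sleep(5)
```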

Simple time-lapse recording will be available, but with a twist: it will automatically adjust the frame rate based on the scene and its movement. Slow-moving clouds need a slow frame rate, faster-moving clouds a higher one. The frame rate can also change dynamically, based on in-the-moment movement.
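One possible way to do the motion-adaptive part is simple frame differencing with OpenCV: compare consecutive frames and shorten the capture interval when the scene is changing. The interval bounds and motion scaling below are guesses to show the shape of it.

```python
# Sketch: motion-adaptive time-lapse using frame differencing.
import time
import cv2

MIN_INTERVAL = 0.5    # seconds between frames when there's lots of motion
MAX_INTERVAL = 30.0   # seconds between frames for a nearly static scene
MOTION_SCALE = 10.0   # how strongly motion shortens the interval (assumed tuning)

cap = cv2.VideoCapture(0)
prev_gray = None
interval = MAX_INTERVAL

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Mean absolute difference between frames as a crude motion score.
        motion = cv2.absdiff(gray, prev_gray).mean()
        interval = max(MIN_INTERVAL, MAX_INTERVAL / (1.0 + motion * MOTION_SCALE))
        cv2.imwrite(f"frame_{int(time.time())}.jpg", frame)
    prev_gray = gray
    time.sleep(interval)
```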

I think the CV/ML and time-lapse features can be used together. Time-lapse frames provide optimized input data for CV/ML, and in reverse, CV/ML identifies features that the time-lapse needs to record.

I want the UI/control to stay simple, merging a command-line approach with a GUI. There aren't very good frameworks for this, or they are overkill. I just need to display lists of values or menu items, charts, with touch/stylus control (plus voice). The plan: create commands to retrieve sensor data, live or historical, as plain command-line tools with JSON output. Do the same with command metadata, lists of menus and keyword options, also output as JSON. Then a single JSON 'UX' displays that JSON in a 'graphical' way and allows appropriate interaction and operations. It could be json->svg, or json->gl...
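To make the json-everywhere idea concrete, here is a rough sketch of one such command: it can dump either a sensor reading or its own menu/option metadata as JSON, so a generic front end can render it. The sensor names, flags, and metadata shape are illustrative, not a fixed schema.

```python
# Sketch: a command-line sensor query that emits JSON data or JSON metadata.
import json
import argparse
import time
import random

COMMANDS = {
    "temp": {"label": "Temperature", "unit": "C", "modes": ["live", "history"]},
    "light": {"label": "Light level", "unit": "lux", "modes": ["live"]},
}

def main():
    parser = argparse.ArgumentParser(description="Timecorder sensor query")
    parser.add_argument("sensor", nargs="?", help="sensor to read")
    parser.add_argument("--meta", action="store_true",
                        help="emit command metadata instead of a reading")
    args = parser.parse_args()

    if args.meta or args.sensor is None:
        # Menu/keyword metadata the UX layer can turn into menus and charts.
        print(json.dumps(COMMANDS, indent=2))
    else:
        # Placeholder live reading; a real build would talk to the sensor.
        print(json.dumps({"sensor": args.sensor, "t": time.time(),
                          "value": round(random.uniform(18, 24), 2)}))

if __name__ == "__main__":
    main()
```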

I'm calling this the cmd compositor: it will turn a command-line terminal into a nice-looking, interactive UX. Commands and output data are objects, and can be manipulated.