
Options for Generative AI Models

A project log for Generative AI on a Microcontroller

The Electronic Die of the Future

timTim • 11/05/2023 at 21:28 • 0 Comments

Since our goal is to generate images, we need to select a suitable artificial neural network architecture that can generate images from a given input.

As of today (2023), three architectures are typically discussed in this context:

  1. Generative Adversarial Networks (GAN)
  2. Conditional Variational Autoencoders (CVAE)
  3. Diffusion Models

Diffusion models are the newest of the bunch and are at the core of the AI image generators that are currently creating so much hype. Latent diffusion, the architecture at the core of Stable Diffusion, is actually a combination of a diffusion model and a VAE.

Obviously, a diffusion model is the most interesting option to implement, so I will start with that. There is a risk that it turns out too heavyweight for a microcontroller, though, even with the problem simplified as much as we already have.
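To illustrate what training a diffusion model boils down to, here is a rough PyTorch sketch of the core DDPM-style training step: add noise to an image at a random timestep and train a small network to predict that noise. The network layout, the 16x16 image size and the 100-step schedule are just placeholders for now, not final choices for this project.

```python
import torch
import torch.nn as nn

T = 100                                         # number of diffusion steps (placeholder)
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

class TinyDenoiser(nn.Module):
    """Predicts the noise added to a flattened 16x16 grayscale image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(16 * 16 + 1, 128), nn.ReLU(),
            nn.Linear(128, 16 * 16),
        )

    def forward(self, x, t):
        # Condition on the timestep by appending it as one extra input feature.
        t_feat = t.float().unsqueeze(1) / T
        return self.net(torch.cat([x, t_feat], dim=1))

def training_step(model, x0):
    """One DDPM training step: noise a clean batch x0, predict the noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward (noising) process
    pred = model(x_t, t)
    return nn.functional.mse_loss(pred, noise)
```

At inference time the trained denoiser would have to be applied once per timestep, which is exactly why this approach may end up too heavy for a microcontroller.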

Variational Autoencoders may be a good alternative: a simpler architecture with a better chance of fitting on a small device. This is therefore second priority, at least as a backup.
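For comparison, a minimal VAE could look like the sketch below. Only the decoder would have to run on the microcontroller; the encoder is needed only during training on a PC. Again, all layer and latent sizes are placeholders, not decisions.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE for flattened 16x16 images with a small latent space."""
    def __init__(self, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(16 * 16, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 16 * 16), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```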

Generative Adversarial Networks were the most lauded approach before diffusion models stole the show. Since they essentially train a decoder (the generator) that can be used much like a VAE decoder, they may also be an interesting option for a lightweight model. Compared to VAEs, they may be better suited to creating novel images, but that is something to find out. Unfortunately, GANs appear to be harder to train than the other two options, so I will park this for now and maybe revisit it later.
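For completeness, this is how a trained GAN generator would be used at inference time: exactly like a VAE decoder, it maps a random latent vector to an image. The generator network here is just a placeholder sketch with the same made-up sizes as above.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps a small latent vector to a flattened 16x16 image."""
    def __init__(self, latent_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 16 * 16), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

generator = TinyGenerator()
z = torch.randn(1, 4)               # random latent code
image = generator(z).view(16, 16)   # only this forward pass would run on-device
```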

Generally, it has to be assumed that generating images requires more processing power and larger neural networks than image recognition alone (a discriminator). There are plenty of examples of running MNIST inference on an Arduino. Does it work for generative NNs as well? That remains to be seen...

Next Steps

1) Investigate diffusion models

2) Look into variational autoencoders
