M.A.R.K. Custom DNN Model Training and Inference

Train your machine learning models in Google Colab and easily optimize them for hardware accelerated inference!

These might be very difficult times for many of us, depending on the part of the world you live in. Due to the worsening coronavirus pandemic, many countries have implemented strict lockdown policies. I myself recently had to spend 14 days in quarantine, staying indoors 24 hours a day. I decided to make the most of it and continue working on the things I am excited about, namely robotics and machine learning. And this is how aXeleRate was born.

aXeleRate started as a personal project of mine for training YOLOv2-based object detection networks and exporting them to the .kmodel format to be run on the K210 chip. I also needed to train image classification networks, and sometimes I needed to run inference with TensorFlow Lite on a Raspberry Pi. As a result, I had a whole bunch of disconnected scripts with somewhat overlapping functionality. So, I decided to fix that by combining all the elements into an easy-to-use package and, as a bonus, making it fully compatible with Google Colab.

aXeleRate is meant for people who need to run computer vision applications (image classification, object detection, semantic segmentation) on edge devices with hardware acceleration. It offers an easy configuration process through a config file or a config dictionary (for Google Colab) and automatic conversion of the best model of a training session into the required file format. You put properly formatted data in, start the training script, and come back to find a converted model that is ready for deployment on your device!

Here is a quick rundown of the features:

Key Features

  • Supports multiple computer vision models: object detection (YOLOv2), image classification, and semantic segmentation (SegNet-basic)
  • Different feature extractors to be used with the above network types: Full Yolo, Tiny Yolo, MobileNet, SqueezeNet, VGG16, ResNet50, and Inception3.
  • Automatic conversion of the best model of the training session. aXeleRate will download the suitable converter automatically.
  • Currently supports trained model conversion to the .kmodel (K210) and .tflite formats. Support is planned for .tflite (Edge TPU) and .pb (TF-TRT optimized).
  • Easier model version control. Keras model files and converted models are saved in the project folder, grouped by training date. Training history is saved as a .png graph in the model folder.
  • Two modes of operation: locally, with the train.py script and a .json config file, or remotely, tailored for Google Colab, with module import and a dictionary config.
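To make the dictionary-config mode concrete, a training session for a one-class person detector could be described with a dictionary along the lines of the sketch below. This is illustrative only: the key names follow the project's description, but they may differ between aXeleRate versions, and the dataset paths and hyperparameter values are placeholders you would replace with your own.

```python
# Illustrative aXeleRate-style training configuration for a person detector.
# Key names and defaults are assumptions; paths are placeholders.
config = {
    "model": {
        "type": "Detector",                 # YOLOv2-based object detection
        "architecture": "MobileNet7_5",     # one of the supported feature extractors
        "input_size": 224,
        "labels": ["person"],               # single class: person
        # Standard YOLOv2 anchors; this project keeps the defaults unchanged
        "anchors": [0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
                    5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
    },
    "train": {
        "train_image_folder": "person_dataset/imgs",
        "train_annot_folder": "person_dataset/anns",
        "batch_size": 32,
        "learning_rate": 1e-4,
        "actual_epoch": 50,
        "saved_folder": "person_detector",  # results grouped by training date
    },
    "converter": {
        # convert the best model to .kmodel (K210) and .tflite after training
        "type": ["k210", "tflite"],
    },
}
```

The same structure, saved as a .json file, drives the local train.py mode; in Colab you pass the dictionary directly.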

In this article we’re going to train a person detection model for use with the K210 chip on the cyberEye board installed on the M.A.R.K. mobile platform. M.A.R.K. (I'll call it MARK in the text) stands for Make a Robot Kit, and it is an educational robot platform in development by TinkerGen Education. I take part in the development of MARK, and we’re currently preparing to launch a Kickstarter campaign. One of the main goals of MARK is to make machine learning concepts and workflows more transparent and easier to understand and use for teachers and students.

  • 1
    Understanding of the workflow

    As mentioned before, aXeleRate can be run on a local computer or in Google Colab. We’ll opt for Google Colab, since it simplifies the preparation step.

    Let’s open the sample notebook

    Go through the cells one by one to get an understanding of the workflow. This example trains a detection network on a tiny dataset that is included with aXeleRate. For our next step we need a bigger dataset to train a genuinely useful model.

  • 2
    Training the object detection model in Colab

    Open the notebook I prepared. Follow the steps there, and in the end, after a few hours of training, you will have .h5, .tflite, and .kmodel files saved in your Google Drive. Download the .kmodel file, copy it to an SD card, and insert the SD card into the mainboard. In our case with M.A.R.K. the mainboard is a modified version of the Maixduino called cyberEye.

    MARK is an educational robot for introducing students to the concepts of AI and machine learning. There are two ways to run the custom model you have just created: using MicroPython code, or TinkerGen’s graphical programming environment, called Codecraft. While the first is undoubtedly more flexible in how you can tweak the inference parameters, the second is more user-friendly.
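    If you go the MicroPython route, the inference loop on the K210 looks roughly like the sketch below, which uses MaixPy's KPU module. The model filename, thresholds, and anchor values here are illustrative assumptions; substitute the values from your own training session. Note that this runs on the device itself, not in a desktop Python interpreter.

    ```python
    # MicroPython sketch for MaixPy-based K210 boards such as cyberEye.
    # Illustrative only: the model path, thresholds and anchors below are
    # assumptions -- use the values from your own training run.
    import sensor
    import KPU as kpu

    ANCHORS = (0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
               5.47434, 7.88282, 3.52778, 9.77052, 9.16828)

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.run(1)

    task = kpu.load("/sd/person.kmodel")        # the model copied to the SD card
    kpu.init_yolo2(task, 0.5, 0.3, 5, ANCHORS)  # prob threshold, NMS, anchor count

    while True:
        img = sensor.snapshot()
        detections = kpu.run_yolo2(task, img)
        if detections:
            for d in detections:
                img.draw_rectangle(d.rect())
                # horizontal center of the bounding box, as Codecraft reports it
                print("person at x =", d.x() + d.w() // 2)
    ```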

  • 3
    Running custom model with graphical programming environment

    If you opt for the graphical programming environment, go to the Codecraft website, https://ide.tinkergen.com, and choose MARK(cyberEye) as the target platform.

    Click on Add Extension and choose Custom models, then click on Object Detection model.

    There you will need to enter the filename of the model on the SD card, the name of the model as it will appear in the Codecraft interface (this can be anything; let's enter Person detection), the category name (person), and the anchors. We didn't change the anchor parameters, so we will just use the default ones.

    After that you will see that three new blocks have appeared. Let's choose the one that outputs the X coordinate of the detected object and connect it to Display... at row 1. Put that inside the loop and upload the code to MARK. You should see the X coordinate of the center of the bounding box around the detected person on the first row of the screen. If nothing is detected, it will show -1.
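    To make the displayed number concrete: the block simply reports the horizontal center of the detection's bounding box, or -1 when there is no detection. A hypothetical plain-Python equivalent (not Codecraft's actual implementation) would be:

    ```python
    def detection_center_x(box):
        """Return the X coordinate of the center of a bounding box.

        `box` is (x, y, w, h) with x, y the top-left corner, as YOLO-style
        detectors on the K210 typically report. Returns -1 when there is
        no detection, matching the behaviour described above.
        """
        if box is None:
            return -1
        x, y, w, h = box
        return x + w // 2

    # e.g. a 60-pixel-wide box starting at x=100 has its center at x=130
    ```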
