
DeepRC Robot Car

Building a robot car with a smartphone at its heart.

With the imminent dawn of self-driving vehicles, the art and craft of developing their software and hardware remains elusive. It's no surprise: after all, the domain is still very much in its infancy. Engineering a robust system involves tight cooperation between the most cutting-edge software and (potentially) dangerous, expensive hardware. Therefore, the bar for entry is set very high, within reach only of companies with vast resources, such as Alphabet's Waymo.

I believe that making an approximation of the technology available to anyone who wishes to try it is possible, just as it was possible with VR and Google Cardboard. This project focuses on building a low-cost, DIY robot car with advanced sensors, actuation and connectivity. The goal is to allow anybody to implement the cutting edge robotics and self-driving car algorithms on this small scale robot.

Motivation

As a software engineer, I would often take for granted the democratisation and openness of the field I work in. I'd often overlook the wonder of state-of-the-art technology simply being there, in the open, on GitHub, a few terminal commands away from running on my desktop.

Fortunately, the field of self-driving vehicles seems to be taking steps in that direction. There are MOOCs and free online materials offered to anyone who's interested in the topic. However, they still lack a hardware platform where learners can safely and quickly evaluate their ideas. The best we can hope for right now is a simulator.

Can you already guess where this is going?

The platform I’m proposing is not unlike Google Cardboard. The principles remain the same: low cost (or made of things you likely already own) and no need for expert knowledge to get started. I’ve been working on a remotely controlled car based on a smartphone that is able to reuse the sensors (GPS, compass, accelerometer, light), connectivity and high-res camera available on all modern devices.

Design

The robot has three main components:

A 3D printed chassis

The robot's main body is essentially an advanced smartphone case.

Since the phone is placed horizontally, its camera would normally be looking at the ceiling, which is not very useful. Therefore, a mirror is placed above the camera at a 45° angle, so that the robot can look forward.

A number of active components are placed on the chassis. The car is propelled by a brushless motor connected to the rear axle via a series of reduction gears, with a rotary encoder attached to the motor's shaft. Steering is controlled by a small servo placed at the front of the chassis.

On-board electronics

To control the actuators and read telemetry data, a small number of electronic components are installed on the chassis.

The main circuit board is based on the excellent nRF52 SoC, which provides a Bluetooth LE radio to communicate with the phone. The servo is controlled by the chip directly; the motor, however, requires an additional Electronic Speed Controller (ESC).
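A hobby servo like this one is typically driven with a 50 Hz PWM signal whose pulse width (roughly 1-2 ms) encodes the commanded angle. As a minimal sketch of the mapping such a board would compute - the pulse-width range and the ±30° steering limit below are illustrative assumptions, not measured DeepRC values:

```python
SERVO_MIN_US = 1000   # pulse width at full left, microseconds (assumed)
SERVO_MAX_US = 2000   # pulse width at full right (assumed)
STEER_LIMIT_DEG = 30  # mechanical steering limit (assumed)

def steering_to_pulse_us(angle_deg: float) -> int:
    """Convert a steering angle in degrees to a servo pulse width."""
    # Clamp to the mechanical limits first.
    angle = max(-STEER_LIMIT_DEG, min(STEER_LIMIT_DEG, angle_deg))
    # Linearly interpolate between the min and max pulse widths.
    t = (angle + STEER_LIMIT_DEG) / (2 * STEER_LIMIT_DEG)
    return round(SERVO_MIN_US + t * (SERVO_MAX_US - SERVO_MIN_US))
```

A centered wheel (0°) then maps to the usual 1500 µs neutral pulse.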

Software

In order to simplify development, the software is split between a computer and a phone app. The goal is for the app to be as simple as possible, implementing only the most basic connectivity. This puts the app outside of the iterative development loop, which is much faster on a computer due to more powerful hardware and the availability of robotics software such as ROS. The downside is that the robot must always be connected to a computer (via WiFi) and some latency is introduced when reading sensors.

Thanks to the video-processing and connectivity capabilities of modern smartphones, latency is kept to a minimum: an average 80 ms lag for the camera image and imperceptible lag when transmitting steering commands and receiving telemetry over local WiFi. With real-time control delegated to the on-board electronics, the robot can be driven from a comfortable 10 Hz+ loop.
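The computer-side loop described above only needs to hold a fixed rate, since the fast inner control runs on the robot. A minimal sketch of such a fixed-rate loop (names and the deadline-based sleep strategy are my own illustration, not the project's actual code):

```python
import time

def run_control_loop(step, rate_hz=10.0, steps=3):
    """Call `step()` at a fixed rate, sleeping off the remainder of
    each period so that jitter in `step()` does not accumulate."""
    period = 1.0 / rate_hz
    next_deadline = time.monotonic()
    for _ in range(steps):
        step()  # read telemetry, compute and send a steering command
        next_deadline += period
        time.sleep(max(0.0, next_deadline - time.monotonic()))

# Record when each iteration fired, to check the timing:
ticks = []
run_control_loop(lambda: ticks.append(time.monotonic()), rate_hz=10.0, steps=3)
```

Sleeping toward an absolute deadline (rather than a fixed `sleep(period)`) keeps the average rate at 10 Hz even when an individual step is slow.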

Applications and Future Work

Thanks to the varied range of available sensors, the car can be used to implement a number of robotics and self-driving algorithms - from robust localisation by fusing accelerometer, compass, encoder and GPS data to navigation using computer vision (or Machine Learning).

Watch this space for updates on Computer Vision and Deep Reinforcement Learning-based navigation projects!

Thank you for reading this far. I've put a lot of time and love into this project, learning about 3D modelling, 3D printing, circuit design and robotics along the way. I'm primarily a software engineer, so for me the most fun part is only starting now that the hardware is mostly finalised.

I'd love to hear what you think about DeepRC and how you would improve it.

  • Experimenting with Deep Reinforcement Learning

Piotr Sokólski • 07/14/2019 at 19:19 • 1 comment

    I’ve made an attempt at implementing collision avoidance using Deep Reinforcement Learning - with partial success.

    Problem and Setup

    In order to apply a Reinforcement Learning algorithm, the goal has to be specified in terms of maximizing cumulative reward.

The objective of the robot was to drive for as long as possible without hitting any obstacle. This objective was shaped by a reward function: the robot received a penalty when hitting an obstacle. Since the robot receives zero reward for just driving around and a negative reward (penalty) for hitting an obstacle, maximizing cumulative reward should make a collision-avoiding behavior emerge.
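This reward scheme can be sketched in a few lines - the -10 penalty magnitude here is an illustrative assumption, not the value used in the project:

```python
def reward(collision: bool, collision_penalty: float = -10.0) -> float:
    """Zero reward for driving, a penalty on impact (magnitude assumed)."""
    return collision_penalty if collision else 0.0

# A short episode where the robot collides on the last step:
episode = [False, False, False, True]
total = sum(reward(c) for c in episode)  # cumulative reward for the episode
```

The only way for the agent to increase cumulative reward is to postpone (ideally forever) the collision events.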

Collisions were detected automatically using an accelerometer. Any detected collision would also trigger a "back up and turn around" recovery action, so that the robot could explore its environment largely unattended. It drove at a small fixed speed (around 40 cm/s), with the Reinforcement Learning Agent in control of the steering angle.
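Accelerometer-based collision detection can be as simple as thresholding the magnitude of the acceleration vector. A sketch under that assumption (the 3 g threshold is hypothetical, and a real detector would also filter out the spikes caused by the recovery action itself):

```python
def is_collision(accel_xyz, threshold_g=3.0):
    """Flag a collision when the acceleration magnitude (in g)
    exceeds a threshold. The 3.0 g value is an assumed example."""
    ax, ay, az = accel_xyz
    magnitude = (ax * ax + ay * ay + az * az) ** 0.5
    return magnitude > threshold_g
```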

    Reward Shaping and the Environment

Reinforcement Learning Agents make decisions based on the current state. The state usually consists of observations of the environment, sometimes extended with historical observations and some internal state of the Agent. In my setup, the only observations available to the Agent were a single (monocular) camera image and a history of past actions, the latter added so that velocity and "decision consistency" could be encoded in the state. The Agent's steering control loop ran at 10 Hz. There was also a much finer-grained PID control loop for velocity (maintaining a fixed speed in this case) running independently on the robot.
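The inner velocity loop is a standard PID controller. A minimal, self-contained sketch - the gains and time step below are illustrative, not the robot's tuned values:

```python
class PID:
    """Minimal PID controller, as used for the fixed-speed inner loop."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """Return the actuator command for one control step."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hold 0.4 m/s with assumed gains, starting from standstill:
pid = PID(kp=0.5, ki=0.1, kd=0.0, dt=0.01)
command = pid.update(setpoint=0.4, measured=0.0)
```

In the real robot the measured speed would come from the rotary encoder on the motor shaft.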

With the reward specified as in the previous section, it was not surprising that the first learned behavior was driving around in tight circles - as long as there is enough space, the robot can drive in circles forever. Although correct given the problem statement, this was not the behavior I was looking for. Therefore, I added a small penalty that keeps increasing unless the robot drives straight. I also added a constraint on how much the steering angle can change between frames, to reduce jerk.

This definition of the reward function is sparse - the robot can drive around for a long time before receiving any feedback from hitting an obstacle, especially as the Agent gets better. I improved my results by "backfilling" the penalty to a few frames before the collision, increasing the number of negative examples. This is not an entirely correct thing to do, but it worked well in my case: the robot was driving at very low speeds, and the Agent communicated with the robot over WiFi, so there was some lag involved anyway.
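The backfilling step can be sketched as a simple post-processing pass over an episode's rewards (the 3-frame horizon and -10 penalty are assumed example values):

```python
def backfill_penalty(rewards, horizon=3, penalty=-10.0):
    """Copy the collision penalty onto the `horizon` steps preceding
    each collision, to densify the sparse negative feedback."""
    out = list(rewards)
    # Scan the original sequence so freshly backfilled penalties
    # are not themselves treated as collisions.
    for i, r in enumerate(rewards):
        if r == penalty:
            for j in range(max(0, i - horizon), i):
                out[j] = penalty
    return out

# A 6-step episode that ends in a collision:
shaped = backfill_penalty([0, 0, 0, 0, 0, -10.0])
```

The frames just before impact now also carry a negative label, which is approximately right at low speed, where those frames already show the obstacle dead ahead.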

    Deep Details

For the Deep Reinforcement Learning algorithm I chose Soft Actor-Critic (SAC), specifically the tf-agents implementation. I picked this algorithm because it promises to be sample-efficient (decreasing data collection time - an important feature when running on a real robot rather than in simulation) and there were already some successful applications to simulated cars and real robots.

Following the method described in Learning to Drive Smoothly… and Learning to Drive in a Day, I sped up training by encoding the images provided to the Agent with a Variational Auto-Encoder (VAE). Data for the VAE was collected using a random driving policy; once pre-trained, the VAE's weights were kept fixed during SAC Agent training.
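Conceptually, the Agent's observation is then the frozen encoder's latent code concatenated with the recent action history. A sketch of that assembly step - the 32-dimensional latent size, the stub encoder, and all names here are placeholders, not the project's actual model:

```python
import numpy as np

def make_observation(image, action_history, encoder):
    """Build the Agent's state vector: a frozen VAE encoder compresses
    the camera frame to a latent code, which is concatenated with the
    recent steering actions."""
    latent = encoder(image)  # e.g. a 32-d latent vector
    return np.concatenate([latent, np.asarray(action_history)])

# Stub standing in for the pre-trained, frozen VAE encoder:
stub_encoder = lambda img: np.zeros(32)
obs = make_observation(np.zeros((64, 64, 3)), [0.0, 0.1, -0.1], stub_encoder)
```

Because the encoder is fixed, the SAC networks only have to learn a policy over a small latent vector instead of raw pixels, which is what makes the approach sample-efficient enough for a real robot.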

    The Good, the Bad and the Next Steps

    The Agent has successfully learned to navigate some obstacles in my apartment. The steering was smooth and the robot would generally prefer to drive straight for longer periods of time.

I was not able to consistently measure improvement in episode length (a common metric for sparse-reward problems) - it depended largely on where in the apartment the robot was started.

    Unfortunately, the learned policy was not robust - the robot would not avoid previously...


  • Localization Part 1.

Piotr Sokólski • 05/30/2019 at 16:51 • 0 comments

In this update we present a position-tracking algorithm for a car-like robot. The algorithm fuses readings from multiple sensors to obtain a "global" and a local position. We will show an implementation based on ROS.

We will be using the robot's position estimates to bootstrap an autonomous driving algorithm. Even though it is possible to develop an end-to-end learning algorithm that does not rely on explicit localization data (e.g. [1]), having a robust and reasonably accurate method of estimating the robot's past and current position is essential for automated data collection and for evaluating performance.

    Part 1. “Global” Localization

First and foremost, we would like to know the position of the robot relative to a known landmark, such as the start of a racetrack or the center of the room. In a full-sized car this information can be obtained from a GPS sensor. Even though we could collect GPS readings from a DeepRC robot, the accuracy would pose a problem: a GPS position can be off by up to a few meters, and accuracy suffers even more indoors. In our case this is unacceptable, since the robot needs to navigate small obstacles and racetracks only a few meters long. Other methods (such as SLAM) exist, but they come with their own set of requirements and drawbacks.

Instead, we implement a small-scale positioning system using AR tags and an external camera. Both the robot and the "known landmark" (or origin point) have an AR tag placed on top of them, and an AR tag detection algorithm estimates their relative position. A point cloud obtained from an RGBD camera is used to improve the accuracy of the pose estimation.
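Once the camera has estimated a pose for each tag, the robot's pose relative to the origin tag follows from composing the two transforms. A sketch using 4×4 homogeneous matrices (the function names and the translation-only example are my own illustration):

```python
import numpy as np

def relative_pose(T_cam_origin, T_cam_robot):
    """Pose of the robot's tag expressed in the origin tag's frame,
    given the camera's view of both tags as 4x4 homogeneous transforms."""
    return np.linalg.inv(T_cam_origin) @ T_cam_robot

def translation(x, y, z):
    """Helper: a pure-translation homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Camera sees the origin tag 1 m ahead, and the robot tag 1.5 m ahead
# and 0.5 m to the side; the robot's pose in the origin frame follows:
T = relative_pose(translation(0.0, 0.0, 1.0), translation(0.5, 0.0, 1.5))
```

In a ROS implementation this composition is exactly what the tf2 transform tree performs when both tag poses are published relative to the camera frame.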



Discussions

Bill Healey wrote 06/19/2019 at 17:52 point

Rather than using an external camera for position detection, something like VINS-Mono /  VINS-Mobile (  https://github.com/HKUST-Aerial-Robotics/VINS-Mobile ) would be really cool.  This allows you to get the position purely based on the IMU and the "first-person" camera movement.

It can be tied in to ROS to perform mapping, and more advanced navigation.


besenyeim wrote 06/19/2019 at 16:42 point

Why did you select the BT link? Many smartphones have USB OTG, serial devices are usually supported by the Android's Linux kernel. I would guess lower latency with the wired method, and with smart design, you can use the phone's battery omitting the external one.


Piotr Sokólski wrote 06/19/2019 at 17:44 point

I picked an iPhone because I already knew how to program it and I had one lying around... In hindsight an Android would have been a better choice for latency. I'm not using anything specific to iPhone, so perhaps one day I'll be able to port it.

In theory the motor can draw 2 A+, so that probably couldn't be handled by USB power.


besenyeim wrote 06/19/2019 at 20:14 point

I see. USB (I think including OTG) supplies min. 2.5W. If the average consumption doesn't exceed that, sufficient buffers and converters can be used. So as I understand, the goal is to reduce HW requirements. Leaving out the battery and its charger circuit simplifies it.


Mike Szczys wrote 06/03/2019 at 20:37 point

Hey, great project! I love the periscope idea to re-orient the back camera. Can it do line-following using the selfie camera as a sensor? I imagine focal length is a problem, but maybe if it's black/white contrast tracking only (as with most line-followers) it could work.


Piotr Sokólski wrote 06/04/2019 at 09:13 point

Thanks Mike! I haven't thought about it before and it's an interesting idea, I'll try it out.  I've been experimenting with line/lane following using the front facing camera, I'm preparing another project log with the results. 


Daren Schwenke wrote 06/04/2019 at 09:51 point

You can get a clip on macro lens for about 5 McDonalds Cheeseburgers which would allow it to focus on the ground at that distance.  The edge quality of the image is distorted and your imaging area would be pretty small though.  

*Really* accurate line follower, for pencil lines?

Add a torch and a prism, take samples, and do some spectroscopy?  Why yes, there is life here..  :)


besenyeim wrote 06/19/2019 at 16:33 point

As an almost free alternative, one can glue a lens salvaged from a CD/DVD drive. Smaller than the clip-on lens, but the result is similarly bad.

