
BunnyBot

BunnyBot is a ROS-based robot platform that can perform useful tasks using its built-in gripper and vision system.

The goal of BunnyBot is to produce an autonomous robot that is actually useful in daily life - fetching bins of electronic components, throwing out trash, fetching drinks, etc. The robot is fully autonomous, requiring no user interaction beyond voice commands and gestures.

Goals

- The robot should autonomously navigate, avoid obstacles and interact with objects using a gripper.

- Aside from an initial mapping phase, require no teleoperation or manual user input.

- Be multi-purpose, performing a number of common tasks.


License

BSD license on the code and design files unless otherwise specified (e.g. the apriltags nodelet is GPL).


Overall design

The core function of the robot is to move objects from one point to another. The object may be a target to be transported or a tool that performs some task (e.g. a cleaning pad, dusting tool, etc.). The robot should contain the minimum hardware required to accomplish this.


Mechanical design

I'm using a single-DoF arm and a simple planar gripper. This has a few advantages over more complex arm/gripper setups:

- can be produced fairly cheaply using 3D printers

- no need for complex inverse kinematics

- less flex from joint backlash, higher load capacity, lower weight

The depth camera is mounted at the back to minimize its detection deadzone. As a result, the body of the robot is wedge-shaped so that the camera has an unobstructed view when the arm is lowered. The right side of the robot has a triangular tessellation pattern; this somewhat offsets the increased weight on that side from the arm.


Hardware

I've experimented with several iterations of hardware setups and settled on this:

- Neato XV-11 LIDAR for localization

- Intel RealSense R200 depth sensor. This provides a depth stream for object detection as well as a 1080p RGB stream for detecting fiducials.

- Nvidia Jetson TK1 as the main computer. This board has CUDA support, which makes it much easier to deal with computer vision.

- Kobuki base robot platform. This is the same base used by the TurtleBot. It provides gyro, odometry and other sensor output through its USB interface and works with ROS right out of the box.

- Arduino Uno to drive the arm/gripper. Although the TK1 technically has GPIO, Linux is not a realtime OS and is not really suited to driving motors with PWM.

- Actuonix/Firgelli P16 150mm 22:1 linear actuator to drive the arm.

- Pololu VNH5019 motor driver to drive the linear actuator.

- MXL AC-404 microphone. I got this one mainly because of the recommendation on the ROS website.



Software

I used ROS for the software framework. Most of the really difficult problems for this project have already been solved by various ROS packages. Here are the stock ROS nodes I used, along with the high-level task each performs:

gmapping - builds a map of the environment using SLAM

robot_localization, AMCL - localize within the map via sensor fusion of LIDAR data, wheel odometry and gyro readings

ROS navigation - plans a path to a waypoint on the map (see the sketch after this list)

RTAB-Map obstacle detection - detects obstacles and plans around them

pi_trees - behaviour tree implementation for high-level decision making

PocketSphinx - speech recognition

and many others; check out the launch file on GitHub for more.
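As referenced above, a waypoint on the saved map is handed to the navigation stack as a move_base action goal. A minimal sketch of that call (the waypoint coordinates are placeholders):

```python
#!/usr/bin/env python
# Minimal sketch: sending a waypoint on the saved map to the navigation stack
# as a move_base action goal. The coordinates below are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal_sketch')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0      # placeholder waypoint in the map frame
goal.target_pose.pose.orientation.w = 1.0   # face along the map x axis
client.send_goal(goal)
client.wait_for_result()
```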

There are some tasks for which no off-the-shelf package exists, or for which the existing package needed refactoring for speed. In these cases I wrote my own ROS nodes:

gpu_robot_vision - uses the TK1's GPU to perform simple CV tasks, including image rectification, grayscale/color conversion, scaling and noise-reduction filtering

apriltags_nodelet - a nodelet that outputs a list of fiducial markers from the rectified and denoised RGB stream

april_tf - takes those markers as input and outputs the same markers transformed into the map frame. It persists observations by writing them to a file, and outputs a target pose for ROS navigation to plan a path to the marker

april_planner - issues precise movements to the robot based on the relative movement of the detected markers

depth_object_detection - mainly used to detect my hand, to open/close the gripper
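As a rough sketch of the depth_object_detection idea: watch a small region of the depth image in front of the gripper and publish a trigger when something (like a hand) enters it. Topic names, thresholds and the depth encoding are illustrative, not the actual node's parameters.

```python
#!/usr/bin/env python
# Rough sketch of the depth_object_detection idea. Topic names, thresholds and
# the depth encoding (32FC1, meters) are assumptions, not the node's actual
# parameters.
import rospy
import numpy as np
from sensor_msgs.msg import Image
from std_msgs.msg import Bool
from cv_bridge import CvBridge

bridge = CvBridge()
pub = None

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg)
    h, w = depth.shape[:2]
    roi = depth[h // 3: 2 * h // 3, w // 3: 2 * w // 3]   # central region
    valid = roi[np.isfinite(roi) & (roi > 0)]
    # "hand present" if a decent fraction of valid pixels are closer than ~40 cm
    present = valid.size > 0 and np.mean(valid < 0.4) > 0.2
    pub.publish(Bool(data=bool(present)))

if __name__ == '__main__':
    rospy.init_node('hand_detect_sketch')
    pub = rospy.Publisher('/gripper_trigger', Bool, queue_size=1)
    rospy.Subscriber('/camera/depth/image', Image, on_depth)
    rospy.spin()
```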


Installation

- get a spare machine and install Ubuntu 14.04

- flash your TK1 and ensure Wi-Fi is working

- load a custom kernel with UVC support for the RealSense

- install ROS (Indigo) and the RealSense drivers

- install these ROS nodes (available via...


Files

bunnybot.ai - Illustrator version of the mechanical design, ready for CNC cutting with a 1/8" router bit (PostScript, 1.18 MB, 08/23/2016)

bunnybot.pdf - PDF version of the mechanical design (111.77 kB, 08/23/2016)

basespacers.stl - spacers to elevate the main section (468.83 kB, 08/20/2016)

gripper2.stl - padding for the gripper I bought on AliExpress (103.60 kB, 08/20/2016)

container2.stl - cylindrical bins for the robot to grasp (197.54 kB, 08/20/2016)

  • High level decision making

    Jack Qiao - 08/27/2016 at 05:55

    To connect the disparate pieces of code into a working whole, the robot needs some faculties for high-level decision making. This might make you think of "AI" or machine learning, but our needs are closer to game AI: something simple, rules-based and easy to debug. After some research, the two dominant approaches to this task appear to be HSMs (hierarchical state machines) and behavior trees.


    I thought behavior trees made more intuitive sense, so that's what I ended up using. Fortunately there is a pretty good-looking implementation for ROS in Python (pi_trees), created by the maker of the Pi Robot.

    It comes out of the box with ROS-related blocks for performing actions on changes to a ROS topic, as well as the basic building blocks of behavior trees like Selectors and Sequences. As I worked through the behaviors for the robot, I had to implement a few new task blocks:

    TwistTask - issues a constant twist (linear and angular velocity) to the robot for x seconds

    PublishTask - publish on a ROS topic to trigger some action

    DynamicActionTask - navigate to a target pose, same as SimpleActionTask, but takes the goal from the blackboard so it can be updated dynamically at runtime

    As of now the behavior tree for the robot is quite simple: it just runs down a list of actions when a voice command is triggered. For more robustness there should be some recovery behaviors, for example when a navigation command fails.
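    As a rough illustration (not the actual code - that lives in the repo linked below), here's what a custom leaf like TwistTask might look like with pi_trees. The import path, topic name and gains are assumptions:

```python
#!/usr/bin/env python
# Rough sketch of a TwistTask-style leaf for pi_trees. The import path and
# topic name are assumptions; the real implementation is in launch/bunnybot.py.
import rospy
from geometry_msgs.msg import Twist
from pi_trees_lib.pi_trees_lib import Task, TaskStatus

class TwistTask(Task):
    """Publish a constant twist for a fixed number of seconds, then succeed."""
    def __init__(self, name, linear_x, angular_z, duration):
        super(TwistTask, self).__init__(name)
        self.cmd = Twist()
        self.cmd.linear.x = linear_x
        self.cmd.angular.z = angular_z
        self.duration = rospy.Duration(duration)
        self.start_time = None
        self.pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)

    def run(self):
        if self.start_time is None:
            self.start_time = rospy.Time.now()
        if rospy.Time.now() - self.start_time < self.duration:
            self.pub.publish(self.cmd)
            return TaskStatus.RUNNING
        self.pub.publish(Twist())   # stop the robot
        self.start_time = None      # reset so the task can run again
        return TaskStatus.SUCCESS

# A fetch behavior would then be a Sequence of such leaves, e.g.
# Sequence("FETCH", [DynamicActionTask(...), TwistTask("nudge", 0.1, 0.0, 2.0), ...])
```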

    code on github: https://github.com/Jack000/bunnybot/blob/master/launch/bunnybot.py

  • Code and design files

    Jack Qiao - 08/20/2016 at 07:09

    The project is still in progress, but at this stage I think I will upload the code and design files for the robot.

    I've named the robot "bunnybot", as a reference to the turtlebot with which it shares many hardware similarities.

    The custom code I've written for the robot is now collected in one repo: https://github.com/Jack000/bunnybot

    Key takeaways from this version of the robot:

    - too short. I still have to bend over to interact with it when giving it objects. I think 2 feet would be a better height.

    - object sensing. Without feedback on whether a grasp was successful, it sometimes grabs air and assumes all is well.

    - tilting head. The narrow field of view of the camera means that the fiducials have to be at specific heights. If the camera were tiltable, we'd be able to place fiducials over the entire vertical range of the gripper.

  • Bin picking

    Jack Qiao - 07/24/2016 at 05:58

    So the first task the robot will perform is bin picking. For example, you could use a voice command to tell the robot "get me a 1 kilo-ohm resistor", and it will fetch the bin and bring it to you. Afterward it will put the bin back on the shelf.

    The design of the bin is constrained by the accuracy of the robot. The bins must be shaped so that they can be reliably gripped from any angle, and they have to tolerate being picked up or dropped a few cm from their rest position.

    This is what I came up with:

    The container is cylindrical so it's easy to grasp, and when dropped into its slot it always comes to rest at the same spot. The interior of the bin tapers upward so that the center of gravity is low, and if dropped with some inaccuracy it won't bounce around too much.

    I CNC'd the bevel into the slots to match the bin, but thinking about it now it's not really necessary.

  • Object interaction

    Jack Qiao - 07/23/2016 at 08:01

    Here's a test of the fiducial marker system:

    I teleoperate the robot to within view of a fiducial marker, then the robot plans the grasp on its own, slowing down as it gets close to its objective.

    The idea is that the global planner in ROS can get the robot to the rough vicinity of an object, then a separate planner can issue precise commands based on the observed fiducials to perform the grasp.

    Because the gripper obscures the view of the object during grasping, the fiducial can't be on the object itself. To make that work I'll have to add a feature that trusts the wheel odometry when the fiducial is lost.

    In the first version of the planner code I had also accounted for marker rotation, ensuring that the robot approached from the same angle every time. But the orientation of the observed marker is very noisy, especially when at a distance, leading to bad oscillations.

    I call the planner april_planner; check out the code on GitHub: https://github.com/Jack000/april_planner

    The node issues command velocities directly instead of going through the navigation stack.
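    The planner itself is in the repo above; as a rough illustration of the idea, here's a minimal proportional controller on the detected tag's position in the camera frame (topic names and gains are made up, not april_planner's actual parameters):

```python
#!/usr/bin/env python
# Rough sketch of the approach idea: a proportional controller on the tag's
# position in the camera frame. Topic names and gains are made up.
import rospy
from geometry_msgs.msg import Twist, PoseStamped

class ApproachMarker(object):
    def __init__(self):
        self.pub = rospy.Publisher('/cmd_vel_mux/input/navi', Twist, queue_size=1)
        rospy.Subscriber('/tag_pose', PoseStamped, self.on_tag)

    def on_tag(self, msg):
        cmd = Twist()
        # Drive forward proportionally to the tag's distance (z forward in the
        # camera's optical frame), capped so the robot slows near the target,
        # and steer to keep the tag centered. Tag rotation is ignored because
        # its estimate is too noisy at a distance.
        cmd.linear.x = min(0.2, 0.5 * msg.pose.position.z)
        cmd.angular.z = -1.5 * msg.pose.position.x
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('approach_marker_sketch')
    ApproachMarker()
    rospy.spin()
```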

  • Fiducial marker nodelet

    Jack Qiao - 07/14/2016 at 23:00

    To interact with the environment I'll be using fiducial markers. The markers tell the robot 1. what an object is, and 2. where it is relative to the robot.

    #2 is very important. The SLAM system is only accurate to about 10cm, which gives us a general idea of where we are relative to the map, but is not enough for grasping. In order to grasp objects we need to have line of sight to the fiducial marker throughout the grasping process.
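    For reference, turning a detected tag pose into something the navigation stack can use (what the april_tf node does before saving markers and issuing goals) is essentially a tf transform into the map frame. A minimal sketch, with illustrative topic and frame names rather than the actual node's interface:

```python
#!/usr/bin/env python
# Rough sketch: re-express a detected tag pose in the map frame with tf.
# Topic and frame names are illustrative, not april_tf's actual interface.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

class TagToMap(object):
    def __init__(self):
        self.listener = tf.TransformListener()
        self.pub = rospy.Publisher('/tag_in_map', PoseStamped, queue_size=1)
        rospy.Subscriber('/tag_pose', PoseStamped, self.on_tag)

    def on_tag(self, msg):
        try:
            # wait briefly for the camera->map transform at the observation time
            self.listener.waitForTransform('map', msg.header.frame_id,
                                           msg.header.stamp, rospy.Duration(0.2))
            self.pub.publish(self.listener.transformPose('map', msg))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rospy.logwarn('transform to map frame not available')

if __name__ == '__main__':
    rospy.init_node('tag_to_map_sketch')
    TagToMap()
    rospy.spin()
```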

    The particular type of fiducial marker I'm using is called AprilTags.

    Why are they called AprilTags? No idea - might be a person's name.

    I had intended to write a full GPU implementation of AprilTags, but I don't think I'll have enough time to make the Hackaday Prize deadline. So instead I'll just use an existing library that runs on the CPU.

    There are a bunch of AR and other fiducial marker nodes for ROS, but strangely none of them seem to support nodelets.

    If you're not familiar with ROS: inter-node communication is rather heavy, especially when it comes to video. When a camera node transfers images to the fiducial marker node, each image is first encoded by image_transport, then serialized and sent over TCP - even if both nodes are on the same machine! Special implementations of nodes that avoid this are called nodelets, with the kicker that nodes and nodelets don't share the same API.

    I think that because ROS is used primarily for research, trivial stuff like this is easily neglected. You don't have to worry about CPU load when you have a PR2, after all : ]

    Anyways, not having found a suitable nodelet implementation of apriltags, I ported the code for an existing node into a nodelet container.

    check it out on github: https://github.com/Jack000/apriltags_nodelet

    I stress-tested it with some 1080p video on the TK1. At that resolution the node runs at 2 fps and the nodelet at 3 fps, but with a 1.1 decrease in CPU usage.

    For actual use on the robot I think I'll scale down the video so the detection can run at a reasonable framerate. That, thankfully, can be offloaded to the GPU.

  • GPU computer vision

    Jack Qiao - 07/12/2016 at 00:41

    One of the biggest obstacles for mobile robots is compute efficiency. Although desktop computers have become powerful, on a robot there is a tradeoff between weight, running time and cost as you scale up the computational requirements. This is why high-compute, low-power boards like the Jetson TK1/TX1 are great for autonomous robots that rely on computer vision for navigation, obstacle avoidance and grasping.

    On the ROS platform, the stock nodes for computer vision work well for research robots that don't have many power constraints, but our robot will have to squeeze every ounce of performance out of the Jetson TK1. There aren't any existing ROS nodes that take advantage of the GPU, so I've started writing an OpenCV wrapper to perform some of the more resource-intensive operations on the GPU.

    One of these operations is image undistortion. TL;DR: camera lenses subtly distort images, making straight lines appear curved; we need to calibrate the camera and use software to correct the image so that straight lines in the real world appear straight in the image.
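    For reference, here is what that operation looks like with OpenCV's plain CPU API - a minimal sketch with placeholder calibration values and file names; the actual nodelet performs the same remap through OpenCV's GPU module:

```python
# Rough sketch of undistortion with OpenCV's CPU API; the nodelet does the
# same remap on the GPU. Calibration values and file names are placeholders.
import cv2
import numpy as np

# camera matrix and distortion coefficients from calibration (placeholders)
K = np.array([[615.0, 0.0, 320.0],
              [0.0, 615.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.10, -0.25, 0.001, 0.001, 0.0])

img = cv2.imread('frame.png')   # placeholder input frame
h, w = img.shape[:2]

# precompute the undistortion maps once, then remap every frame
map1, map2 = cv2.initUndistortRectifyMap(K, dist, np.eye(3), K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite('frame_rect.png', rectified)
```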

    I wrote a small nodelet to do this on the GPU using OpenCV's GPU APIs. Check it out on Github: https://github.com/Jack000/image_proc_tegra

    On my TK1 it reduces nodelet CPU usage from ~2.8 to ~1.8, with a 0.25 overhead in CUDA upload.

    There does not appear to be a way to share CUDA pointers between ROS nodelets, so moving forward I will probably implement the GPU code all in one nodelet, and have options to turn various things on/off.

  • Robot navigation

    Jack Qiao - 07/04/2016 at 06:41

    I'm still waiting for the arm actuator to arrive; meanwhile, here is a video of the robot doing SLAM.

    Here's a short explainer:

    Phase 1, mapping: The robot has a 2D LIDAR that spins around to make a map of its surroundings. I manually give navigation goals to the robot, which drives to the commanded position while avoiding obstacles. As it does this, a map is built from the collected sensor data. I struggle a bit as there are people walking around.

    Phase 2, navigation: Once we have an adequate map, we can save it and use it for path planning. The advantage here is that the sensor data is only used for localization and not mapping, so the robot can go much faster than in the mapping phase. The slow mapping speed is mostly a limitation of our LIDAR, which is fairly slow and noisy.

    About the visualization (rviz in ROS):

    - small rainbow-coloured dots are from the LIDAR

    - shifty white dots are point cloud data from the realsense (used to detect obstacles outside the LIDAR plane)

    - black dots are marked obstacles

    - fuzzy dark blobs are the local costmap. The robot tries to avoid going into dark blobs when planning its path

    - big green arrow is the target position that I give to the robot

    - green line is the global plan (how the robot plans to get from its current location to the target)

    - blue line is the local plan (what the robot actually does given the global plan, visible obstacles and acceleration limits of the robot)

  • Mobile base

    Jack Qiao - 06/23/2016 at 19:57

    After experimenting with (and failing at) a couple of ways of doing SLAM, I've settled on the classic design of a 2D LIDAR for room mapping, supplemented with a front-facing depth sensor for obstacle detection.

    Instead of a fully custom robot I'll be using a Kobuki base and putting my stuff on top. Given the power constraints I will have to replace the mini-ITX board with a Jetson TK1; hopefully Ubuntu on ARM won't be too much of a pain. Here's what the prototype will look like:

    and the boards all cut out:

  • Tethered dev platform and fake-LIDAR

    Jack Qiao - 06/23/2016 at 19:36

    The first prototype had some issues, the biggest of which was that it had no on-board computer and had to be tethered to my laptop. Here I've made a DIY dev platform that runs a mini-ITX board. It's still tethered with a power cord, though.

    With this platform I've experimented with three different approaches to 2D SLAM:

    1. A "fake" LIDAR scan from depth-sensor data

    2. A laser line projector and triangulation

    3. A Neato XV-11 LIDAR

    Method 1 is basically what the TurtleBot uses, and it works OK for the most part. The biggest problem with this method is that the field of view is rather limited, so there are many cases where you have no data to do SLAM with.
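    A minimal sketch of that fake-scan idea - collapsing one row of the depth image into a LaserScan - is below. Topic names, the focal length and the depth encoding are assumptions, and scan-ordering conventions are glossed over; the TurtleBot uses the depthimage_to_laserscan package for the real thing.

```python
#!/usr/bin/env python
# Rough sketch of the fake-LIDAR idea: collapse the middle row of a depth
# image into a LaserScan. Topic names, the focal length and the depth
# encoding (32FC1, meters) are assumptions.
import math
import rospy
import numpy as np
from sensor_msgs.msg import Image, LaserScan
from cv_bridge import CvBridge

FX = 570.0   # assumed horizontal focal length in pixels
bridge = CvBridge()
pub = None

def on_depth(msg):
    depth = bridge.imgmsg_to_cv2(msg)        # depth image as a numpy array
    row = depth[depth.shape[0] // 2, :]      # take the middle scanline
    cx = depth.shape[1] / 2.0

    scan = LaserScan()
    scan.header = msg.header
    scan.angle_max = math.atan2(cx, FX)
    scan.angle_min = -scan.angle_max
    scan.angle_increment = (scan.angle_max - scan.angle_min) / len(row)
    scan.range_min = 0.3
    scan.range_max = 5.0
    ranges = []
    for i, z in enumerate(row):
        if np.isfinite(z) and z > 0:
            # project the depth (along the optical axis) onto the ray
            # through this column to get a range
            ranges.append(float(z) * math.sqrt(1.0 + ((i - cx) / FX) ** 2))
        else:
            ranges.append(float('inf'))      # no return for this column
    scan.ranges = ranges
    pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('fake_laser_sketch')
    pub = rospy.Publisher('/scan', LaserScan, queue_size=1)
    rospy.Subscriber('/camera/depth/image', Image, on_depth)
    rospy.spin()
```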

    Method 2 I saw in a Hackaday post. I used a 20 mW laser line projector, a 650 nm band-pass filter and a USB camera. The challenging part of this method is noise filtering - whenever the camera or an obstacle moves, it shows up in the inter-frame difference along with the laser line. While the band-pass optical filter limits the absolute value of the noise, the laser line also quickly drops to near the noise floor due to the fan spread. The 20 mW laser was only usable out to about 2 meters. A very specific problem I encountered was computer monitors - when moving the camera around, the frame difference on a monitor looks exactly like a laser line. Overall I don't think this works particularly well for a moving robot.

    I've had much better results with the Neato LIDAR. Although it also uses parallax, the data I'm getting is a lot more reliable at longer range.

  • Gripper

    Jack Qiao - 06/23/2016 at 19:07

    I bought a gripper on AliExpress to save time. I know, not very hackerish, but there are more interesting things to work on!



Discussions

G. Rosa wrote 09/15/2021 at 01:06

Most impressive work done!  Your BunnyBot should do quite well in the commercial market.  Awesome demonstration shown on the video.  Thanks. 


trevorjtclarke wrote 06/29/2016 at 18:06

You should change the name to "Project Dalek"! ;) Looks awesome so far, nice project!! Have you considered starting with a iRobot base? Or is that too limiting for what you're looking to do?


Jack Qiao wrote 06/30/2016 at 17:25

From what I've read, the Kobuki base has much better wheel odometry than the iRobot Create. For what I'm doing, low-speed movements will be important, so I think the Kobuki will be better.


Jack Qiao wrote 06/25/2016 at 19:48

Just that first one for now. I'll have a few videos of the new platform moving soon, hopefully.


Orlando Hoilett wrote 06/24/2016 at 22:01

This is really cool. Do you have any videos?

