
ROS In Three Dimensions: Data Structure and Sensor

A project log for Sawppy the Rover

Motorized model of Mars rovers Curiosity and Perseverance for under $500

Roger • 01/16/2019 at 10:10 • 3 Comments

One of the ways a TurtleBot makes ROS easier and more approachable for beginners is by simplifying a robot's world into two dimensions. It's somewhat like the introductory chapters of a physics textbook, where all surfaces are frictionless and all collisions are perfectly elastic. The world of a TurtleBot is perfectly flat and all obstacles have infinite height. This simplification allows the robot's environment to be represented as a 2D array called an occupancy grid.
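Here's a minimal sketch of that idea, following the value convention of ROS's nav_msgs/OccupancyGrid message (-1 unknown, 0 free, 100 occupied). The grid dimensions and helper function are just for illustration:

```python
# Minimal sketch of a 2D occupancy grid, using the value convention
# from ROS nav_msgs/OccupancyGrid: -1 unknown, 0 free, 100 occupied.
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 100

# A 10m x 10m world at 5cm resolution -> 200 x 200 cells.
resolution = 0.05
grid = np.full((200, 200), UNKNOWN, dtype=np.int8)

def mark(grid, x, y, value):
    """Mark the cell containing world coordinate (x, y), in meters."""
    row, col = int(y / resolution), int(x / resolution)
    grid[row, col] = value

mark(grid, 2.5, 1.0, OCCUPIED)   # an obstacle, assumed infinitely tall
mark(grid, 0.5, 0.5, FREE)       # floor the robot has observed
```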

Of course, the real world is more complicated. My TurtleBot clone Phoebe encountered several problems just trying to navigate my home. The real world does not have flat floors, and obstacles come in all shapes, sizes, and heights. Fortunately, researchers have been working on the problems robots encounter when they venture outside the simplified world; it's just a matter of reading research papers and following their citation links to find the tools.

One area of research improves upon the 2D occupancy grid with data structures that can represent a robot's environment in 3D. Several of the papers I've found build upon the octree concept, so that seems like a good place to start.
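To make the concept concrete, here's a toy octree in Python: each node covers a cube of space and subdivides into eight children on demand, so large empty regions stay cheap while cluttered regions get fine resolution. This is only a sketch of the data structure, not how any particular mapping library implements it (real systems store occupancy probabilities per node rather than a simple label):

```python
# Toy octree sketch: each node covers a cube and subdivides into
# eight children on demand. This stores a simple occupied/unknown
# label; real mapping libraries store occupancy probabilities.
class OctreeNode:
    def __init__(self, center, size):
        self.center = center      # (x, y, z) of cube center, meters
        self.size = size          # edge length of cube, meters
        self.children = None      # None until subdivided
        self.occupied = None      # None = unknown

    def child_index(self, point):
        """Which of the eight octants contains this point?"""
        x, y, z = point
        cx, cy, cz = self.center
        return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)

    def insert(self, point, occupied, min_size=0.05):
        """Mark the leaf cube containing point, subdividing down to min_size."""
        if self.size <= min_size:
            self.occupied = occupied
            return
        if self.children is None:
            half = self.size / 2.0
            quarter = half / 2.0
            cx, cy, cz = self.center
            self.children = [
                OctreeNode((cx + (quarter if i & 1 else -quarter),
                            cy + (quarter if i & 2 else -quarter),
                            cz + (quarter if i & 4 else -quarter)), half)
                for i in range(8)
            ]
        self.children[self.child_index(point)].insert(point, occupied, min_size)

# A 10m cube of space; one occupied point from a depth sensor.
root = OctreeNode((0.0, 0.0, 0.0), 10.0)
root.insert((1.2, -0.4, 0.3), occupied=True)
```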

But for a robot to build a 3D representation of its environment, it needs 3D sensors. Phoebe's Neato vacuum LIDAR works in a simplified 2D world but won't cut it in a 3D one. The most affordable entry point here is the Microsoft Kinect sensor bar from an old Xbox 360, which can serve as an RGBD (red, green, blue, plus depth) input source for ROS.
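As a sketch of what consuming that data looks like, here's a minimal ROS node subscribing to a Kinect's color and depth streams. The topic names assume a driver like freenect_launch; whatever driver you use may publish under different names:

```python
#!/usr/bin/env python
# Sketch of a ROS node reading a Kinect's RGB and depth streams.
# Topic names assume a driver like freenect_launch; adjust them to
# match whatever your Kinect driver actually publishes.
import rospy
from sensor_msgs.msg import Image

def rgb_callback(msg):
    rospy.loginfo("RGB frame: %dx%d, encoding %s",
                  msg.width, msg.height, msg.encoding)

def depth_callback(msg):
    rospy.loginfo("Depth frame: %dx%d, encoding %s",
                  msg.width, msg.height, msg.encoding)

rospy.init_node("kinect_listener")
rospy.Subscriber("/camera/rgb/image_color", Image, rgb_callback)
rospy.Subscriber("/camera/depth/image", Image, depth_callback)
rospy.spin()
```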

Phoebe used Gmapping for SLAM, but that takes 2D laser scan data and generates a 2D occupancy grid. Looking for a 3D SLAM algorithm that can digest RGBD camera data, I searched for "RGBD SLAM", which led immediately to this straightforwardly named package. But of course it's not the only one around: I've also come across RTAB-Map, which seems to be better maintained and updated for recent ROS releases. Best of all, RTAB-Map can generate odometry data purely from the RGBD input stream, which might let me bypass the challenge of calculating Sawppy's chassis odometry from unreliable servo angle readings.
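If that works out, consuming the visual odometry should look something like the sketch below. RTAB-Map's odometry nodes publish standard nav_msgs/Odometry messages; the /rtabmap/odom topic name here is an assumption and depends on how the launch file remaps it:

```python
#!/usr/bin/env python
# Sketch: consume visual odometry published by rtabmap_ros.
# The "/rtabmap/odom" topic name is an assumption; check what your
# launch configuration actually remaps it to.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo("Visual odometry pose: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

rospy.init_node("visual_odom_listener")
rospy.Subscriber("/rtabmap/odom", Odometry, odom_callback)
rospy.spin()
```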

(Cross-posted to NewScrewdriver.com)

Discussions

Paul Crouch wrote 01/16/2019 at 13:34 point

Cool! I'm looking at using a Kinect for my Droidbot project but it's still early days yet. I'm watching with interest, thank you.


Humpelstilzchen wrote 01/16/2019 at 16:17 point

RTABMap will also require a lot more CPU time for each iteration. AFAIK ROS has no packages for 3D navigation (you could try MoveIt for this job). What you could do is convert your 3D camera data to 2D with e.g. pointcloud_to_laserscan[2] or depthimage_to_laserscan[1].

[1] http://wiki.ros.org/depthimage_to_laserscan

[2] http://wiki.ros.org/pointcloud_to_laserscan
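(A minimal sketch of the idea behind depthimage_to_laserscan: take one row of the depth image and republish it as a planar sensor_msgs/LaserScan. The real package is C++ and handles camera intrinsics, ray-angle correction, and invalid pixels properly; the focal length, encoding, and topic names below are assumptions.)

```python
#!/usr/bin/env python
# Sketch of depth-image-to-laserscan: republish the middle row of a
# depth image as a planar LaserScan. Simplification: treats per-pixel
# depth as radial range; the real package corrects for ray angle.
import math
import rospy
import numpy as np
from sensor_msgs.msg import Image, LaserScan

FOCAL_LENGTH_PX = 570.0   # assumed Kinect focal length, in pixels

def depth_callback(msg):
    # Assumes a 32FC1 depth image: one float per pixel, in meters.
    depth = np.frombuffer(msg.data, dtype=np.float32).reshape(msg.height, msg.width)
    row = depth[msg.height // 2]               # middle scanline
    scan = LaserScan()
    scan.header = msg.header
    scan.angle_max = math.atan2(msg.width / 2.0, FOCAL_LENGTH_PX)
    scan.angle_min = -scan.angle_max
    scan.angle_increment = (scan.angle_max - scan.angle_min) / (msg.width - 1)
    scan.range_min, scan.range_max = 0.45, 10.0
    scan.ranges = [float(r) if not math.isnan(r) else float('inf') for r in row]
    pub.publish(scan)

rospy.init_node("depth_row_to_scan")
pub = rospy.Publisher("/scan", LaserScan, queue_size=1)
rospy.Subscriber("/camera/depth/image", Image, depth_callback)
rospy.spin()
```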


Roger wrote 01/17/2019 at 21:04 point

Cool pointers, thanks! I'm still combing through the RTAB-Map documentation. There appears to be some 2D projection capability built in, since I see a few topics published in the nav_msgs/OccupancyGrid data format. But if they aren't what I think they are, it's good to know there are alternatives.
