
Mike's Software is Broke...Again!

A project log for Bobble-Bot

Demonstration robot for learning principles of real-time control using Raspberry Pi, RT Linux, and ROS.

Mike Moore • 05/26/2019 at 21:35

The only excuse I have left

 Why test when James breaks everything anyway?

Developing software for a complex embedded system is hard. Tracking down a single bug in such a system can often take hours. Even worse, "fixing" a bug in one of these systems often uncovers another bug. It is a vicious cycle that embedded software developers unfortunately know all too well. Time spent resolving these issues during the development of a new product is costly. Many projects fail as a result of brittle and inadequately tested software. Many more succeed, but only after pouring a tremendous amount of money and resources into solving these issues. Fortunately, there are free and open-source tools that can be used to help avoid these problems. This project log will show the approach taken for developing and testing Bobble-Bot's embedded software.  

There's no excuse for your sloppy code, Mike

Software folks are quickly running out of viable excuses for buggy and un-testable code. I'm personally sick of uttering the phrase "but...it worked on my machine". Many free and open-source tools are now available that can help developers create and maintain automated tests for their code. The stack we use for testing Bobble-Bot's software is built from such tools, all of them free: ROS's rostest with Google Test, the Gazebo simulator, Jupyter Notebook, Docker, and GitLab CI.

Testing Bobble-Bot Software

One of the main design goals for Bobble-Bot was to open up the software development process to the open-source community. This facilitates collaboration and learning, but it also introduces some risk. How can we ensure that our software continues to work as developers from around the world make contributions? Fortunately, this is a problem that has been solved many times over by the open-source software community. To mitigate the risk, we simply stand on the shoulders of giants. Bobble-Bot, like many other open-source projects, relies on an automated testing and continuous integration pipeline in order to ensure that our software remains in a working state.

What this all means is that every time a change is submitted to our project's GitHub repository, an automated build and testing pipeline is triggered. This pipeline comprises stages that build, test, and analyze the proposed changes. The picture above is a view of the pipeline in action. The testing stage includes eighteen different pass/fail checks that are performed on the balance controller using our simulator. The analyze stage produces a summary Jupyter notebook, which is automatically uploaded to our build server as the final and primary output of the testing pipeline. This notebook is manually reviewed before the proposed software changes are accepted into our development baseline. The table below describes the tests and provides links to the actual source code for each one.

Bobble-Bot's automated tests

Test Name        | Description                              | Source Code
No balance       | Bobble-Bot uncontrolled                  | no_balance.test
Impulse force    | Balance control active, impulse applied  | impulse_force.test
Balance control  | Balance control active, drive around     | balance_controller.test
Drive square     | Balance control active, drive in square  | js_drive_square.test

Testing With Simulation

Simulation can be an invaluable tool for automated testing. For Bobble-Bot, we use Gazebo as our simulation environment of choice. Gazebo was selected because it is free and it integrates well with ROS. Of course, in order for your test results to be trustworthy, your simulation must adequately approximate reality. Check out this post where we establish the validity of the Bobble-Bot simulation. The summary gif below captures what the "impulse_force.test" actually looks like both in simulation and reality. Looks close enough to us!

Reporting Test Results

Comprehensive automated tests are awesome, but what should you do with all the data they generate? In most cases, you want a clear way to summarize the results. Additionally, it would be great if the results were easily reproducible by others. You know...for science!

For Bobble-Bot, we went with Jupyter Notebook. Jupyter is an open-source tool commonly used in data science. It lets you organize and share data-processing and analysis code in a readable, notebook-style format. There are many great examples of Jupyter notebooks available for free on the web.

We used Jupyter to author a test summary notebook that presents the simulation data recorded during our automated tests. The best part is that the generation of the summary report can also be automated. We generate the report within an analysis stage of our continuous integration pipeline. The report gives developers insight into how their changes impact the integrated controller performance. Here is a sample section from the report. The full report contains summary plots from the no balance test, the impulse response test, and the driving tests. The source notebook for the full report can be found here.
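To make this concrete, here is a self-contained sketch of the kind of pass/fail metric such a report computes from logged simulation data. This is a hypothetical illustration, not the actual notebook code: the function name, the 2-degree settling band, and the synthetic data are all assumptions.

```python
# Hypothetical illustration -- not the actual Bobble-Bot notebook code.
# Computes two summary metrics a report like this might plot: peak tilt
# and settling time of a simulated impulse response.
import math

def impulse_metrics(times, tilt_deg, band_deg=2.0):
    """Return peak |tilt| and the first time after which |tilt| stays in the band."""
    peak = max(abs(x) for x in tilt_deg)
    settle_time = None
    for i, t in enumerate(times):
        if all(abs(x) < band_deg for x in tilt_deg[i:]):
            settle_time = t
            break
    return peak, settle_time

# Synthetic damped oscillation standing in for recorded simulation data
times = [0.01 * i for i in range(1000)]
tilt = [15.0 * math.exp(-0.8 * t) * math.cos(4.0 * t) for t in times]

peak, settle = impulse_metrics(times, tilt)
print(f"peak tilt: {peak:.1f} deg, settled at t = {settle:.2f} s")
```

A notebook cell like this would typically be followed by an assertion or a plot; the continuous integration pipeline then decides pass/fail from the computed metrics rather than from eyeballing raw time series.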

We use this report to ensure that any software modifications made to Bobble-Bot's embedded controller are well vetted in simulation first. This reduces our risk of damaging hardware due to inadvertent software bugs. It also helps us to quickly evaluate proposed software changes from the open-source community prior to merging them into the main-line of development.

Writing Integration Tests with ROS & Gazebo

This could easily be a post in its own right. Maybe at some point I'll write one up with more detail. In the meantime, check out the code found here to see how Bobble-Bot uses a Gazebo simulation to perform its automated integration tests. Here's an outline of the basic steps to follow for any ROS + Gazebo integration test.

Include test dependencies in package.xml

We added the following lines as test dependencies to our package.xml file. Our integration test requires an IMU model that our controller code does not actually depend on. This is the "hector_gazebo_plugins" test dependency shown below.

<test_depend>rostest</test_depend>
<test_depend>xacro</test_depend>
<test_depend>hector_gazebo_plugins</test_depend>
<test_depend>rosunit</test_depend>

 Include rostest in CMake

Add lines like the following in order to add your tests to CMakeLists.txt. Our full CMakeLists.txt is here.

find_package(
  catkin
  REQUIRED COMPONENTS
  rostest
)

catkin_package(
  INCLUDE_DIRS include
  LIBRARIES ${PROJECT_NAME}
  CATKIN_DEPENDS
  rostest
)

if (CATKIN_ENABLE_TESTING)
    add_rostest_gtest(balance_controller_test
          test/balance_controller.test
          test/balance_controller_test.cpp)
    target_link_libraries(balance_controller_test ${catkin_LIBRARIES})
endif() 

Write a test

This is a snippet from our balance test's .cpp file that performs the simulated right turn at the very end of our simulation-based integration test.

TEST_F(BalanceSimControllerTest, testTurnRight)
{
  // send a turn rate command of -0.1 rad/s
  geometry_msgs::Twist cmd_vel;
  cmd_vel.angular.z = -0.1;
  publish_vel_cmd(cmd_vel);
  // wait for 3s
  ros::Duration(3.0).sleep();
  bobble_controllers::BobbleBotStatus status = getLastStatus();
  const double expected_turn_rate = -25.0;
  // should be turning at least -25 deg/s
  EXPECT_LT(status.TurnRate, expected_turn_rate);
}

int main(int argc, char** argv)
{
  testing::InitGoogleTest(&argc, argv);
  ros::init(argc, argv, "balance_controller_test");
  ros::AsyncSpinner spinner(1);
  spinner.start();
  int ret = RUN_ALL_TESTS();
  spinner.stop();
  ros::shutdown();
  return ret;
}

Write a launch file

You'll want a ROS launch file to use to kick off your test. We use a common launch file for all of our tests in conjunction with one launch file specific to each integration test. Here's the launch file for our balance control test.

<launch>
  <arg name="paused" default="false"/>
  <arg name="gui" default="false"/>
  <include file="$(find bobble_controllers)/test/common.launch">
    <arg name="paused" value="$(arg paused)"/>
    <arg name="gui" value="$(arg gui)"/>
  </include>
  <rosparam file="$(find bobble_controllers)/test/config.yaml" 
            command="load"/>
  <test test-name="balance_controller_test"
        pkg="bobble_controllers"
        type="balance_controller_test"
        required="true"
        time-limit="90.0">
  </test>
</launch> 

Build and run the test

You can use catkin to build the test, run it, and report the results. We use the commands below.

catkin run_tests --verbose --no-deps --this --force-color -j1
catkin_test_results --verbose ../../build/bobble_controllers/

Here's what the simulated integration test actually looks like in Gazebo. As you can see, we use simple 3D meshes for our automated tests.

Continuous Integration with GitLab CI

Once you have written your test and you have it working on your machine, you are ready to have a build server automate it. This is the step of the process that allows your test to be continually run every time a change is submitted to the repository. Again, this is a topic that could be a post unto itself, so we just summarize the process and provide reference links for further reading. 

For Bobble-Bot, we use a private GitLab repository for continuous integration that stays in sync with our public-facing GitHub repository. To do this, we rely on a nice GitLab feature for pull and push mirroring to GitHub. Our build server uses a Docker executor to build and test our code within a Bobble-Bot simulation container. Using Docker for Gazebo & ROS simulations is a subject for another day.

GitLab uses YAML to specify the instructions to be executed by the build server; ours live in the repository's .gitlab-ci.yml file.
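As a rough sketch only (the container image name, job names, and script details here are hypothetical, not our actual configuration), a three-stage pipeline of the shape described above might look like this:

```yaml
# Hypothetical sketch -- image name and commands are illustrative only
stages:
  - build
  - test
  - analyze

build:
  stage: build
  image: bobblebot/sim:latest   # hypothetical simulation container
  script:
    - catkin build bobble_controllers

test:
  stage: test
  image: bobblebot/sim:latest
  script:
    - catkin run_tests --no-deps bobble_controllers
    - catkin_test_results build/bobble_controllers

analyze:
  stage: analyze
  image: bobblebot/sim:latest
  script:
    - jupyter nbconvert --execute --to html test_report.ipynb
  artifacts:
    paths:
      - test_report.html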

Here's a screenshot of an example pipeline while it's running. As you can see, we are actively developing and testing modifications to support migration to ROS2! Check back soon for an updated post on the status of ROS2 for Bobble-Bot.

More Information

That was a long post covering several interrelated topics at a fairly high level. Hopefully you found it useful. Check out the references below for more information. Leave any comments and questions below. I'll be happy to answer questions. Thanks for reading!
