• Building a robot for real-world testing

    Mihir Garimella, 08/17/2015 at 21:41

    After designing the mapping and navigation algorithms for Firefly, I wanted to create a flying robot so that I could test my work in real-world environments. At this point, I wasn't ready to build a flying robot from scratch, so I added sensing and processing hardware to a Parrot AR.Drone. There were a few important considerations in designing this hardware, which I'll cover in this log.

    Selecting a single-board computer

    My notes are below. Ultimately, I chose the ODROID-C1 because it was relatively fast but also inexpensive and energy-efficient. (The notes refer to UncannyCV, a computer vision library optimized for ARMv7-A processors.)

    Designing a sensor module

    I also needed to design a sensor module that could work with my navigation algorithm to locate fires. My notes are below:

    Also, here's a final schematic of my sensor module:
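
    The schematic has the full details; as a rough illustration of how the flight computer might consume the module's readings (the serial port, baud rate, and "gas,temperature" line format below are assumptions for the sake of the sketch, not my actual design), polling the module from Python could look something like this:

    ```python
    # Illustrative only: polling a sensor module over serial from the ODROID.
    # The port, baud rate, and "gas,temperature" line format are assumptions.
    import serial  # pyserial

    def read_sensor_module(port="/dev/ttyUSB0", baud=115200):
        """Yield (gas, temperature) pairs streamed by the sensor module."""
        with serial.Serial(port, baud, timeout=1.0) as conn:
            while True:
                line = conn.readline().decode(errors="ignore").strip()
                if not line:
                    continue
                try:
                    gas, temperature = (float(v) for v in line.split(","))
                except ValueError:
                    continue  # skip malformed packets
                yield gas, temperature
    ```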

    Selecting a specific gas sensor for fire detection

    Here are my notes on this:

  • Autonomously locating targets

    Mihir Garimella, 08/17/2015 at 19:40

    The third component of Firefly's algorithmic layer deals with the problem of autonomously locating a target—like a trapped victim after a disaster or the source of a fire in a burning building—by combining multiple different sensory inputs. The problem with existing approaches is that they largely focus on taking measurements at several precise locations and then using those measurements to compute the location of the target, which is slow and depends on accurate positioning that's hard to guarantee in a cluttered, dangerous environment.

    Approach 1: Reinforcement Learning

    Initially, this seemed like a classic reinforcement learning problem, because it's easy to formulate the problem in terms of states (sensor readings), actions (robot behaviors), and rewards (when you get closer to the target). I implemented this on my first prototype of Firefly—here are my notes:
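
    To make that formulation concrete, here's a minimal Q-learning-style sketch. The state discretization, action set, learning parameters, and reward scheme below are illustrative assumptions, not my exact implementation:

    ```python
    # Minimal, illustrative Q-learning loop for the formulation above. The state
    # discretization, action set, learning rates, and reward scheme are all
    # placeholder assumptions, not my actual implementation.
    import random
    from collections import defaultdict

    ACTIONS = ["forward", "turn_left", "turn_right"]
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

    Q = defaultdict(float)  # maps (state, action) -> estimated long-term reward

    def discretize(gas, temperature):
        """Bucket raw sensor readings into a coarse, discrete state."""
        return (int(gas // 50), int(temperature // 5))

    def choose_action(state):
        if random.random() < EPSILON:
            return random.choice(ACTIONS)                 # explore
        return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

    def update(state, action, reward, next_state):
        """Standard Q-learning update after one step. The reward should be
        positive when the step brought the robot closer to the target."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    ```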

    While this approach sounded good in theory, there were a few problems that made it infeasible. First, it required extensive training for each use case. It also didn't generalize well; I found that it worked only in the specific cases for which it was trained. Additionally, it wasn't robust to noisy sensor readings, which was especially problematic because I was using very cheap sensors that produced a useful output but also had significant noise. Because of this, I quickly decided to move on to a simpler, behavior-based approach, described below.

    Approach 2: Following Multiple Gradients

    This approach worked much more reliably, both in simulation (see below) and in some real-world experiments. Here's a graphic I made giving an overview of this approach:
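
    The graphic has the full picture, but the core idea is to estimate a local gradient for each sensory quantity and steer toward their combination. Here's a minimal sketch of that behavior; the paired left/right sensors, weights, and dead-band are illustrative assumptions, not my exact implementation:

    ```python
    # Illustrative sketch of following multiple gradients at once. Assumes paired
    # left/right gas and temperature readings; the weights and dead-band are
    # placeholders, not my actual tuning.
    def steer_command(gas_left, gas_right, temp_left, temp_right,
                      w_gas=1.0, w_temp=2.0, dead_band=0.05):
        # Local gradient estimates: positive means "stronger on the right".
        gas_gradient = gas_right - gas_left
        temp_gradient = temp_right - temp_left

        # Combine the two cues. Temperature is weighted more heavily because it
        # only rises near the fire, while gas concentration guides the far field.
        combined = w_gas * gas_gradient + w_temp * temp_gradient

        if abs(combined) < dead_band:   # gradients roughly balanced: keep going
            return "forward"
        return "turn_right" if combined > 0 else "turn_left"
    ```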

    Evaluation

    I created a simulation to see how this algorithm would perform in a variety of environments. In the simulation, the robot starts out at a random position and orientation inside a square arena (of configurable side length) or a circular arena (of configurable diameter) and moves 0.1 meters per loop iteration. The arena contains four identical obstacles with random orientations; the robot has to avoid these obstacles and the arena walls. Concentration and temperature are each modeled with a two-dimensional Gaussian distribution centered in the arena; the concentration distribution fills the arena, while the temperature distribution is limited to a small area around the target.
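
    For reference, here's roughly what those simulated sensor fields look like in code; the Gaussian widths and scale factors are illustrative, not the exact values from my simulation:

    ```python
    # Sketch of the simulated sensor fields described above: concentration and
    # temperature are each 2D Gaussians centered in the arena. The widths and
    # scale factors are placeholders, not the exact values from my simulation.
    import math

    def gaussian_2d(x, y, cx, cy, sigma):
        return math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

    def sense(x, y, arena_size=10.0):
        cx = cy = arena_size / 2.0                  # target sits at the center
        concentration = gaussian_2d(x, y, cx, cy, sigma=arena_size / 3)    # fills the arena
        temperature = 20.0 + 400.0 * gaussian_2d(x, y, cx, cy, sigma=0.5)  # narrow hot core
        return concentration, temperature
    ```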

    You can see the results of a few simulation runs above (they're the diagrams at the bottom of the image). I also used the simulation to quantify the effectiveness of this algorithm, and I was very happy with the results—the simulation indicated that the algorithm is effective (~98% success rate given the ten-minute flight time of a typical quadrotor), efficient, and scales well to large environments. Here are some specifics on what I found:

    I also conducted some real-world testing with a group of local firefighters—I'll talk more about that in a future log. Until then, if you want to see the algorithm in action, my TEDxTeen talk (linked in the sidebar) includes a video of one of these experiments.

  • Avoiding collisions

    Mihir Garimella, 08/17/2015 at 19:31

    The second component of Firefly's base algorithmic layer deals with the problem of avoiding collisions with obstacles. Most existing algorithms for mapping and obstacle avoidance rely on sophisticated sensors, like depth cameras or laser scanners. The problem with these sensors is that they're large, heavy, expensive, and draw a lot of power. Mapping with a single camera (or monocular mapping) is a good alternative, but it's a much harder problem because you're trying to use fundamentally two-dimensional information to understand a three-dimensional environment. While a few algorithms exist for this task, they're either limited to simple, structured environments or are computationally intensive and difficult to run in real time. Because of these limitations, I had to develop my own monocular mapping algorithm for Firefly.

    Approach

    Here's a graphic illustrating how my monocular mapping algorithm works:
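
    The graphic spells out the actual pipeline. Purely as a generic point of reference (this is not my algorithm), here's one common way to get a monocular obstacle cue with sparse optical flow: features that spread apart from frame to frame suggest a surface is getting closer. The feature counts and thresholds below are arbitrary:

    ```python
    # Generic illustration of a monocular obstacle cue (NOT my actual algorithm):
    # track sparse features between consecutive frames and measure how much they
    # spread apart. An expanding pattern suggests an approaching surface.
    import cv2
    import numpy as np

    def expansion_cue(prev_gray, gray):
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
        if pts is None:
            return 0.0
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_old = pts[status.flatten() == 1].reshape(-1, 2)
        good_new = new_pts[status.flatten() == 1].reshape(-1, 2)
        if len(good_old) < 10:
            return 0.0
        # Compare each feature's distance from the centroid before and after:
        # a ratio above 1.0 means the feature pattern is expanding.
        d_old = np.linalg.norm(good_old - good_old.mean(axis=0), axis=1)
        d_new = np.linalg.norm(good_new - good_new.mean(axis=0), axis=1)
        return float(np.median(d_new / np.maximum(d_old, 1e-6)))
    ```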

    My algorithm proved able to avoid obstacles of different sizes, shapes, colors, and textures—more on this in a future log! Until then, if you really want to see it in action, my TEDxTeen talk (linked in the sidebar) has a video showing some of my results.

  • Escaping from moving threats

    Mihir Garimella, 08/17/2015 at 11:53

    (This is the first of a series of logs that I'll be posting over the next few days giving a brief overview of the work I've done on Firefly to date, and what I'm hoping to get done over the next few months.)

    One of the components of Firefly's base algorithmic layer deals with escaping from moving threats in complex, dangerous environments by mimicking how fruit flies avoid swatters—so, for example, if one of these robots is carrying out a search-and-rescue mission after an earthquake and the ceiling collapses or an object falls, this algorithm helps the robot recognize the threat and move out of the way in time. This is where the story of Firefly begins, so, in this log, I'll talk about my inspiration for the project and describe the process of designing and implementing hardware and software for escaping from moving threats.

    Inspiration

    Firefly started out as a high school science fair project. In the summer before ninth grade, my family returned from a vacation to find our house filled with fruit flies, because we had forgotten to throw out some bananas on the kitchen counter before we left. I spent the next month constantly trying to swat them and getting increasingly frustrated when they kept escaping, but I also couldn't help but wonder about what must be going on "under the hood"—what these flies must be doing to escape so quickly and effectively.

    It turns out that the key to fruit flies' escape is that they're able to accomplish a lot with very little: they use very simple sensing (their eyes, in particular, have the resolution of a 26 by 26 pixel camera), but, because they have so little sensory information to begin with, they're able to process all of it very quickly to detect motion. The result is that fruit flies can see ten times faster than we can, even with brains a millionth as complex as ours, and that's what enables them to respond to moving threats so quickly.

    Around the same time, drones were in the news a lot. It quickly became clear that although they had tremendous potential to help after emergencies or natural disasters, they simply weren't robust or capable enough. One specific problem was that these robots weren't able to react to quickly moving threats in their environments. Existing collision-avoidance algorithms don't work here because they're too computationally intensive and rely on complex sensors, so I made the connection to fruit flies: I wanted to see whether we could apply the same simplicity that makes the fruit fly so effective at escaping to make flying robots better at reacting to their environments in real time.

    Building a sensor module to detect approaching threats

    I had planned to use a small camera, combined with some simple computer vision algorithms, to detect threats. However, I found that even the lowest-resolution cameras available today create too much data to process in real time, especially on the 72 MHz ARM Cortex processor on the Crazyflie, the quadrotor on which I wanted to implement my work. I came up with a few alternatives and I eventually settled on these infrared distance sensors. (Sharp has since released a newer version that's twice as fast and has twice the range, so that's what I'm planning to use for my latest prototype.) The beauty of this approach is that each of these sensors produces a single number, compared to laser scanners or depth cameras, which produce tons of detailed data that’s nearly impossible to process and react to quickly.
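
    As a rough sketch of the idea (the threshold and timing below are illustrative assumptions, not my actual firmware), each sensor just needs to watch how quickly its single distance reading is shrinking:

    ```python
    # Illustrative threat check for one infrared distance sensor: each reading is
    # a single distance, and a threat is flagged when that distance shrinks faster
    # than a threshold. The threshold and timing are assumptions, not my firmware.
    def detect_threat(prev_distance_m, distance_m, dt_s, approach_threshold_mps=1.5):
        approach_speed = (prev_distance_m - distance_m) / dt_s  # positive = closing in
        if approach_speed <= 0.0:
            return False, None
        time_to_impact_s = distance_m / approach_speed
        return approach_speed > approach_threshold_mps, time_to_impact_s

    # Example: an object moves from 1.0 m to 0.8 m away in 50 ms -> 4 m/s approach
    # and roughly 0.2 s to impact, so the escape maneuver has to start immediately.
    # detect_threat(1.0, 0.8, 0.05)  -> (True, 0.2)
    ```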

    Initially, I wanted to use four of these sensors to precisely sense the direction from which a threat is approaching. I also wanted to use a separate microcontroller and battery to avoid directly modifying the Crazyflie's electronics. I built a balsa wood frame for the sensors, attached a Tinyduino, and added a small battery, and... my robot wouldn't take off, because it didn't have enough payload capacity to carry all of this added weight. I performed lift measurements and soon found that I could use a maximum...
