[youtube pUj7Ryom03c nolink]
Researchers at Cornell University have come up with a way to enable robotic aircraft to navigate around outdoor obstacles using just a single camera and hardware that mimics neuron architecture.
Perceiving obstacles is extremely important for an aerial robot in order to avoid collisions. Methods based on stereo vision are fundamentally limited by the finite baseline between the stereo pair, and fail in textureless regions and in the presence of specular reflections. Active range-finding devices are either designed for indoor, low-light environments (e.g., the Kinect) or are too heavy for aerial applications. More importantly, they demand more onboard power, which is at a premium for aerial vehicles.
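To see why a short stereo baseline is so limiting, recall the standard depth-from-disparity relation: depth is z = f·B/d for focal length f, baseline B, and disparity d, so a one-pixel disparity error produces a depth error of roughly z²/(f·B), growing quadratically with range. The quick sketch below makes that concrete; the focal length and baseline numbers are illustrative stand-ins for a small UAV, not figures from the paper.

```python
# Illustrative only: depth error from a +/-1 px disparity error in a
# stereo rig. Depth from disparity is z = f * B / d, so the error at
# range z is approximately z**2 / (f * B).

def stereo_depth_error(z_m: float, focal_px: float = 700.0,
                       baseline_m: float = 0.10) -> float:
    """Approximate depth error (m) at range z_m for a +/-1 px disparity error."""
    return z_m ** 2 / (focal_px * baseline_m)

for z in (2.0, 5.0, 10.0, 20.0):
    print(f"range {z:4.1f} m -> depth error ~ {stereo_depth_error(z):5.2f} m")

# With a 10 cm baseline, the uncertainty at 20 m is already ~5.7 m:
# far too coarse for a small aircraft trying to dodge a tree branch.
```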
The new algorithm works by taking a single still frame from a camera stream and classifying the image into regions that are safe for the robot to pass through and regions that aren't. To do this quickly and efficiently, the researchers run their algorithm on a neuromorphic hardware platform based on the collective firing of a network of artificial neurons. Obstacles are separated from the background using a set of learned visual cues (like the fact that parallel lines appear to converge in the distance, or the apparent size of familiar objects), and the final platform will be able to process several frames per second using less than one watt of power.

The system works quite well in practice: across 53 autonomous flights in obstacle-rich environments, the robot reached its objective without crashing into anything (or anyone) in all but the final two flights, which were foiled by gusts of wind (the team is working on compensating for that).
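To make the classification idea concrete, here is a minimal sketch of the general patch-classification approach, not the Cornell team's actual algorithm or its neuromorphic implementation: the frame is divided into a grid of patches, each patch is scored as obstacle or free space by a learned classifier, and the robot steers toward the clearest region. The `patch_features` function, the logistic weights, and the grid size are all made up for illustration; a real system would learn its cues from labeled outdoor imagery.

```python
# A toy sketch of single-frame obstacle classification (illustrative,
# not the paper's method): grid the image, score each patch with a
# small learned model, steer toward the least-obstructed column.

import numpy as np

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Toy feature vector: mean brightness plus horizontal/vertical gradient energy."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean() / 255.0, np.abs(gx).mean(), np.abs(gy).mean()])

def classify_frame(frame: np.ndarray, weights: np.ndarray, bias: float,
                   grid: tuple = (8, 8)) -> np.ndarray:
    """Return a grid of obstacle probabilities for one grayscale frame."""
    h, w = frame.shape
    gh, gw = grid
    probs = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            patch = frame[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            z = weights @ patch_features(patch) + bias
            probs[i, j] = 1.0 / (1.0 + np.exp(-z))  # logistic P(obstacle)
    return probs

# Usage with a random frame and made-up weights.
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
probs = classify_frame(frame, weights=np.array([0.5, 2.0, 2.0]), bias=-1.0)
safest_column = probs.mean(axis=0).argmin()  # column with the least obstacle mass
print("steer toward grid column", safest_column)
```

On conventional hardware this inner loop would run patch by patch, as above; the appeal of the neuromorphic platform is that the network of artificial neurons evaluates all the patches' cues in parallel, which is how the team gets several frames per second out of under a watt.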
Source: IEEE Spectrum