2017 was an epic year for Cobalt Robotics. In April, we deployed our first indoor robots to paying...
How It Works: Autonomous Navigation
Eyes always light up when people see a robot moving on its own around the office, or any space really. There is something about a robot autonomously navigating that seems like magic and excites even the most technologically savvy person.
As exciting as it is to watch, getting a robot to move around on its own, without bumping into anything, isn't actually magic. In fact, it's pretty similar to teaching a human about their surroundings. When a person enters a new space, three things happen:
- They are assessing the environment and creating a mental map of the space;
- They are using that mental map to develop an understanding of how to navigate the environment; and
- They are detecting and avoiding obstacles as they move around the space.
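Those same three steps make up a robot's navigation loop. The toy sketch below shows the shape of that loop; the function names and the grid-of-cells map representation are purely illustrative, not Cobalt's actual software:

```python
# A toy navigation loop mirroring the three steps above: map, plan, avoid.
# Everything here is a simplified illustration, not Cobalt's real stack.

def build_map(scans):
    """Step 1: fuse sensor scans into a map (here, just a set of occupied cells)."""
    occupied = set()
    for scan in scans:
        occupied.update(scan)
    return occupied

def plan_path(grid_map, start, goal):
    """Step 2: plan a rough diagonal path, skipping cells the map says are occupied."""
    path = []
    x0, y0 = start
    x1, y1 = goal
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for i in range(steps + 1):
        cell = (x0 + round(i * (x1 - x0) / steps),
                y0 + round(i * (y1 - y0) / steps))
        if cell not in grid_map:
            path.append(cell)
    return path

def avoid_obstacles(path, live_obstacles):
    """Step 3: drop waypoints that a live sensor reading says are now blocked."""
    return [cell for cell in path if cell not in live_obstacles]

grid = build_map([{(2, 2)}, {(3, 3)}])          # two scans, each seeing one obstacle
path = plan_path(grid, (0, 0), (5, 5))           # avoids the mapped obstacles
safe = avoid_obstacles(path, {(4, 4)})           # a person just stepped onto (4, 4)
```

A real planner would search for a detour rather than simply dropping blocked waypoints, but the division of labor between the three steps is the same.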
Mapping the Environment
Just like a human, a robot needs a map of any new space it is going to traverse so it can navigate on its own. When we place the Cobalt Robot in a new environment, we manually drive it through the space to create a map, establishing a basic understanding of where it will move around. The robot builds this map using LiDAR (a remote sensing method that uses light in the form of a pulsed laser to measure ranges) and its odometer.
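To make the mapping step concrete, each LiDAR beam reports an angle and a distance, which can be converted into a point in the map's coordinate frame given the robot's pose at scan time. The function and parameter names below are assumptions for illustration, not Cobalt's interfaces:

```python
import math

def lidar_to_points(pose, ranges, angle_min, angle_step):
    """Convert one LiDAR sweep into world-frame (x, y) points.

    pose: (x, y, heading) of the robot when the scan was taken.
    ranges: distances reported by the laser, one per beam.
    angle_min / angle_step: angle of the first beam and spacing between beams.
    """
    x, y, theta = pose
    points = []
    for i, r in enumerate(ranges):
        beam_angle = theta + angle_min + i * angle_step
        points.append((x + r * math.cos(beam_angle),
                       y + r * math.sin(beam_angle)))
    return points

# A robot at the origin facing +x, with three beams at -90, 0, and +90 degrees:
pts = lidar_to_points((0.0, 0.0, 0.0), [1.0, 2.0, 1.0], -math.pi / 2, math.pi / 2)
```

Accumulating these points over many poses, then rasterizing them into an occupancy grid, is the essence of how a LiDAR map gets built.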
People familiar with LiDAR often point out that it sits at the bottom of the Cobalt Robot. This placement is intentional: mounted a few inches off the ground, the LiDAR can detect the distance between the robot's base and nearby objects.
As the robot moves around the space it is mapping, its odometer detects wheel spin, which is converted into direction and distance traveled; our software then accounts for the inevitable errors in that estimate, such as wheel slip. With the map it creates, the robot can orient itself while it performs duties autonomously. The process for generating a map is similar to how we get familiar with new places: we walk around a new environment and internalize our surroundings based on what we see with our eyes. The Cobalt Robot does the same thing with its sensors.
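For a two-wheeled (differential-drive) base, converting wheel spin into direction and distance is a standard dead-reckoning calculation. This is a textbook formula, sketched here for illustration rather than Cobalt's specific implementation:

```python
import math

def update_pose(pose, left_dist, right_dist, wheel_base):
    """Dead-reckoning update for a differential-drive robot.

    left_dist / right_dist: distance each wheel rolled, from encoder ticks.
    wheel_base: distance between the two wheels.
    """
    x, y, theta = pose
    forward = (left_dist + right_dist) / 2.0        # distance traveled
    dtheta = (right_dist - left_dist) / wheel_base  # change in heading
    # Integrate position along the average heading over the step.
    mid = theta + dtheta / 2.0
    return (x + forward * math.cos(mid),
            y + forward * math.sin(mid),
            theta + dtheta)

pose = update_pose((0.0, 0.0, 0.0), 1.0, 1.0, 0.5)    # equal wheels: straight ahead
turned = update_pose((0.0, 0.0, 0.0), 0.4, 0.6, 0.5)  # right wheel faster: curves left
```

Small errors in each step compound over time, which is exactly why odometry alone is not enough and the robot keeps checking itself against the LiDAR map.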
The Cobalt Robot references the newly created map as it navigates, using inputs from the LiDAR and odometer. Often, the environment changes, either slightly or drastically, as things get moved: chairs and bags find their way to different places on an office floor; racks and pallets move around inside a warehouse. Our navigation algorithms, built with artificial intelligence and machine learning techniques, are designed around this inherent imprecision, so the robot can successfully navigate even when the layout of the space differs from what it looked like when it was initially mapped. Changes the robot observes also trigger refreshes and updates to the map as appropriate.
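One simple way to stay localized despite a changed environment is to score how well the current scan agrees with what the map predicts at the robot's believed pose, and accept the pose as long as enough beams still match. The sketch below is a deliberately simplified stand-in for the real localization algorithms:

```python
def match_score(expected, observed, tolerance=0.2):
    """Fraction of beams whose observed range agrees with the map's prediction.

    Because furniture moves, a perfect match is never expected; the robot
    just needs *enough* agreeing beams to stay confident in its pose.
    Illustrative only, not Cobalt's actual matcher.
    """
    hits = sum(1 for e, o in zip(expected, observed) if abs(e - o) <= tolerance)
    return hits / len(expected)

# Ranges the map predicts at the robot's believed pose:
expected = [2.0, 2.0, 3.0, 1.5]
# A chair moved, so one beam returns earlier than predicted:
observed = [2.0, 1.1, 3.0, 1.5]
score = match_score(expected, observed)  # 3 of 4 beams still agree
```

Production systems typically evaluate many candidate poses this way (as in particle-filter localization) and keep the best-scoring ones, but the core tolerance to mismatch is the same.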
The third and final component of autonomous navigation is avoiding obstacles. Environments constantly change, and avoiding obstacles requires differentiation between permanent features of the environment and temporary ones like humans or a displaced chair. When an obstacle is detected, the Cobalt Robot evaluates its surroundings and re-routes as needed. It uses multiple redundant sensors to detect and avoid obstacles, such as:
- LiDAR to detect obstacles at the ground level
- Depth sensors to gauge distance and object depth
- Chin-mounted sensor to detect elevation changes
- Bump sensors around the perimeter of the robot
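Because these sensors are redundant, the robot can take a conservative union of their outputs: if any one sensor reports an obstacle, it re-routes. A hypothetical sketch (the argument names mirror the sensor list above and are not Cobalt's actual interfaces):

```python
def obstacle_ahead(lidar_hit, depth_hit, cliff_detected, bump_pressed):
    """Redundant sensing: treat the path as blocked if ANY sensor objects.

    Each flag mirrors one sensor from the list above. A missed detection
    by one sensor is caught by another; a false positive only costs a detour.
    """
    return lidar_hit or depth_hit or cliff_detected or bump_pressed

def next_action(sensor_flags):
    """Decide whether to keep following the planned path or re-route."""
    return "re-route" if obstacle_ahead(*sensor_flags) else "continue"
```

OR-ing redundant sensors is a deliberately safety-biased design: it trades occasional unnecessary detours for a much lower chance of ever hitting something.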
Now, if you ever see a robot moving around on its own, or wonder how it does so, remember that autonomous robot navigation is similar to how humans move around. We often take for granted how easy it seems to enter a space and move through it based on what we see. Like a robot, we employ sensors, calculations, and continuous recalibration.