While a variety of industries are considering autonomous robots, we are still some time away from achieving reliable and robust long-term autonomy in the real world.
Fortunately, even at the current levels of autonomy, robots can be deployed to help with a variety of tasks and deliver significant benefit to end-users across industries.
Now researchers have created a framework that will enable engineers, users, and decision makers to systematically evaluate the autonomy of real-world robotics systems they are considering and decide how they can best benefit from this rapidly improving technology.
The researchers are Girish Chowdhary, Chinmay Soman, and Katherine Driggs-Campbell. Chowdhary is co-founder and CTO of EarthSense, Inc.; an associate professor of agricultural and biological engineering and of computer science at the University of Illinois (UI); chief scientist of the UI Center for Digital Agriculture Autonomous Farm; and associate director of UI’s Artificial Intelligence for Future Agricultural Resilience, Management, and Sustainability Institute. Soman is co-founder and CEO of EarthSense. Driggs-Campbell is an assistant professor of electrical and computer engineering at UI and a member of the university’s Center for Digital Agriculture.
The researchers propose a clear and concise description of Levels of Autonomy for robots as a function of the interaction expected between robot and user. By focusing on classifying that interaction, they decouple the proposed levels from the technical specifications of autonomous systems. In doing so, they present a unified framework for assessing a system’s class of autonomy and setting design specifications across types of robots.
SAE (formerly the Society of Automotive Engineers) levels of autonomy have been key for transportation: they allow researchers to quickly scope their work in the appropriate context, make it easy to compare capabilities and approaches, and provide a backbone for guiding policy on autonomous systems.
The levels of autonomy are designed to describe how autonomously a robot executes a task. They tie back to the amount of attention a human supervisor must give a robot, or a team of robots, while the task is being executed; a rough sketch of that mapping follows the level descriptions below.
Level 1: A human must always remain within line of sight of the robot. For example, in an agricultural automation system, a human must follow the robot as it goes through the field. Simple reactive tasks, such as keeping the robot in the center of the row or spraying when a weed is detected, are automated. A widely deployed example of an autonomous system at this level is the GPS-guided tractor. Human operators typically intervene about once every five minutes.
Level 2: The human operators now become (remote) supervisors: they no longer have to follow the robot or keep it within line of sight, but they must still remain in the field and monitor the robot in case it needs rescuing. This capability is an enabling point for high-value applications in many industries. For example, an ag robot might be able to navigate a prescribed waypoint path, avoiding most obstacles, and only get stumped once in a while. The target time between interventions increases to about an hour. At this level, the human may be able to do other tasks nearby but likely has only one or two robots running autonomously under their supervision.
Level 3: In many industries, this level represents an inflection point where large-scale deployments become quite attractive. The robot team is sufficiently capable of dealing with edge cases over several days that a single human can monitor a number of robots. This is where most multi-robot farming systems begin to scale up. The human might still need to be in the field, though, to swap batteries, perform repairs, or rescue a stranded robot every so often. Up to eight hours can elapse between human interventions.
Level 4: Here robots can truly be deployed at large scale without being constrained by labor costs. These robot teams can handle many edge cases themselves, becoming autonomous enough that the human no longer needs to be in the field. They also have sufficient automated support infrastructure on site and are capable of finding their base stations, getting a fresh battery, performing minor repairs, and getting out of difficult situations (perhaps with help from a remote human). This level of autonomy requires not only mature on-robot software but also automated in-field infrastructure and, typically, a reliable connection to remote users. Up to three days can elapse between human interventions.
Level 5: These robots begin to learn from their experience to improve operation beyond what the human designer has programmed. They learn from each other, both on site and from robot teams at other sites. They learn to predict how events affect their capabilities and plan proactively. Human interventions take place more than three days apart.
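The intervention intervals above can be read as a rough rubric for placing a given system on the scale. As a loose illustration only, and not part of the researchers’ framework, the following Python sketch maps an observed mean time between human interventions onto the five levels; the function name and exact thresholds are assumptions drawn from the intervals quoted above.

```python
# Illustrative sketch only: classify a robot's autonomy level from the mean
# time between human interventions, using the rough intervals quoted above
# (about 5 minutes, 1 hour, 8 hours, and 3 days). The thresholds and the
# function itself are hypothetical, not part of the researchers' framework.

def autonomy_level(hours_between_interventions: float) -> int:
    """Return an approximate level (1-5) for a given intervention interval."""
    if hours_between_interventions <= 5 / 60:   # roughly every five minutes
        return 1
    if hours_between_interventions <= 1:        # about an hour
        return 2
    if hours_between_interventions <= 8:        # up to about eight hours
        return 3
    if hours_between_interventions <= 72:       # up to about three days
        return 4
    return 5                                    # more than three days


# Example: a robot that needs help roughly every six hours sits around Level 3.
print(autonomy_level(6))  # -> 3
```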
The researchers’ aim with this framework is to make it easier for potential adopters to systematically analyze the readiness of the robots under consideration and to plan realistic deployments.
“We believe that most autonomous robotic products will go through this maturity lifecycle,” the three scientists noted.
Their research was supported by the National Science Foundation and the UI Center for Digital Agriculture.