Levels of Autonomy for Field Robots
A framework for decision makers, engineers, and users working on deploying autonomous robots.
Introduction: Stakeholders across a wide variety of industries are considering autonomous robots. However, we are still some time away from reliable, robust long-term autonomy in the real world. Fortunately, even at current levels of autonomy, robots can be deployed to help with a variety of tasks and deliver significant benefit to end users. We present a framework that enables engineers, users, and decision makers to systematically evaluate the autonomy of the real-world robotic systems they are considering and decide how best to benefit from this rapidly improving technology.
We propose a clear and concise description of Levels of Autonomy for robots as a function of the expected interaction between robots and their human users. By focusing on the classification of interaction, we decouple the proposed levels from the technical specifications of autonomous systems. In doing so, we present a unified framework for assessing the class of autonomy and setting design specifications across types of robots. We are inspired by the SAE taxonomy for determining the level of autonomy of a single vehicle. While the SAE levels focus on the attentiveness and interventions required from a driver operating a single vehicle, they have suffered criticism (e.g., [1]) due to the inherent vagueness of terms like conditional automation or partial automation. In contrast, our objective is to clarify how increasing levels of autonomy will affect the deployment of field robots. We specifically link our proposed levels to users' expectations and consider tasks for multiple field robots coordinating with a human supervisor.
Thinking of autonomy as a long-term objective with a levels-based framework will help achieve realistic real-world deployments. Increasing levels of autonomy should be designed to progressively simplify the human user's experience. The levels range from the human controlling almost all aspects of the robotic system (Level 0) to a team of robots carrying out specific tasks in dynamic, unstructured environments while adapting and learning beyond what the designer or the user programmed (Level 5).
The SAE levels of autonomy are inspired by driver attentiveness and intervention requirements. Each level is associated with responsibilities for execution of control, monitoring of the environment, interventions (emergency fall-backs), and autonomous capabilities (e.g., lane keeping -- a capability made possible by the structure inherent in driving). For general field robotics and multi-agent systems, we can similarly assign responsibility for monitoring the robot's operation, set expectations for performance (e.g., time between interventions), and characterize the degree of independence and adaptability (e.g., the ability to reason about unexpected events, self-maintenance).
The SAE levels of autonomy have been key for transportation research: they allow researchers to quickly scope their work in the appropriate context, easily compare capabilities and approaches, and provide a backbone for guiding policy on autonomous systems. We propose the following Levels of Autonomy for Field Robots:
| Level | Description | Target Time between Interventions |
|---|---|---|
| 0 | Full manual teleoperation | n/a |
| 1 | Robot within line of sight (hands off) | 5 minutes |
| 2 | Operator on site or nearby (eyes off) | 1 hour |
| 3 | One operator oversees many robots (mind off) | 8 hours |
| 4 | Supervisor not on site (monitoring off) | 3 days |
| 5 | Robots adapt and improve execution (development off) | extended operation |
The levels of autonomy are designed to describe how autonomous a robot is in executing a task. They tie back to the attention a human supervisor must provide to the robot, or team of robots, while the task is being executed.
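To make the taxonomy concrete in software, here is a minimal Python sketch that encodes the table above and classifies a system from its observed mean time between interventions; the class and function names are our own, and the classification rule is one simple choice among many:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    level: int
    description: str
    target_mtbi_hours: float  # target mean time between interventions, in hours

LEVELS = [
    AutonomyLevel(0, "Full manual teleoperation", 0.0),
    AutonomyLevel(1, "Robot within line of sight (hands off)", 5 / 60),
    AutonomyLevel(2, "Operator on site or nearby (eyes off)", 1.0),
    AutonomyLevel(3, "One operator oversees many robots (mind off)", 8.0),
    AutonomyLevel(4, "Supervisor not on site (monitoring off)", 72.0),
    AutonomyLevel(5, "Robots adapt and improve execution (development off)", float("inf")),
]

def classify(observed_mtbi_hours: float) -> AutonomyLevel:
    """Return the highest level whose intervention target the system meets."""
    best = LEVELS[0]
    for lvl in LEVELS:
        if observed_mtbi_hours >= lvl.target_mtbi_hours:
            best = lvl
    return best

print(classify(2.5).level)  # -> 2: better than hourly, but not yet a full shift
```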
Level 1 Autonomy: A human always needs to be within line of sight of the robot. For example, in an agricultural automation system at this level, a human must follow each robot as it moves through the field. Simple reactive tasks, such as keeping the robot centered in the row or spraying when a weed is detected, are automated. A widely deployed example at this level of autonomy is the GPS-guided tractor: the human is required to be in the cab to handle unforeseen events, but the tractor drives itself along pre-programmed paths.
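To illustrate the kind of reactive task automated at this level, here is a minimal sketch of a proportional row-centering controller; the gain, sign conventions, and limits are illustrative assumptions, not any particular robot's API:

```python
KP = 1.2  # proportional gain (a tuning assumption for this sketch)

def row_centering_step(lateral_offset_m: float, max_turn_rate: float = 0.5) -> float:
    """Map the robot's lateral offset from the row center (meters, positive when
    right of center) to a turn-rate command (rad/s, positive is a left turn),
    clamped to the platform's limits."""
    command = KP * lateral_offset_m
    return max(-max_turn_rate, min(max_turn_rate, command))

# e.g., drifted 10 cm to the right of the row center -> gentle left turn
print(row_centering_step(0.10))  # -> ~0.12 rad/s
```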
Level 2 Autonomy: The human operator now switches to being a (remote) supervisor: they no longer have to follow the robot, which may be out of line of sight, but they must still remain on the field and monitor the robot in case it needs rescuing. This capability is an enabling point for high-value applications in many industries. For example, at Level 2, an agricultural robot might navigate a waypoint-prescribed path while avoiding most obstacles, and only get stumped once in a while. The target time between interventions increases to about an hour. At this level of autonomy, the human may be able to do other tasks on the field, but will likely have only one or two robots running autonomously under their supervision.
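One way to picture the intervention mechanism at this level is as a progress watchdog: the robot follows its waypoints on its own and pages the on-site supervisor only when it stops making progress. A minimal sketch, with `get_position` and `notify_operator` as hypothetical stand-ins for the real navigation stack and alerting channel:

```python
import math
import time

STALL_TIMEOUT_S = 120  # assumption: how long without progress before asking for help
PROGRESS_EPS_M = 0.25  # assumption: minimum movement that counts as progress

def supervise_leg(get_position, goal, notify_operator):
    """Drive toward one waypoint; return True on arrival, False after paging for rescue.
    get_position() -> (x, y); goal is (x, y); notify_operator(msg) alerts the human."""
    last_pos, last_progress_t = get_position(), time.monotonic()
    while True:
        pos = get_position()
        if math.dist(pos, goal) < 0.5:  # arrived (0.5 m tolerance, an assumption)
            return True
        if math.dist(pos, last_pos) > PROGRESS_EPS_M:
            last_pos, last_progress_t = pos, time.monotonic()  # progress: reset watchdog
        elif time.monotonic() - last_progress_t > STALL_TIMEOUT_S:
            notify_operator(f"Robot stalled near {pos}; needs rescue.")
            return False
        time.sleep(1.0)
```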
Level 3 Autonomy: In many industries, Level 3 represents an inflection point where large-scale deployments become attractive. A Level 3 robotic team is sufficiently capable of dealing with edge cases on its own, for roughly a full work day at a time, so that a single human can monitor a number of robots. This is where most multi-robot farming systems begin to scale up. The human might still need to be on the field, though, to swap batteries, perform repairs, or rescue a stranded robot every so often.
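A back-of-the-envelope calculation shows why the 8-hour intervention target makes single-operator fleets viable; the 15-minute handling time and 50% operator utilization below are assumptions for illustration:

```python
def max_fleet_size(mtbi_hours: float, handling_minutes: float, utilization: float = 0.5) -> int:
    """Largest fleet one operator can support if each robot needs help every
    `mtbi_hours` and each intervention costs `handling_minutes` of operator time,
    keeping the operator below the given utilization."""
    interventions_per_robot_per_hour = 1.0 / mtbi_hours
    operator_minutes_per_robot_per_hour = interventions_per_robot_per_hour * handling_minutes
    return int(utilization * 60.0 / operator_minutes_per_robot_per_hour)

print(max_fleet_size(1.0, 15))  # Level 2 target: ~2 robots per operator
print(max_fleet_size(8.0, 15))  # Level 3 target: ~16 robots per operator
```

Under these assumptions, an hourly intervention cadence (Level 2) saturates one operator at a robot or two, matching the description above, while the Level 3 target lets the same operator oversee a fleet of roughly sixteen.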
Level 4 Autonomy: At Level 4, autonomous robots can be deployed at large scale without being constrained by labor costs. Level 4 robot teams are capable of dealing with most edge cases themselves, becoming sufficiently autonomous that the human no longer needs to be on the field. They also have sufficient automated support infrastructure on site: the robots can find their base stations, get a new battery, perform minor repairs, and get out of difficult situations (perhaps with help from a remote human). This level of autonomy requires not only mature on-robot software, but also automated on-field infrastructure and, typically, a reliable connection with remote users.
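The self-service behavior described above can be sketched as a small state machine; the states and transition rules below are illustrative, not a prescribed architecture:

```python
from enum import Enum, auto

class State(Enum):
    WORKING = auto()
    RETURNING_TO_BASE = auto()
    SERVICING = auto()      # battery swap, minor repairs at the base station
    REMOTE_ASSIST = auto()  # escalate to an off-site human over the network link

def next_state(state: State, battery_low: bool, stuck: bool, at_base: bool) -> State:
    """One illustrative transition function for a Level 4 robot."""
    if stuck:
        return State.REMOTE_ASSIST  # can't self-recover: page a remote human
    if state is State.WORKING and battery_low:
        return State.RETURNING_TO_BASE
    if state is State.RETURNING_TO_BASE and at_base:
        return State.SERVICING
    if state is State.SERVICING and not battery_low:
        return State.WORKING  # fresh battery installed, back to the task
    if state is State.REMOTE_ASSIST:
        return State.WORKING  # remote operator resolved the issue
    return state
```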
Level 5 Autonomy: At Level 5, the robots begin to learn from experience to improve operation beyond what the human designer programmed in. They learn from each other, both on site and from robot teams at other sites. They learn to predict how events affect their capabilities and to plan proactively.
As an example of how human interaction with the system changes with increasing levels of autonomy, consider the multi-robot agricultural scenario again: at Level 3, the human on the field is responsible for organizing field activity if it is going to rain. At Level 4, the robot team uses weather data from the internet to determine when to go out. At Level 5, the robot team, anticipating that it is going to rain tomorrow, learns to take care of tomorrow's tasks a day early!
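The rain example can be made concrete with a toy scheduler; `fetch_rain_forecast` is a hypothetical stand-in for whatever weather service a deployment would query, returning dummy data here:

```python
from datetime import date, timedelta

def fetch_rain_forecast(day: date) -> float:
    """Hypothetical: probability of rain for `day` from an online weather API."""
    return 0.8 if day.weekday() == 2 else 0.1  # dummy data: Wednesdays are wet

def plan_field_day(today: date, rain_threshold: float = 0.5) -> dict:
    """Level 4: stay in if today looks wet. Level 5: also pull tomorrow's
    rain-sensitive tasks forward to today when tomorrow looks wet."""
    plan = {"work_today": fetch_rain_forecast(today) < rain_threshold,
            "pull_tomorrow_forward": False}
    if plan["work_today"] and fetch_rain_forecast(today + timedelta(days=1)) >= rain_threshold:
        plan["pull_tomorrow_forward"] = True  # anticipate: do tomorrow's tasks today
    return plan

# A dry Tuesday before a wet Wednesday: work today and pull tomorrow's tasks forward.
print(plan_field_day(date(2024, 5, 14)))
```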
We have used an agricultural example, but the same progression applies in other industries where precise control of the operating environment is not possible. For example, a disinfecting robot deployed at a hospital and operating at Level 3 could be monitored by a single person. At Level 4, teams of robots across multiple hospitals could be monitored from remote centers, while at Level 5, disinfecting robot teams would predict human movements from past patterns and proactively position themselves in areas where they expect high traffic.
We hope that this framework makes it easier to systematically analyze the readiness of the robots under consideration and helps achieve realistic deployments across the majority of field robotics applications. We believe that most autonomous robotic products will go through this maturity lifecycle. We have tied the levels to concrete, deliverable product requirements expressed in terms of human user interaction: a human-centric taxonomy designed to overcome the criticism drawn by abstract SAE terms like conditional automation or partial automation. Finally, by keeping the descriptions of the levels high-level and general across domains, we aim to facilitate planning and decision-making across industries interested in adopting autonomous robots.
Authors:
Girish Chowdhary: Co-founder and CTO, EarthSense, Inc.; Associate Professor, Agricultural and Biological Engineering and Computer Science, UIUC; Chief Scientist, UIUC Center for Digital Agriculture Autonomous Farm; Associate Director, AIFARMS National AI Institute.
Chinmay Soman: Co-founder and CEO, EarthSense, Inc.
Katherine Driggs-Campbell: Assistant Professor, Electrical and Computer Engineering, UIUC; member, UIUC Center for Digital Agriculture.
Acknowledgement: This research received support from the National Science Foundation (STTR Award #1951250) as well as from the University of Illinois Center for Digital Agriculture.