Safe Autonomy: Learning, Verification, and Trusted Operation of Autonomous Systems

December 7-8, 2020
9 am to 2 pm PT (Noon to 5 pm ET) Daily
Watch Workshop Videos: DTI YouTube Channel (From our homepage, scroll down to Workshops)

Advances in machine learning have accelerated the introduction of autonomy in our everyday lives. However, ensuring that these autonomous systems act as intended is an immense challenge. Today, when self-driving vehicles or collaborative robots operate in real-world uncertain environments, it is impossible to guarantee safety at all times. A key challenge stems from the uncertainty of the environment itself, and the inability to predict all possible situations and interactions that could confront the system. Machine learning, and its potential ability to generalize, may provide a solution. For example, a learning-based perception system for a self-driving vehicle must be able to generalize beyond the scenes that it has observed in training. Similarly, learned dynamical driving policies must successfully execute agile safety maneuvers in previously unexperienced scenarios. And yet today, these learning algorithms are producing solutions that are not easy to understand and may be brittle to faults and possible cyber-attacks. In addition, machine learning-based autonomy is largely being designed in isolation from the people who would use it, rather than being built from the ground up for interaction and collaboration.

In this workshop, we explore the scope of safe autonomy, present and identify its challenges, and survey current research developments that move us toward a solution. The workshop includes talks from researchers and practitioners in academia, industry, and government, drawn from diverse areas such as control and robotics, AI and machine learning, formal methods, and human-robot interaction, with applications to ground, air, and space vehicles as well as medical robotics.


Organizers: Geir Dullerud (University of Illinois at Urbana-Champaign), Claire Tomlin (University of California, Berkeley)



Speakers: Pieter Abbeel (University of California, Berkeley), Lars Blackmore (SpaceX), J-P Clarke (University of Texas at Austin), Anca Dragan (University of California, Berkeley), Katie Driggs-Campbell (University of Illinois at Urbana-Champaign), Hadas Kress-Gazit (Cornell University), Sayan Mitra (University of Illinois at Urbana-Champaign), Sandeep Neema (Defense Advanced Research Projects Agency), George Pappas (University of Pennsylvania), Daniela Rus (Massachusetts Institute of Technology), Dawn Tilbury (National Science Foundation, University of Michigan), Keenan Wyrobek (Zipline)

(All times are Pacific Time)

Day 1 (Monday, Dec. 7)

8:55 am – 9 am: Opening Remarks, Shankar Sastry (DTI Co-Director, University of California, Berkeley) and R. Srikant (DTI Co-Director, University of Illinois at Urbana-Champaign)

9 am – 9:30 am: Understanding Risk and Social Behavior Improves Decision Making for Autonomous Vehicles, Daniela Rus (Massachusetts Institute of Technology)

Abstract: Deployment of autonomous vehicles on public roads promises increases in efficiency and safety, and requires evaluating risk, understanding the intent of human drivers, and adapting to different driving styles. Autonomous vehicles must also behave in safe and predictable ways without requiring explicit communication. This talk describes how to integrate risk and behavior analysis in the control loop of an autonomous car. I will describe how Social Value Orientation (SVO), which captures how an agent’s social preferences and cooperation affect their interactions with others by quantifying the degree of selfishness or altruism, can be integrated in decision making, and provide recent examples of developing and deploying self-driving vehicles with adaptation capabilities.

Daniela Rus

Speaker: Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Deputy Dean of Research in the Schwarzman College of Computing, all at the Massachusetts Institute of Technology. Rus' research interests are in robotics and artificial intelligence, with a key focus on developing the science and engineering of autonomy. Rus is a Class of 2002 MacArthur Fellow, a fellow of the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, and the Institute of Electrical and Electronics Engineers, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. She is the recipient of the Engelberger Award for robotics and is a Senior Visiting Fellow at MITRE Corporation. Rus earned her PhD in Computer Science from Cornell University.

9:30 am – 10 am: Specifications and Feedback for Safe Autonomy, Hadas Kress-Gazit (Cornell University)

Abstract: What is “safe” when we talk about autonomy? How is it defined and by whom? What happens when we cannot guarantee “safety”? In this talk, I will discuss specifications, synthesis, and feedback mechanisms that require us to be explicit about the definition of safety but also enable us to provide explanations when things go wrong.

Hadas Kress-Gazit

Speaker: Hadas Kress-Gazit is a Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Her research focuses on formal methods for robotics and automation and more specifically on synthesis for robotics – automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems including modular robots, soft robots, and swarms, and synthesizes ideas from robotics, formal methods, control, hybrid systems, and human-robot interaction. She has received multiple recognitions and awards for her research, her teaching, and her advocacy for groups traditionally underrepresented in robotics.

10 am – 10:15 am: Break

10:15 am – 10:45 am: Autonomous Precision Landing of Reusable Rockets, Lars Blackmore (SpaceX)

Abstract: The SpaceX reusable rocket program aims to reduce the cost of space travel by making rockets that can land, refuel, and refly instead of being thrown away after every flight. Autonomous precision landing of a rocket is a unique problem, which has been likened to balancing a rubber broomstick on your hand in a windstorm. Rockets do not have wings (unlike airplanes) and they cannot rely on a high ballistic coefficient to fly in a straight line (unlike missiles). The SpaceX Falcon 9 booster has now landed more than 50 times and has been reused more than 35 times, making reusability a normal part of the launch business. This talk will discuss the challenges involved, how these challenges were overcome, and the new challenges involved in landing Starship, which is designed to carry hundreds of tons of payload to the Moon, Mars, and beyond for a fraction of the cost of previous rockets.

Lars Blackmore

Speaker: Lars Blackmore is the Senior Principal Mars Landing Engineer at Space Exploration Technologies. Lars is responsible for entry and landing of the Starship rocket. Lars’ team developed entry and landing for Falcon 9, which has landed over 50 times, sometimes on land and sometimes on a floating platform. Prior to his time at SpaceX, Lars wrote algorithms for space missions at the NASA Jet Propulsion Lab. He co-invented the G-FOLD algorithm for precision landing on Mars and his control algorithms are currently flying on the SMAP climate observing spacecraft. Lars previously worked with the McLaren Formula One racing team, and has a PhD from the Massachusetts Institute of Technology.

10:45 am – 11:15 am: System Safety and Policy Implications of Autonomy, J-P Clarke (University of Texas at Austin)

Abstract: Certification is a barrier to increased autonomy in civil aviation and other safety-critical domains, perhaps most so because it is difficult to quantify or even bound the performance of non-deterministic machines using schemes that explicitly enumerate their input-output characteristics. However, non-deterministic humans are certified (granted licenses) to perform various functions based on an a priori assessment of their decision-making abilities, which suggests that machines could be certified in a similar way. During this presentation, I will discuss the system safety and policy implications of increasing machine autonomy in safety-critical systems, from systems where humans and machines work in partnership to systems where no humans are involved in decision-making and operation. Further, I will leverage analyses of autonomy in purely human systems to enumerate several attributes and requirements that autonomous machines must satisfy to provide guarantees equivalent to those of humans.

John-Paul Clarke

Speaker: John-Paul Clarke is a Professor of Aerospace Engineering and Engineering Mechanics at the University of Texas at Austin, where he holds the Ernest Cockrell, Jr. Memorial Chair in Engineering. He is an expert in aircraft trajectory prediction and optimization and in the development and use of stochastic models and optimization algorithms to improve aviation efficiency and robustness. Prior to UT Austin, he was a faculty member at Georgia Tech, the Vice President of Strategic Technologies at United Technologies Corporation (now Raytheon), a faculty member at MIT, and a researcher at Boeing and the NASA Jet Propulsion Lab. He has also co-founded multiple companies. Dr. Clarke was co-chair of the National Academies Committee that developed the U.S. National Agenda for Autonomy Research related to Civil Aviation and has chaired or served on advisory and technical committees chartered by the AIAA, EU, FAA, ICAO, NASA, the National Academies, the U.S. Army, and the U.S. DOT. He is a Fellow of the AIAA and is a member of AGIFORS, INFORMS, and Sigma Xi.

11:15 am – 11:30 am: Break

11:30 am – 12 pm: A Trust Management Framework for Calibrating Driver Trust in Semi-automated Vehicles, Dawn Tilbury (National Science Foundation, University of Michigan)

Abstract: Although automated vehicles are expected to become ubiquitous in the future, it will be important for people to trust them appropriately. If drivers overtrust the AV’s capabilities, the risks of system failures or accidents increase. On the other hand, if drivers undertrust the AV, they will not fully leverage the benefits of the AV’s functionalities. Therefore, both types of trust miscalibration (under- and overtrust) are undesirable. We consider the problem of maintaining drivers’ trust in the AV at a calibrated level, in real time, while they operate the AV. To do this, we estimate the driver’s trust in the AV, compare the trust estimate with the trust “reference” that represents the AV’s capabilities in context, and finally influence the driver’s trust to either increase or decrease it. A model for driver trust is developed, a Kalman filter is used to update the estimate in real time, and experimental results are presented that validate the trust management framework.
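The estimate-compare-influence loop described in the abstract can be sketched as a scalar Kalman filter. Note that the trust dynamics, noise values, measurement proxies, and the `trust_action` interventions below are illustrative assumptions, not the model presented in the talk.

```python
# Illustrative scalar Kalman filter for tracking a driver's trust level.
# All parameter values here are assumptions for demonstration purposes.

class TrustEstimator:
    def __init__(self, x0=0.5, p0=1.0, q=0.01, r=0.1):
        self.x = x0      # trust estimate (0 = no trust, 1 = full trust)
        self.p = p0      # estimate variance
        self.q = q       # process noise variance
        self.r = r       # measurement noise variance

    def predict(self, u=0.0):
        # Trust drifts with AV events: u > 0 (e.g. a smooth maneuver)
        # raises trust, u < 0 (e.g. an abrupt disengagement) lowers it.
        self.x = self.x + u
        self.p = self.p + self.q
        return self.x

    def update(self, z):
        # z: noisy trust proxy inferred from driver behavior
        # (e.g. gaze patterns or manual takeovers).
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1.0 - k) * self.p
        return self.x

def trust_action(estimate, reference, band=0.05):
    # Compare the estimate with the context-dependent reference and
    # choose an intervention to nudge trust toward calibration.
    if estimate < reference - band:
        return "increase"   # e.g. explain AV decisions, show capability
    if estimate > reference + band:
        return "decrease"   # e.g. display limitations, prompt attention
    return "hold"

est = TrustEstimator()
for z in [0.55, 0.6, 0.62, 0.65]:   # driver appears increasingly trusting
    est.predict()
    est.update(z)
print(trust_action(est.x, reference=0.4))  # -> decrease (overtrust in context)
```

The key design point the abstract highlights is that calibration is two-sided: the same comparison against the reference triggers interventions both to raise and to lower trust.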

Dawn M. Tilbury

Speaker: Dawn M. Tilbury has been a professor of Mechanical Engineering at the University of Michigan since 1995. Her research interests lie broadly in the area of control systems, including applications to robotics and manufacturing systems. Since 2017, she has been the Assistant Director for Engineering at the National Science Foundation, where she oversees a federal budget of nearly $1 billion annually, while maintaining her position at the University of Michigan. She has published more than 200 articles in refereed journals and conference proceedings. She is a Fellow of both IEEE and ASME, and a Life Member of SWE.

12 pm – 12:30 pm: Assured Autonomy, Sandeep Neema (Defense Advanced Research Projects Agency)

Abstract: The DARPA Assured Autonomy program aims to advance how computing systems can learn and evolve with machine learning to better manage variations in the environment and enhance the predictability of autonomous systems like driverless vehicles. In this talk, I will provide an overview of this DARPA program along with key results. Specifically, the talk will discuss rigorous design and analysis technologies for continual assurance of learning-enabled autonomous systems in order to guarantee safety properties in all phases of the system lifecycle.

Sandeep Neema

Speaker: Sandeep Neema joined DARPA in July 2016 and again in September 2020. His research interests include cyber-physical systems, model-based design methodologies, distributed real-time systems, and mobile software technologies. Prior to joining DARPA, Dr. Neema was a Professor of Electrical Engineering and Computer Science at Vanderbilt University. Dr. Neema participated in numerous DARPA initiatives throughout his career, including the Transformative Apps, Adaptive Vehicle Make, and Model-based Integration of Embedded Systems programs. Dr. Neema has authored or co-authored more than 100 peer-reviewed conference and journal publications and book chapters. Dr. Neema holds a doctorate in electrical engineering and computer science from Vanderbilt University, and a master's in electrical engineering from Utah State University. He earned a bachelor of technology degree in electrical engineering from the Indian Institute of Technology, New Delhi, India.

12:30 pm – 12:45 pm: Break

12:45 pm – 2 pm: Panel

Day 2 (Tuesday, Dec. 8)

9 am – 9:30 am: Towards Safe and Stable Reinforcement Learning, Pieter Abbeel (University of California, Berkeley)

Abstract: Deep reinforcement learning has seen many successes, ranging from the classical game of Go and video games to robots learning a wide range of skills through their own trial and error. However, when such trial-and-error learning needs to happen in the real world, it can be costly, and learning can destabilize. In this talk I will discuss new approaches to satisfying (safety) constraints during learning and ensuring stable learning progress.
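One well-known family of approaches to constraint satisfaction during learning is "shielding": a separately verified safety monitor marks which actions are admissible in the current state, and the learner's greedy choice is restricted to that set. This minimal sketch is illustrative only and is not necessarily the approach presented in the talk.

```python
import numpy as np

def shielded_action(q_values, safe_mask):
    # Restrict the greedy action choice to those actions a safety
    # monitor has marked admissible in the current state. Unsafe
    # actions are masked out with -inf before taking the argmax.
    q = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(q))

q = np.array([2.0, 5.0, 1.0])          # learned action values
safe = np.array([True, False, True])   # action 1 is flagged unsafe
print(shielded_action(q, safe))        # -> 0, the best admissible action
```

Because the shield intervenes at action-selection time, the constraint holds during training as well as at deployment, not just after convergence.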

Pieter Abbeel

Speaker: Pieter Abbeel is a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He strives to build ever more intelligent systems. His lab pushes the frontiers of deep reinforcement learning, deep imitation learning, deep unsupervised learning, transfer learning, meta-learning, and learning to learn, as well as studying the influence of AI on society. Abbeel has received many awards and honors, including the Presidential Early Career Award for Scientists and Engineers (PECASE), NSF-CAREER, Office of Naval Research-Young Investigator Program (ONR-YIP), DARPA-YFA, and TR35. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and Tech Review.

9:30 am – 10 am: Towards Verified Robot Code, Sayan Mitra (University of Illinois at Urbana-Champaign)

Abstract: Distributed robotics is poised to transform transportation, agriculture, delivery, and exploration. Following the trends in cloud, mobile, and machine learning applications, finding the right programming abstractions is key to unlocking this potential. A robot’s code needs to sense the environment, control the hardware, and communicate with other robots. Current programming languages do not provide the necessary hardware platform-independent abstractions, and, therefore, developing robot applications requires detailed knowledge of control, path planning, network protocols, and various platform-specific details. Porting applications across hardware platforms is tedious. In this talk, I will present our recent explorations in finding good abstractions for robot code. The end result is a new language called Koord which abstracts platform-specific functions for sensing, communication, and low-level control and makes platform-independent control and coordination code portable and modularly verifiable.

Sayan Mitra

Speaker: Sayan Mitra is a Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. His research is in algorithmic analysis of autonomous systems like self-driving cars and spacecraft. Several algorithms and tools developed by his lab have been commercialized. His new book Verifying Cyber-Physical Systems: A Path to Safe Autonomy will be published by MIT Press in 2021. Sayan holds a PhD from MIT and held postdoctoral and visiting positions at Caltech, Oxford, and TU Vienna. His work has been recognized by the NSF CAREER Award, AFOSR Young Investigator Award, IEEE-HKN Teaching Award, a Siebel Fellowship, and several best paper awards.

10 am – 10:15 am: Break

10:15 am – 10:45 am: Optimizing Intended Rewards: Extracting All the Right Information from All the Right Places, Anca Dragan (University of California, Berkeley)

Abstract: AI work tends to focus on how to optimize a specified reward function, but reward functions that consistently lead to the desired behavior are not so easy to specify. Rather than optimizing a specified reward, which is already hard, robots have the much harder job of optimizing the intended reward. While the specified reward does not carry as much information as we make our robots pretend, the good news is that humans constantly leak information about what the robot should optimize. In this talk, we will explore how to read the right amount of information from different types of human behavior, and even the lack thereof.

Anca Dragan

Speaker: Anca Dragan is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley, where she runs the InterACT lab. Her goal is to enable robots to work with, around, and in support of people. She works on algorithms that enable robots to a) coordinate with people in shared spaces, and b) learn what people want them to do. Anca did her PhD in the Robotics Institute at Carnegie Mellon University on legible motion planning. At Berkeley, she helped found the Berkeley AI Research Lab, is a co-PI for the Center for Human-Compatible AI, and has been honored by the Presidential Early Career Award for Scientists and Engineers (PECASE), the Sloan fellowship, the NSF CAREER award, the Office of Naval Research Young Investigator Program, the Okawa award, MIT's TR35, and an IJCAI Early Career Spotlight.

10:45 am – 11:15 am: Safe Autonomy with Deep learning in the Feedback Loop, George Pappas (University of Pennsylvania)

Abstract: Deep learning has been extremely successful in computer vision and perception. Inspired by this success in perceiving environments, deep learning is now one of the main sensing modalities in autonomous robots, including driverless cars. The recent success of deep reinforcement learning in chess or AlphaGo suggests that robot planning and control will soon be performed by deep learning in a model-free manner, disrupting traditional model-based engineering design. However, recent crashes in driverless cars, as well as adversarial attacks on deep networks, have exposed the brittleness of deep-learning perception, which can lead to catastrophic decisions. There is a tremendous opportunity for the cyber-physical systems community to embrace these challenges and develop principles, architectures, and tools to ensure the safety of autonomous systems. In this talk, I will present our approach to ensuring the robustness and safety of autonomous robots that use deep learning as a perceptual sensor in the feedback loop. Using ideas from robust control, we develop tools to analyze the robustness of deep networks to ensure that the perception of the environment is more accurate. Critical to our approach is creating semantic representations of unknown environments while also quantifying the uncertainty of semantic maps. Autonomous planning and control need to both embrace such semantic representations and formally reason about the environmental uncertainty produced by deep learning in the feedback loop, leading to autonomous robots that operate with prescribed safety in unknown but learned environments.
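A simple concrete instance of the robust-control view of deep networks (an illustrative sketch, not the specific tools from the talk) is bounding a network's sensitivity to input perturbations: for a feedforward network with 1-Lipschitz activations (ReLU, tanh), the product of the layer weight matrices' spectral norms is an upper bound on the network's Lipschitz constant.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    # Crude Lipschitz upper bound for a feedforward network with
    # 1-Lipschitz activations: the product of the spectral norms
    # (largest singular values) of the weight matrices.
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)   # ord=2 gives the spectral norm
    return bound

rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]
L = lipschitz_upper_bound(layers)
# Any two inputs x, y then satisfy ||f(x) - f(y)|| <= L * ||x - y||,
# which caps how much a small perturbation can change the output.
```

This product bound is typically loose; tightening such certificates with semidefinite-programming and robust-control machinery is one motivation for the analysis tools the abstract describes.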

George J. Pappas

Speaker: George J. Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds a secondary appointment in the Departments of Computer and Information Sciences, and Mechanical Engineering and Applied Mechanics. He is a member of the General Robotics, Automation, Sensing & Perception (GRASP) Lab and the PRECISE Center. He has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control systems, robotics, formal methods, and machine learning for safe and secure autonomous systems. He has received various awards such as the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation Presidential Early Career Award for Scientists and Engineers (NSF PECASE), the George H. Heilmeier Faculty Excellence Award, and numerous best paper awards. He is a Fellow of the Institute of Electrical and Electronics Engineers and of the International Federation of Automatic Control. He was the inaugural steering committee Chair of CPSWeek and served on the inaugural organizing committee for the new Learning for Dynamics and Control Conference. More than thirty alumni of his research group are now faculty in leading universities around the world.

11:15 am – 11:30 am: Break

11:30 am – 12 pm: Fantastic Failures and Where to Find Them: Designing Trustworthy Autonomy, Katie Driggs-Campbell (University of Illinois at Urbana-Champaign)

Abstract: Autonomous robots are becoming tangible technologies that will soon impact the human experience. However, the desirable impacts of autonomy are only achievable if the underlying algorithms are robust to real-world conditions and are effective in (near) failure modes. This is often challenging in practice, as the scenarios in which general robots fail are often difficult to identify and characterize. In this talk, we'll discuss how to learn from failures to design robust interactive systems and how we can exploit structure in different applications to efficiently find and classify failures. We'll showcase both our failures and successes on autonomous vehicles and agricultural robots in real-world settings.

Katie Driggs-Campbell

Speaker: Katie Driggs-Campbell is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. Prior to that, she was a Postdoctoral Research Scholar at the Stanford Intelligent Systems Laboratory in the Aeronautics and Astronautics Department. She received a B.S.E. with honors from Arizona State University in 2012, an M.S. from the University of California, Berkeley in 2015, and a PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley in 2017. Her lab works on human-centered autonomy, focusing on the integration of autonomy into human-dominated fields, merging ideas in robotics, learning, human factors, and control.

12 pm – 12:30 pm: Autonomy at Zipline, Behind the Scenes, Keenan Wyrobek (Zipline)

Abstract: Zipline is the largest operator of autonomous drones in the world. This talk will go behind the scenes, sharing the nuts-and-bolts details of what it takes to operate at this scale.

Keenan Wyrobek

Speaker: Keenan Wyrobek is co-founder and head of product and engineering at Zipline, the world’s first drone delivery service, whose focus is delivering life-saving medicine, even to the most difficult-to-reach places on earth. Prior to Zipline, Keenan was a co-founder of the Robot Operating System (ROS) and led the development of the PR2, the first personal robot for R&D. Keenan has spent his career delivering high-tech products to market across a range of fields, including medical robotics. You can find Keenan on Twitter @keenanwyrobek.

12:30 pm – 12:45 pm: Break

12:45 pm – 2 pm: Panel