Imagine an autonomous vehicle navigating a smoke-filled mine, searching for survivors, personal belongings or any other clue that someone might be alive. It identifies the objects it sees and decides which paths to take first. When it reaches the limit of where it can explore, a drone sitting on the vehicle flies off to explore the hard-to-reach corners of the mine. All of this happens without any communication with the outside world. Believe it or not, this isn’t science fiction! Team Explorer from Carnegie Mellon University and Oregon State University did exactly this to win the first event of the Defense Advanced Research Projects Agency (DARPA) Subterranean Challenge.
Today we live in the age of data-driven artificial intelligence (AI), where machine intelligence systems solve difficult problems by working through hundreds of millions of trials or training episodes. Hard problems in perception and decision-making that the community considered out of reach even in the recent past are now being solved successfully with techniques such as reinforcement learning (RL).
I’ve often thought about how advances like these in machine perception and automated decision-making could help us do things like build intelligent robots, and in particular tackle the challenges of optimal control of dynamical systems. And since my early days as a graduate student at Carnegie Mellon, I’ve been fascinated by the tight loop between perception — using computer intelligence to sense surroundings — and action — using that feedback and data to make decisions. Today, our work teaching computers to play games (e.g., mastering Ms. Pac-Man) has the potential to fundamentally change the way we build control systems. The applications span a wide range of industries, with profound implications for safety and productivity — going well beyond the self-driving cars that dominate today’s news cycles.
Today’s engineered devices and systems use rules-based logic to bring together the scientific principles, technology and mathematics that subject matter experts and engineers have painstakingly discovered over time. But what if the engineers of the future could build control systems infused with machine intelligence that go beyond rules-based logic and respond in real time to changing environments to accomplish their goals? Technologies such as RL, which are seeing tremendous success in video games, will be key to building real-world sequential decision-making mechanisms and will power our next generation of autonomous systems.
Helping engineers build action-perception loops for the real world
Translating the success of RL in video games to real-world autonomous systems poses big challenges — for one, no one loses a life by making the wrong move in a video game! AI can’t learn from its failures as easily in the real world, where the potential cost of mistakes can be huge. Additionally, newer AI techniques are data hungry: many of these gaming tasks require hundreds of millions of tries before a seemingly respectable policy can be trained. Operating physical systems like machines or chemical processes for millions of cycles just to generate training data can be a very expensive proposition.
Today, I’m excited to talk about how new breakthroughs in machine teaching and high-fidelity simulation will enable you to tackle these challenges.
Machine teaching – a new paradigm that infuses domain knowledge to improve learning
Our researchers have been hard at work developing machine teaching, which infuses expert domain knowledge and harnesses human expertise to break a big problem into easier, smaller tasks. It also gives AI models important clues about how to find a solution faster, dramatically accelerating model training time. There’s still AI under the hood, but you, as the expert, provide examples — lesson plans — that help the learning algorithms solve the task at hand. And because you are the one giving the lessons, describing the goals, desired behavior and safety boundary conditions, the resulting AI models are far more explainable and auditable once they are deployed. I know I wouldn’t want a “black-box” AI model running the control loop for my systems!
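To make the lesson-plan idea a bit more concrete, here is a minimal sketch in Python. Everything in it — the Lesson class, the curriculum, the placeholder training loop — is hypothetical and simply illustrates how an expert might decompose a control task into progressively harder lessons with explicit goals and safety bounds; it is not an actual machine teaching API.

```python
# Hypothetical sketch of the "lesson plan" idea behind machine teaching.
# None of these names come from a real product API; they only illustrate
# decomposing a control task into expert-authored lessons.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Lesson:
    name: str
    start_difficulty: float    # e.g., magnitude of the initial disturbance
    success_threshold: float   # average reward needed to "graduate" the lesson
    safety_bounds: Dict[str, float]  # hard limits the policy must respect


def train_on_lesson(policy: Callable[[float], float], lesson: Lesson,
                    episodes: int = 100) -> float:
    """Placeholder training loop: returns the average reward achieved."""
    total = 0.0
    for _ in range(episodes):
        total += policy(lesson.start_difficulty)  # stand-in for a full RL rollout
    return total / episodes


def teach(policy: Callable[[float], float], curriculum: List[Lesson]) -> None:
    """Run the expert-authored lessons in order, easiest first."""
    for lesson in curriculum:
        reward = train_on_lesson(policy, lesson)
        status = "passed" if reward >= lesson.success_threshold else "needs more training"
        print(f"{lesson.name}: avg reward {reward:.2f} ({status})")


if __name__ == "__main__":
    # Start easy, then add disturbances while tightening the safety limits.
    curriculum = [
        Lesson("hold steady", start_difficulty=0.1, success_threshold=0.8,
               safety_bounds={"max_tilt_deg": 30}),
        Lesson("recover from gusts", start_difficulty=0.5, success_threshold=0.6,
               safety_bounds={"max_tilt_deg": 20}),
    ]
    toy_policy = lambda difficulty: 1.0 - difficulty  # stand-in for a learned policy
    teach(toy_policy, curriculum)
```

The point of the sketch is the structure, not the toy numbers: the expert encodes what “good” looks like and where the hard safety limits are, and the learning algorithm works within that frame.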
Borrowing a quote from Alfred Aho and Jeffrey Ullman, “Computer Science is the science of abstraction, creating the right model for thinking about a problem and devising the appropriate mechanizable techniques to solve it.” I think of machine teaching as the abstraction we are creating, the right model for thinking about applying domain expertise to AI systems. It can help bridge the model-first mindset of engineers and the code-first mindset practiced by software developers.
High-fidelity simulations – a critical path to gather experiences at scale
Like machine teaching, simulations offer a way to generate synthetic data that can train machine intelligence systems at scale and without taking unnecessary risks. Simulations are a safe and cost-efficient way to train AI models — if you can model the key elements such as the devices, the sensors and the environment interacting with your system. That lets you simulate a wide range of scenarios, including edge cases — such as when a certain sensor or actuator fails — to teach the AI how to adapt to those situations.
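As a rough illustration of that idea, here is a toy Python simulation. The dynamics, the sensor model and the failure probabilities are all made up for the example; the point is that simulated episodes can deliberately include degraded conditions, like a sensor dropping out, so the resulting data covers situations you would never want to provoke on real hardware.

```python
# Toy simulation sketch: generate training episodes that include an
# edge case (an intermittently failing sensor). Purely illustrative.
import random
from typing import Optional


def read_sensor(true_value: float, failure_prob: float) -> Optional[float]:
    """Simulated sensor: occasionally drops out to mimic hardware failure."""
    if random.random() < failure_prob:
        return None                               # sensor failure edge case
    return true_value + random.gauss(0, 0.05)     # normal reading with noise


def run_episode(failure_prob: float, steps: int = 50) -> float:
    position, velocity, reward = 1.0, 0.0, 0.0
    for _ in range(steps):
        measurement = read_sensor(position, failure_prob)
        # A robust controller must cope with a missing measurement; here we
        # simply hold zero command, but a learned policy could be trained on
        # episodes like this one to find a better fallback.
        command = -0.5 * measurement if measurement is not None else 0.0
        velocity += command * 0.1
        position += velocity * 0.1
        reward -= abs(position)                   # penalize distance from setpoint
    return reward


if __name__ == "__main__":
    # Sweep failure rates to produce both nominal and degraded scenarios.
    for p in (0.0, 0.1, 0.3):
        scores = [run_episode(p) for _ in range(100)]
        print(f"sensor failure prob {p:.1f}: mean episode reward {sum(scores) / len(scores):.2f}")
```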
For example, we built an open-source simulator for aerial and other robotic vehicles called Aerial Informatics and Robotics Simulation, or AirSim for short. AirSim allows the simulation of a wide variety of environments, lighting conditions, sensors and fusion of sensor data. The ability to build and test near-realistic autonomy pipelines in AirSim is part of how Team Explorer secured its win.
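For readers who want a feel for what working with AirSim looks like, here is a short sketch based on its published Python client. It assumes an AirSim simulation is already running and connectable, and the exact method names and arguments may differ between AirSim releases.

```python
# Sketch using AirSim's Python client: fly a simulated drone and grab a
# camera frame. Requires a running AirSim environment; API details may
# vary by AirSim version.
import airsim

client = airsim.MultirotorClient()   # connect to the running simulator
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
# Fly to a point 10 m forward and 5 m up (NED coordinates: negative z is up) at 3 m/s.
client.moveToPositionAsync(10, 0, -5, 3).join()

# Capture an uncompressed scene image from the front camera ("0").
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
print(f"captured {len(responses)} image(s), "
      f"{responses[0].width}x{responses[0].height} pixels")

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```

Because the same client API drives both the simulated and, with care, real vehicles, policies exercised this way in simulation can be carried toward hardware with far less risk than learning from scratch in the field.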
Most of our customers use highly specialized simulation software for their specific use cases. We’re working with leading simulation makers in the industry like MathWorks to bring these simulators to Azure. MathWorks is the leading developer of mathematical computing software, including MATLAB and Simulink, used by millions of engineers and scientists to design complex embedded and multidomain systems. These partnerships will enable you to easily produce the large volumes of synthetic data needed to quickly train AI models for your specific use case.
The possibilities are endless, and the time is now
We’re continuing to bring engineers and designers AI that harnesses their expertise, with trustworthy autonomy as the foundation for accelerated innovation. Customers like Delta, Shell and Toyota are already starting to use and benefit from this approach. From industrial applications to search-and-rescue operations like the DARPA challenge, the uses of this technology will be endless. We hope you will join us on this journey to start inventing the future!
Related:
Visit: Autonomous systems with Microsoft AI
Read: How autonomous systems use AI that learns from the world around it
Read: Helping first responders achieve more with autonomous systems and AirSim
Read: Machine teaching: How people’s expertise makes AI even more powerful
Learn more: Game of Drones Competition at NeurIPS 2019