We discuss the challenges of autonomous systems and ways to automate their development to meet safety requirements and customer expectations.
The great benefits of autonomous systems, whether cars, airplanes, smart cities, smart factories, or something else, depend heavily on software. As they work on new systems, developers face particular software development lifecycle challenges. They must create software, test it, collect data, and rework it. Automation can help speed this process, enabling new capabilities and features to be developed and refined in shorter times.
RTInsights recently sat down with Matt Jones, VP and Chief Systems Architect at Wind River, to discuss the challenges of autonomous systems and ways to automate development to meet safety requirements and customer expectations. Here is a summary of our conversation.
RTInsights: What’s next for autonomous systems? What’s changing in the technology landscape now to propel wider use and more diverse use cases?
Jones: In the past, there have been many embedded systems with automatic functionality. But if we look at what's coming next, that would be true autonomous systems that interact with humans daily. These are things like Level 5 autonomous cars, robo-taxis, factory robots, or drones dropping off my packages.
The big difference with these systems is that they will have more human-like intelligence. They're also going to be interacting with humans in ways they just haven't in the past. It's very different from that simple automatic door opening into your local supermarket.
This brings many challenges to the way these systems are architected. When dealing with more complex autonomous systems, you’re not going to get it right the first time. You guess what that system should do. You go and test it in the real world. You get the data back. You improve your guess, and you improve your code. You repeat this experiment over and over. Every time you retest and improve code, that takes time. I call that the iteration time or the iteration cycle time.
There will be many different pieces of software that need testing and improvement. Suppose you can reduce that iteration time so that, instead of testing software once a week, a developer could test it once an hour. That's roughly 40 times faster. If I can reduce that cycle time from 40 hours to one hour, I can deliver a given quality of product 40 times faster.
Now imagine a time when, instead of just testing their software on one device every week, they are able to test it every hour on 4,000 devices at cloud scale. As opposed to just being able to update one thing and test it in the real world, one developer can now spin up 4,000 things nearly instantaneously. A developer then gets 4,000 times as much data to make huge improvements so much faster.
In some ways, this is the next step in autonomous evolution. It’s about turning these system creators and developers into superheroes. I don’t say that lightly. How can we make them amazing? How can we give them the tools to create these autonomous cars and drones of the future?
Developers’ tools will also become autonomous in their own right. Imagine the developer manually testing a device, pressing the button, and seeing what it does. In the future, when you’ve got 4,000 of these, no human is going to be able to press all the buttons at once. How can you have autonomous agents helping and guiding everything that the developer does? That includes code scanning, license scanning, and automatically testing every time he or she does something. It’s almost like you have that single developer and a team of loyal wingmen, autonomous agents, making the developer even more effective.
So, in the past, we’ve had automation. In the future, we will have autonomy throughout the development, deployment, and operations of all these different future, exciting systems. I hope technologies to help developers be more creative faster will help spur hockey stick growth when it comes to autonomous system adoption.
RTInsights: What are the development challenges of bringing these systems to market? Are the challenges more of a technical nature or a regulatory nature?
Jones: With any systems that you have on the road, like a connected autonomous vehicle, I like to think through a PEST analysis: the political, economic, social, and technical challenges.
Let’s talk about technology first. The technology challenge has historically been about the processing power, the costs of this various hardware, and bringing that to scale. Now the greater technology challenge is often about the software.
Additionally, besides the technology, there are many other complex issues that must be addressed. For autonomous cars, consider the social acceptance aspect. What is the comfort level of drivers around autonomous cars? What if a truck driver is asleep but the truck is safely navigating the road autonomously? Does this disturb others' perspective about the autonomous truck?
Think about how you might feel if a car comes toward you when you’re about to step out into the road to cross with your family. The car doesn’t appear to be stopping. Worse still, there’s nobody to make eye contact with.
Prior to COVID, I used to take a lot of flights. To get around, I would often use a ride share application. The driver would turn up, and I’d get into the back of the car. I’d say, “Hi.” I’d look at my phone the whole journey. My boss would likely be pleased, since I’m probably checking work email. When I got to wherever I was going, I’d say, “Thank you very much,” and I’d get out of the car.
The difference between that journey and a future autonomous car journey is that I don’t have to say hello and thank you. Everything else about my experience would be the same, despite the autonomy.
In all these cases, I do believe that there will be social acceptance with time.
Let's look at the economic challenge. Think about planes today, like hundred-million-dollar Boeing airliners. They must stay in the air 16 to 18 hours a day to make a profit. It's all about sweating the assets. With all the autonomous vehicles we'll have on the roads in the future, it's a similar situation. They're going to be incredibly expensive with the processing power they've got, with the LIDAR sensors, the radar sensors, and with everything you need to support and maintain them.
But then think about a metropolitan area, perhaps a financial district where there is a daytime rush but little activity late at night on the streets. That's not helpful for sweating the asset of an autonomous vehicle. You have a peak rush hour for maybe two or three hours in the morning, and another rush hour at the end of the day, with perhaps eight hours of "It's not too bad" in between. If you want to sweat that asset for 18 hours a day, you end up sizing for the quietest periods, the lowest common denominator, normally about 8% of peak traffic, and you're unable to serve the other 92% of demand when the rush comes.
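The utilization point above is easy to make concrete with a toy demand profile. The hourly figures here are illustrative assumptions (two rush peaks, a moderate midday, a quiet tail), not traffic data:

```python
# Rough sketch of asset utilization over an 18-hour operating day with
# peaky demand. All figures are illustrative assumptions.

# Hourly demand as a fraction of peak: a 3-hour morning rush at 100%,
# 4 moderate hours at 40%, a 3-hour evening rush, 4 more moderate hours,
# then 4 quiet hours near the ~8% floor mentioned above.
demand = [1.0] * 3 + [0.4] * 4 + [1.0] * 3 + [0.4] * 4 + [0.08] * 4

avg_utilization = sum(demand) / len(demand)
print(len(demand))                 # 18 operating hours
print(round(avg_utilization, 3))   # ~0.529: the fleet sits half-idle on average
```

Even with generous midday demand, the average vehicle earns for barely half its operating hours, which is why the "sweat the asset" economics of a $100M airliner don't transfer cleanly to a robo-taxi fleet.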
Then you’ve got that final piece in PEST, which would be the political challenges. I bring it up last because this is about the laws and regulations.
I think about that in two ways. In the future world of connected autonomous cars, I want and need one brand of car to talk to another brand. How are those manufacturers going to enable this? Who is going to agree there? If that’s where legislation plays a role, what happens when new innovation is needed? How do we have technology that’s able to keep up faster?
Now consider all the different countries and the different states in the US. We have 50 different DMVs, 51 with DC, 52 with Puerto Rico, with different rules in all those areas. What happens if you want to take a trip from the East Coast to the West Coast in an autonomous car? How does all of that get aligned in a way that makes it easy for the car manufacturers? Then if you consider that one automotive company sells into 184 markets, including all the different states, that's an awful lot of regulations they need to encode going forward.
We're some way away on each of those four areas. But, in a way, what will make this real, and what will help overcome these barriers, is for people to experience all these different systems, driven by the developers, software engineers, and product people in all these companies who want to make it a reality faster.
RTInsights: What can help? Is the tech industry taking a consolidated approach to getting public acceptance?
Jones: In many industries, there are standards and standards bodies. For aviation, you have the FAA with regulations for avionics systems and how to safely certify them. In the automotive industry, there’s been a push over the last 10, 12 years into safety with ISO 26262 and efforts to create safe software.
Standard alliances say, “This is the minimum level of safety that you would have; this is what you need to do” to be a reasonably prudent engineer within one of these industries. That sentiment is augmented with collaborative technical alliances.
Look at AUTOSAR, which originally looked at how different ECUs [electronic control units] would communicate and share software. Now AUTOSAR is looking at this next generation of autonomous system being portable across different ECUs and standard APIs across car companies.
If I look at the FAA and the aviation and avionics industry, if I look at automotive, if I look at industrial, they have different needs. They've got different requirements. They've got different safety standards. Yet they all stem from a common root specification.
But the question that I have is about the next generation. How is this going to work when it comes to a smart city? How will I have my smart city of IoT-enabled 5G cell towers, enabled by Verizon on the Wind River Studio cloud platform? Where are all the mission-critical devices on traffic lights, potentially running Windows or Linux, with autonomous vehicles communicating via those cell towers, or via other mechanisms, with the traffic management system, again using VxWorks, Windows, or Linux? Who's specifying that language or those protocols?
The good news, in some ways, is that this is not the biggest challenge. It's how we're talking over the internet right now, give or take. I can see you. That was unheard of 10 years ago. Thanks to alliances like the W3C [the World Wide Web Consortium], every time you type www at the start of a web address, you can reach Google. It's these massive standards that we've used to create communications over the web. We now need to translate that in a safe and secure way into these future technologies.
The real barrier to entry, in my eyes, is that people believe the entire system that enables these future autonomous systems is totally differentiating in its own right. It's not. It's only really the application. It's about how we can give people open access to say, "Here. This is how you build on these different operating systems. This is how you intercommunicate. Please improve these autonomous models on top."
Your differentiation is that your car, your traffic lights, your plane will operate better than others. That differentiation lives in the application space, in the pieces above that safety standard. We need people to share, to collaborate, and to say what types of data they want to communicate to go to that next level.
RTInsights: What are the challenges in training complex autonomous systems?
Jones: With any autonomous system, you need access to the data it is creating, not just from a single device, but ideally from fleets of devices, to understand how they perceive the environment around them. Once you have this data, you can run various algorithms. You can train models in different ways to create new ones, redeploy, and effectively go around this loop.
One of the challenges of this training is that there's so much data coming off these autonomous systems. Imagine all the sensors you have on a Boeing or an Airbus. Humans don't necessarily understand the linkage between all these systems. Nor do they necessarily understand how a computer can figure out all these statistical probabilities: that "if this and this happens," which to us would seem completely uncorrelated, "X is going to happen next."
You have this challenge to understand the incredibly complicated math at cloud scale with billions, if not trillions, of data points and how that condenses down to a model. The big challenge then becomes how do I know that that model is safe?
You could say that I could test it based on "proven in use." That's hard for some, because it takes a lot of data. Or you could effectively have these cascading systems: if we took a car today and tried to drive it at a wall, it would apply the brakes. It has this safety envelope around it. It is not truly autonomous. It's just saying, "I'm going at this speed, and there's something in front of me. When it gets to this distance, please apply the brakes because I don't want to crash."
How can you have these overlapping systems where I have my safety envelope that’s just state machine based, “If this, then that?” Then you have more like a robotized driver or pilot on top of it looking at stitching together all this different data from cameras, radar, LIDAR, and all these other systems to build up a statistical probability of the world. For the future, you’re going to need both of those pieces to achieve what’s necessary for autonomous operations.
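The deterministic envelope layer described above can be sketched as a simple rule that sits underneath whatever statistical driving model runs on top. The deceleration and margin figures here are illustrative assumptions, not certified parameters:

```python
# Minimal sketch of the "if this, then that" safety envelope: a
# deterministic stopping-distance check that overrides the statistical
# driving model. Thresholds are illustrative assumptions.

def braking_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Distance needed to stop from a given speed at a fixed
    deceleration, from the kinematic relation v^2 / (2a)."""
    return speed_mps ** 2 / (2 * decel_mps2)

def safety_envelope(speed_mps: float, obstacle_distance_m: float,
                    margin_m: float = 5.0) -> str:
    """State-machine rule: command a brake whenever the obstacle is
    inside the stopping distance plus a fixed safety margin."""
    if obstacle_distance_m <= braking_distance_m(speed_mps) + margin_m:
        return "BRAKE"
    return "CONTINUE"

# At 20 m/s (~72 km/h) the car needs ~33.3 m to stop, plus a 5 m margin.
print(safety_envelope(20.0, obstacle_distance_m=30.0))  # BRAKE
print(safety_envelope(20.0, obstacle_distance_m=60.0))  # CONTINUE
```

The design point is that this layer is trivially auditable: its behavior is a closed-form function of speed and distance, so it can be verified exhaustively even when the perception stack above it cannot.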
RTInsights: What’s the role of automation in developing and training autonomous systems?
Jones: We just briefly discussed why training is needed and how you would train. But the next question is, what are you training for? I could train an autonomous system, and I can guarantee that it will get from New York to Portland. But I can’t guarantee that you’re going to get a smooth ride. I can’t guarantee you it’s going to take the optimum route.
Can you imagine getting your first ride on an autonomous car, and it was safe, and it wasn’t going to crash, but you felt that it was completely out of control the entire way? I think this is the bit that people forget. That, in reality, you’re training these models to meet and exceed a customer’s expectation.
For autonomous cars, the customer expectation will generally be around the level of the smoothest chauffeur and professional driver they’ve ever met, with perfection the entire time. That is their barometer. People are not explicitly thinking in terms of, “He had a near miss there. Unacceptable.” Even if there was no danger whatsoever, you must train these machines to reach a level of unspoken, unwritten customer expectation.
This, in some ways, is the challenge. We can get something to go from A to B, but then how do we build in user feedback, passenger feedback, rider feedback, and extra pieces of information? Maybe the experience feedback is based on accelerometer readings of how fast it goes around corners or how fast it accelerates in given situations? Or how it knows, based on weather conditions, what a plane is going to do when it hits the tarmac on a runway?
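One way the accelerometer idea above could work is to flag samples where lateral acceleration breaks a comfort threshold and roll them up into a ride score. The 3 m/s² threshold and the trace are illustrative assumptions, not an industry standard:

```python
# Sketch of turning raw accelerometer traces into ride-comfort feedback.
# The 3 m/s^2 comfort threshold is an illustrative assumption.

def uncomfortable_events(lateral_accel_mps2: list, threshold: float = 3.0) -> list:
    """Indices of samples where |lateral acceleration| exceeds the threshold."""
    return [i for i, a in enumerate(lateral_accel_mps2) if abs(a) > threshold]

def comfort_score(lateral_accel_mps2: list, threshold: float = 3.0) -> float:
    """Fraction of the ride spent inside the comfort envelope
    (1.0 = perfectly smooth)."""
    bad = len(uncomfortable_events(lateral_accel_mps2, threshold))
    return 1.0 - bad / len(lateral_accel_mps2)

trace = [0.5, 1.2, 3.8, 4.1, 0.9, -3.5, 0.2, 0.1]  # simulated cornering samples

print(uncomfortable_events(trace))  # [2, 3, 5]
print(comfort_score(trace))         # 0.625
```

A signal like this closes the loop the conversation describes: the same fleet telemetry that trains the driving model can score each ride against the unspoken "smooth chauffeur" expectation and feed that back into training.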
The challenge throughout all autonomous system development is the human. It’s the challenge of building a system that we humans would love to ride in and love to interact with. It’s the challenge of helping the human developer parameterize and describe our desired experience into software. It’s, in a way, helping that human then convince others that this is the right thing to do to make these future connected intelligence systems a reality in our daily lives.