SME Interview: How to Achieve Functional Safety Success with LHP and NI
-An Interview with Steve Neemeh From LHP Engineering Solutions
We are in a period of transition. As driver assistance systems become more common, drivers must adjust their driving to the capabilities of the vehicle. These capabilities offer significant safety improvements, but they also change how much of the safety burden remains with the driver, and they require the driver to adapt. One car might not be able to do what another, newer car can do. Drivers must now adapt to varied safety capabilities as they move from vehicle to vehicle.
These increases in complexity constantly evolve the driver-to-vehicle dynamic. In turn, this results in significant changes to how ADAS control systems are designed and implemented. In this first article in a 3-part series, we discuss some of the considerations and dynamics that are in play, as both vehicles and humans adapt to each other and advance safety together.
How much of the safety burden does the vehicle itself carry? The answer can be somewhat complicated, based in part upon how advanced a given system has become. Older, traditional vehicle systems are purely reactive, attempting to do only what the driver commands, regardless of whether the driver is making a safe decision. In an older car, if the driver is navigating a twisting mountain road and pushes the vehicle to a speed that exceeds what the tires, brakes, steering, and driver reaction time can safely handle, that car will allow itself to be driven right off a cliff with no warning, feedback, or preventive intervention.
In contrast, Advanced Driver-Assistance Systems (ADAS) are equipped with technology designed to increase overall safety while the vehicle is being driven. They use human-machine interfaces to augment and improve the driver’s ability to perceive and properly react to hazards on or immediately near the road. Earlier technologies assisted the driver, but the driver remained in clear control. As ADAS technologies have advanced, however, the answer has become more complex: more of the safety burden is placed on the systems themselves, and they have been given more authority and capability to intervene.
Today’s ADAS systems are the product of many decades of development. The refinements have been incremental, each building upon the last, and systems such as anti-lock brakes and traction control have pretty much become the norm. As we move forward with more advanced ADAS features such as lane departure warning, forward collision warning, and traffic sign recognition, we're getting into whole new fields that require a greater degree of integration of complex systems, beyond just the control of the machine. Now the systems not only have to inform the driver; in some instances, they might be designed to override the driver. And once we progress into fully autonomous vehicles, the systems will have to evolve from just assisting the driver to taking over primary control of the vehicle’s operation.
This advanced decision-making ability requires the melding of advanced capabilities, like perception and sensor fusion. These capabilities must be absolutely accurate, and they must work in harmony. Some of the more advanced capabilities may be brought into the automotive space from other vehicle realms, but they must then be tailored to the unique demands of the automotive realm. The solution is not as simple as copying verbatim from a more technically advanced operating environment such as aerospace, plucking the same systems from planes, putting them into cars, and expecting them to work properly. Some technological elements might be usable in both realms, but these are two very different realms with two very different sets of requirements that will require two very different solutions.
A little bit of sensor fusion can be found in more traditional aircraft control systems. However, its use is not nearly as extensive as that which will be required in the automotive space. We have always assumed that a pilot will be present and in control of the aircraft, someone who can make those complicated judgments. But when you get into fully autonomous vehicles that lack an active operator, the system must make all those judgment calls that we take for granted. Traditionally, a trained and capable person is in control of the vehicle. If we are replacing that capability by swapping human control for machine control, the system must be able to reliably handle a significantly greater level of complexity.
Because of the uncertainty inherent in the automotive space, and the almost infinite variety of situations or scenarios that a vehicle can get into, all those scenarios need to be handled in some systematic way. Prioritization is paramount so that realistically, the system can assess the ever-changing scenario accurately and efficiently multiple times per second, without having to take on the burden of identifying every single so-called edge case.
In part, the computational burden can be managed by prioritizing what data needs to be captured and processed. Take obstacle avoidance, for example: you don’t necessarily have to identify the type of object to avoid it; you just have to recognize that it is some sort of object, and then avoid it properly. That becomes less an issue of identification and more an issue of recognizing and tracking its trajectory, trying to determine through the mathematics of the trajectory whether you'll intercept it, and whether you must take evasive action.
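The intercept test described above can be pictured as a closest-point-of-approach calculation under a constant-velocity assumption. The following is a minimal sketch, not any production algorithm; the function names and the 3-metre safety radius are illustrative assumptions.

```python
import math

def time_to_closest_approach(rel_pos, rel_vel):
    """Time at which an object on a constant-velocity path is closest to us.

    rel_pos: (x, y) position of the object relative to our vehicle, in metres
    rel_vel: (vx, vy) velocity of the object relative to our vehicle, in m/s
    """
    vx, vy = rel_vel
    speed_sq = vx * vx + vy * vy
    if speed_sq < 1e-9:            # relative velocity ~ zero: separation is constant
        return 0.0
    px, py = rel_pos
    # Minimize |p + v*t|^2  ->  t = -(p . v) / |v|^2, clamped to the future
    return max(0.0, -(px * vx + py * vy) / speed_sq)

def must_evade(rel_pos, rel_vel, safe_radius=3.0):
    """True if, absent action, the object will pass within safe_radius metres."""
    t = time_to_closest_approach(rel_pos, rel_vel)
    px, py = rel_pos
    vx, vy = rel_vel
    miss_distance = math.hypot(px + vx * t, py + vy * t)
    return miss_distance < safe_radius
```

For instance, an object 50 m directly ahead closing at 10 m/s reaches its closest point in 5 seconds and passes within the safety radius, so evasion is flagged; the same object offset 10 m laterally is not a threat.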
In that scenario, those types of calculations are complex, but they are not the hard part. The hard part is assessing the current, as-is operating state. Are you accurately positioned? In other words, are you really where you think you are, and are you actually moving in the direction you think you are going? What are all the threats around you? The moving objects? The static objects? How accurate is the map you're using to navigate? All those things need to be collected and fused together in some way to form the perception of the autonomous system, which can then be used to formulate the immediate future, such as the path plan, which is then executed via the control system.
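One simple way to picture the "fusing" step is inverse-variance weighting of independent position estimates, a basic building block behind filters such as the Kalman filter. This is only a sketch; the sensor sources and variance values in the example are invented for illustration.

```python
def fuse_estimates(estimates):
    """Fuse independent 1-D measurements by inverse-variance weighting.

    estimates: list of (value, variance) pairs, e.g. position along a lane
    as reported by GPS, wheel odometry, and a camera-based map match.
    A measurement with lower variance (higher confidence) gets more weight.
    Returns (fused_value, fused_variance).
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(v * w for (v, _), w in zip(estimates, weights)) / total
    # Fusing independent estimates always reduces the combined variance
    return fused_value, 1.0 / total
```

Fusing a noisy GPS fix (value 100.0 m, variance 4.0) with more precise odometry (101.0 m, variance 1.0) yields an estimate pulled toward the odometry reading, with lower variance than either input.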
If you get into Level 3 and Level 4 autonomy, a whole new level of safety is required to have a robust product that's acceptable to the public, and acceptable in terms of potential liabilities and legal risks. When engines or powertrains are developed, we must try to make them as reliable as we can, given the amount of money budgeted for product development and the constraints on the product. How much can the vehicle cost and still be economically viable?
In powertrains, risk management is being done reasonably well. Usually, the worst case with a powertrain issue is that your car breaks down in an inconvenient place. Coming to a stop on a crowded highway can be dangerous. What do you do if you are in the fast lane and the vehicle has stalled? That is a serious event, but it's nothing like what can happen with an autonomous system that is fully in control of the brakes, the acceleration, and the steering.
Let's look further at steering, which could theoretically be the most serious in terms of potential risk. If you have an immediate failure in your control system, you could conceivably have a failure that would result in a hard right turn in a fraction of a second. The operator could not be expected to manage that sudden change and you'd be in a crash, possibly a fatal crash.
So for steering, the safety requirements are much, much higher. In that scenario, you'd have to develop what might be termed aircraft-grade redundancy. That is, the system can suffer a failure and is equipped with the controls, the diagnostics, and the self-assessment needed to respond to that failure in a safe manner. That could be as simple as asking the operator to take control, or safely bringing the car to rest. Or it could be some capability that allows the vehicle to continue to operate for long periods of time.
However, those capabilities typically mean that you have redundant systems. You would have two steering motors, and then two power supplies for those motors. And then redundant sensing for those systems. All those things add complexity and cost. How do you manage it? How do you determine your failure point? In aircraft you have two channels, a main channel and a secondary channel. There is also a backup system that provides a degraded level of performance. In an autonomous automotive environment, how do you switch from a failed system to a functioning system, subsystem, or other channels?
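The main/secondary/backup arrangement described above can be pictured as a simple arbitration routine that falls through to the next healthy channel each control cycle. This is a toy illustration under stated assumptions; real steer-by-wire arbitration involves voting, cross-channel monitoring, and timing guarantees far beyond this sketch.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    healthy: bool        # result of this channel's built-in diagnostics this cycle
    command_deg: float   # steering command this channel computed, in degrees

def select_command(primary, secondary, backup):
    """Pick which channel drives the steering actuator this control cycle.

    Mirrors the main/secondary/degraded-backup arrangement: fall through
    to the next channel only when diagnostics flag the one ahead of it,
    and request a safe stop (or driver handover) if every channel fails.
    """
    for channel in (primary, secondary, backup):
        if channel.healthy:
            return channel.name, channel.command_deg
    # No healthy channel remains: command a safe-stop manoeuvre
    return "safe_stop", 0.0
```

With the primary channel flagged by diagnostics, the secondary's command is used; only when all three channels fail does the system fall back to a safe stop.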
In autonomous automotive systems, machinery whose failure could result in serious injuries must be fitted with those redundant systems. Those systems are expensive. Most of the automotive industry doesn't think about that. And although the aircraft industry does, the big difference is that a commercial aircraft costs about $100 million per plane, while a new car costs tens of thousands of dollars. Yet the cost of those redundant systems would be roughly the same for either the plane or the car. Therefore, in an autonomous car, safety becomes a significantly greater percentage of your total product cost than it is for a commercial aircraft.
There are technical challenges, and there are challenges in terms of cost and capability. But when I look at autonomy, I also see a huge challenge that I don't think has been met yet. The actual control of the vehicle is less of a challenge, because control of the machine is more traditional and can be readily achieved. But integrating all of that into the perception and path planning is a huge challenge for many reasons, mostly managing the uncertainty that can exist and the almost unlimited number of scenarios that must be accounted for.
A systematic process must be created to organize the autonomous system itself, so that you've got a good handle on how the system performs against the test criteria. Given the enormity of the automotive operating space, the process must be made as thorough and consistent and efficient as possible to accommodate the sheer scope and volume of the testing work that must be done.
In automotive, the possible combinations of operating variables are staggering. You could be on a highway or on a country road, in any one of hundreds of thousands of cities, on the street, in a mall parking lot, or in a parking garage. You could be in good weather or bad weather, hot or cold, wet or dry, windy or still. The road could be empty, crowded, or under construction. There could be all sorts of different surface friction coefficients. All those things, and there are many more, are uncertainties. You have the external parts of the operation: What are these objects? Which objects are obstacles, and which are road? And then, added to the static uncertainties, are the moving ones. How fast are the objects moving, and in what direction?
When you inventory all the objects and variables that must be accounted for, you begin to comprehend what a huge domain it is to test. How do you test it all? That is where some of the new cloud-based scalable testing environments will have to be developed. The software platforms must be tested with a realistic, high-fidelity model of the system, as well as all the scenarios that it may be placed in. And then you must test and test and test all the different scenarios.
And you must do all this for each iteration of every product you design.
The act of designing an automobile is an orchestration of many iterative processes conducted to very tight deadlines. Not only must you test the design of your autonomous control system to those scenarios, but you must also retest them every time you have an iteration in your design. That is extremely challenging in terms of driving the testing computational requirements. How long does it take to run even one test? And can you get through most of your scenarios and test your entire system in a reasonable time? All of these complications and complexities must be resolved.
Being able to extend and scale the test platforms into virtual space and some of the computational services will be necessary to really do a good job of testing. That will be one of the key enablers for achieving all this. And then, ultimately, you're going to have to run the car and test it the way some of the car companies plan to: run it on the road with a safety driver, and keep running different scenarios to build a better understanding of how the vehicle behaves. You have to possess at least reasonable confidence that you've tested many of the scenarios. You’ll never get to the point where you can say that you have tested 100% of all conceivable scenarios, but you can accomplish much better coverage than is typically achieved today.
There are a lot of press reports about autonomous systems being right around the corner. Those reports have sounded that way for ten years now, “It’s just right around the corner.” Yet, as some developers begin to comprehend what it would take to really be robust over a variety of operating domains, they rightfully conclude that there is much more work to be done. A huge challenge remains to get the industry as a whole to recognize all that must still be done to achieve the true functional safety required for fully autonomous operation.
There will always be surprises, as there always are in complex engineering projects. But I see a future with improved sensing technology for better assessment of the environment, integrated with AI to form a perception solution. Couple that capability with the technology and bandwidth to test and validate all of this, and you will have the kind of key technologies that will be required to move forward and get a robust autonomous product.
Many clever solutions are being developed to chip away at this mountain of challenges. At least one company, even if they're not running autopilot, is recording all their sensors, making an assessment of what the autopilot would do, and then comparing it against what the driver actually did. That is an ingenious way to get testing for minimal cost. All your customers are driving, and you are acquiring that data and comparing it to what your autopilot would do. That is a pretty awesome way to do it.
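The shadow-mode comparison described here can be pictured as logging the inactive autopilot's hypothetical command alongside the driver's actual command at each timestep, and flagging large disagreements for offline analysis. This is a hypothetical sketch; the function name and the 2-degree tolerance are assumptions, not details from any manufacturer's system.

```python
def shadow_mode_disagreements(log, steering_tolerance_deg=2.0):
    """Find timesteps where the shadow autopilot and the driver disagreed.

    log: list of (autopilot_cmd_deg, driver_cmd_deg) pairs, one per timestep.
    Returns the indices where the commands differ by more than the
    tolerance -- the interesting cases worth uploading for analysis.
    """
    return [i for i, (auto, human) in enumerate(log)
            if abs(auto - human) > steering_tolerance_deg]
```

Most timesteps match closely and can be discarded on the vehicle; only the disagreements, the cheapest and most informative test data, need to leave the car.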
Clever as that one solution is, it only partially solves “what should have happened,” versus “what needs to happen now.” There remains a need for real-time computation, and questions remain about the capability to use off-board algorithms to make decisions for the control system. Theoretically, and in more modest scenarios, it can be done. In many locations, real-time computation could be done via the Internet, but there will be places and times where the necessary connection won’t be there or will be lagging. So we are exploring whether the computational workload needs to be on board; almost all of it, at least the critical control, the near-term trajectory control such as your next five seconds, probably does. Much of the longer-term computation could be in the cloud: navigation, understanding obstacles, changes in the road and the weather. All those things certainly can be done in the cloud, with the vehicle informed as the data are processed and communicated back to it.
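The on-board/cloud split by time horizon can be sketched as a simple placement rule. The five-second near-term horizon comes from the discussion above; the task names and deadlines are invented purely for illustration.

```python
# Assumption from the text: anything affecting roughly the next five
# seconds of trajectory must be computable without a network connection.
ONBOARD_CUTOFF_S = 5.0

# Hypothetical tasks with illustrative response deadlines, in seconds
TASK_DEADLINES_S = {
    "trajectory_control": 0.05,   # tight control loop: on board
    "obstacle_tracking": 0.1,     # perception update: on board
    "route_navigation": 30.0,     # re-routing can tolerate latency: cloud
    "weather_updates": 300.0,     # slow-changing context: cloud
}

def placement(deadline_s, onboard_cutoff_s=ONBOARD_CUTOFF_S):
    """Place a task on board if its deadline is inside the critical horizon."""
    return "onboard" if deadline_s < onboard_cutoff_s else "cloud"
```

Under this rule, the control and perception loops stay on the vehicle, while navigation and environmental context can run off-board and be streamed back when connectivity allows.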
The other part of the overall communication and computational workload solution will be to offload some of that computational burden to processors embedded in the surrounding infrastructure, using vehicle-to-ground and vehicle-to-vehicle communications.
When you look at operating an autonomous vehicle, and you design something that will make this vehicle completely capable, with its own sensing, the problem becomes much easier if you offload some of the computational burden. Get all vehicles talking to each other, so that each of them is communicating their intent, rather than making every other vehicle guess it. Make them communicate their direction, whether they want to turn, etc. All that makes the assessment burden for each vehicle lighter. It can focus on how it wants to react to the known near-term intent of the other vehicles around it, and decide where it needs to maneuver, what's the safe trajectory. All that becomes much simpler, if you know what the other vehicles around you are doing and are planning to do, and they are constantly telling you. The surrounding infrastructure can play a huge role in supporting those communications.
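An intent broadcast of the kind described above can be sketched as a small message structure plus a trivial rule for reacting to it. The fields and the crossing-threat heuristic are illustrative assumptions; real V2V messages follow standards such as the SAE J2735 Basic Safety Message.

```python
from dataclasses import dataclass
from enum import Enum

class Maneuver(Enum):
    KEEP_LANE = "keep_lane"
    TURN_LEFT = "turn_left"
    TURN_RIGHT = "turn_right"
    STOP = "stop"

@dataclass
class IntentMessage:
    """A hypothetical broadcast of a vehicle's near-term intent."""
    vehicle_id: str
    heading_deg: float    # current direction of travel
    speed_mps: float
    maneuver: Maneuver    # what the vehicle declares it will do next
    horizon_s: float      # how far ahead the declared plan applies

def is_crossing_threat(own_heading_deg, msg, angle_tolerance_deg=30.0):
    """Toy rule: a vehicle declaring a turn while oriented roughly
    perpendicular to us deserves extra attention from the planner."""
    diff = abs(own_heading_deg - msg.heading_deg) % 360.0
    diff = min(diff, 360.0 - diff)            # smallest angle between headings
    turning = msg.maneuver in (Maneuver.TURN_LEFT, Maneuver.TURN_RIGHT)
    return turning and abs(diff - 90.0) <= angle_tolerance_deg
```

The point of the sketch is the one the interview makes: once intent is declared rather than guessed, each vehicle's assessment reduces to simple checks against known plans instead of inference from raw sensor data.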
In addition, the road, the route, or other parts of the infrastructure can talk to the vehicle and inform it about a variety of things, like speed limits, the curvature of the road, the condition of the road, and possible anomalies such as construction or objects. If the vehicle is told that information, it increases the vehicle’s ability to place high confidence in its trajectory or path plan.
All of this involves the infrastructure of knowledge and communication between the vehicle and the road. But then there's also the option of building dedicated lanes and dedicated pathways in the roadways that are reserved for autonomous vehicles. Of course, this is a huge enabler to deploying autonomous vehicles. And I think that will likely be the way it pans out.
You will have a dedicated lane like you have a commuting lane in some big cities. Maybe entrance to that lane is limited access; you hit your button to have the system take control, and it's able to safely and efficiently navigate into the dedicated lanes. When it is time to exit, then the driver would have to take over. I think that kind of limited operational design domain, where you have autonomy within a certain type of operation, offers a high confidence solution. Something like that is likely, and I think we're there in terms of the technology, as long as the dedicated lanes actually get built.
The concept is not just limited to automobiles or trucks. Other candidates include equipment within distribution centers, factories, and vehicles like yard spotters that are in an enclosed environment where there's no other traffic. They can start to manage the movement of trailers in a distribution center.
Examples like those, and many more, are very good applications for autonomous technology. The improvements can be envisioned, and the potential benefit can be grasped. Navigating the transition period will be a challenge, but the potential payoff in safety and lives saved cannot be ignored.