Automated vehicles represent a very interesting class of Cyber-Physical Systems, and a domain that is right now pushing the limits of CPS technology. I am writing this posting the evening after what is presumably the first accident in which a pedestrian was killed by an automated test vehicle (despite a supervising person being present).
Most of the topics I have touched upon, such as the challenges, opportunities and complexity facets of CPS, can be very well illustrated with automated driving. An automated vehicle (at high levels of automation, roughly SAE Level 3 and above) will need to be able to carry out tasks such as:
- Understanding complex and varying driving environments (roads, signs, debris, people, other vehicles, etc.)
- Understanding where the ego-vehicle is positioned within such environments
- Taking decisions on what to do in the short and longer term
These complex environments must be mirrored by correspondingly sophisticated perception, mapping, planning and control systems.
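The perceive–localize–plan structure of the tasks listed above can be sketched as a minimal sense-plan-act loop. This is a hypothetical toy, not any real driving stack: all class names, function names and thresholds below are invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Toy sketch of one tick of an automated-driving pipeline. Real systems
# involve sensor fusion, HD maps, probabilistic tracking and far more;
# this only mirrors the structure of the three tasks listed above.

@dataclass
class Observation:
    kind: str          # e.g. "pedestrian", "vehicle", "sign"
    distance_m: float  # distance from the ego vehicle

def perceive(raw_detections: List[dict]) -> List[Observation]:
    """Task 1: turn raw sensor detections into a structured world model."""
    return [Observation(d["kind"], d["distance_m"]) for d in raw_detections]

def localize(gps_fix: Tuple[float, float],
             map_origin: Tuple[float, float]) -> Tuple[float, float]:
    """Task 2: estimate the ego position relative to a map frame."""
    return (gps_fix[0] - map_origin[0], gps_fix[1] - map_origin[1])

def plan(observations: List[Observation]) -> str:
    """Task 3: short-term decision; slow down if a pedestrian is close."""
    for obs in observations:
        if obs.kind == "pedestrian" and obs.distance_m < 30.0:
            return "brake"
    return "cruise"

# One tick of the loop on invented data.
world = perceive([{"kind": "pedestrian", "distance_m": 12.0}])
pose = localize((10.0, 5.0), (0.0, 0.0))
action = plan(world)
```

Even this caricature hints at the complexity problem: each of the three functions hides a hard, open research area, and the safety of the whole loop depends on all of them at once.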
Key challenges in developing such automated systems include the following:
- Dealing with unexpected driving scenarios. While industry will likely do its best, it will not be possible to provide exhaustive coverage of driving scenarios. There will be limitations in the training of machine learning systems, and sometimes also overfitting. Today's AI (machine learning) systems lack the generalization power of humans, and it is hard to reason about their robustness.
- Dealing with uncertainty in perception and world understanding – for example, regarding the intent of pedestrians and other vehicles on the road.
- Dealing with faults in the ego vehicle (sensors, computation, software and hardware).
Automated driving is pushing the limits of existing technologies and methodologies. A key resulting challenge is that there is no established best practice for how to build such systems. Traditional safety-critical systems mandate appropriate risk reduction, the use of simplicity for safety-critical parts, and heavy redundancy when safety requires availability (as in aircraft control systems).
The understanding of these concepts (risk, simplicity, heavy redundancy) in the context of high levels of automated driving is insufficient, and current investigations suggest that these best practices are at best only partly applicable. For example, safety still requires a proper understanding of the environment and of the actions of the ego-vehicle, so the complexity of the perception problem remains. Active safety systems may help to some extent, but carry the risk of undesired activations of braking/stopping (false positives), which in turn may lead to hazards and accidents.
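The trade-off behind undesired activations can be made concrete as a threshold choice on a detector's confidence score: lowering the threshold catches more real hazards but also fires on noise; raising it does the opposite. The function and all numbers below are invented purely for illustration.

```python
# Toy illustration of the false-positive / false-negative trade-off in an
# automatic braking trigger. Confidence scores and thresholds are made up.

def should_brake(obstacle_confidence: float, threshold: float) -> bool:
    """Trigger braking when detector confidence exceeds the threshold."""
    return obstacle_confidence >= threshold

# Two detections: a real hazard scored 0.4 and sensor noise scored 0.3.
detections = (0.4, 0.3)

# A lenient threshold brakes for both: the real hazard is caught, but the
# noise also triggers braking (false positive) - itself a hazard, e.g. a
# rear-end collision on a highway.
lenient = [should_brake(c, threshold=0.25) for c in detections]

# A strict threshold ignores the noise but also misses the real hazard
# (false negative) - the vehicle fails to brake when it should.
strict = [should_brake(c, threshold=0.50) for c in detections]
```

No threshold setting removes both failure modes here; only better perception (better-separated confidence scores) does, which is why the complexity of the perception problem cannot be engineered away by the trigger logic alone.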
While the automotive industry is pushing hard and getting closer to introducing automated vehicles for special business cases such as robot taxis, legislation, safety standards and guidelines are sorely lagging behind. Since automated driving means designing systems that break new ground, it is not so strange that these lag. It does, however, mean that there is no solid understanding of how to build such systems; standards and guidelines can be developed only once such an understanding exists. Current functional safety standards were conceived long before the era of automated driving, and thus cannot represent best practices for dealing with such complex systems.
The market drivers are very strong and will persist, but accidents may strike back at the industry. Caution in testing and introduction is highly advisable! And it is clear that we need to accelerate the development of methods, theory and practices for automated driving.
Even if automated vehicles (eventually) perform much better than humans, there will unfortunately still be accidents. A final challenge thus concerns how society perceives the behavior of automated vehicles - going well beyond technology!