Computer programs now reach human-level performance in areas such as image processing, and in tasks such as calculation or memory they have already exceeded human capabilities. There is a general assumption that deep learning could bring important progress in other tasks that deal with unstructured data sets (e.g. data analytics or “big data”). Going beyond this, AI has the capability to improve control functions, and two aspects need to be considered more carefully:
- Use of AI for safety-critical control strategies – Formal or even systematic verification of an AI algorithm is nearly impossible, because the training set strongly influences the performance and behaviour of the algorithm. The use of AI in safety-critical systems (e.g., object recognition for autonomous driving) therefore needs to be assessed very carefully, so that it can be demonstrated that no critical failure will occur.
- Use of AI on private data – Similarly, the behaviour of the algorithm is very difficult to predict, and it is consequently very difficult to guarantee that a given regulation will be respected in every situation.
In general, it is very difficult to prove that an AI algorithm will remain within a given regulatory, technical, ethical or legal constraint. Additional technically imposed limitations may therefore be required to ensure this; these would also help to support customer trust and acceptance.
However, there are some specific challenging scenarios that need to be considered. For instance, imagine an intelligent building management system that can determine how many people are in different sectors of the building. A fire breaks out and spreads more rapidly than the rate at which people are evacuating. Does the intelligent system close off the part of the building that is on fire, preventing some people from escaping but allowing a greater number to escape overall and limiting further damage to the building? Or should it follow a fixed rule that in the event of a fire all escape routes remain available, whatever the overall loss of life or damage to the building? These choices can be expressed as logic statements (a minimal code sketch follows the list below) that, for example:
- Minimise fatalities and casualties using a scoring system that weights fatalities against different injury types.
- Minimise the overall cost impact, considering damage to the building alongside monetary values assigned to lives and injuries.
- Never take positive action that may endanger a life, even if this potentially results in a higher number of injuries and fatalities.
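To make the tension between these rules concrete, the following sketch (in Python, with entirely hypothetical class names, outcome estimates and weights; none of this reflects a real building management system) encodes each rule as an interchangeable scoring policy and applies all three to the scenario above:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted consequences of one candidate action (hypothetical estimates)."""
    fatalities: int
    severe_injuries: int
    minor_injuries: int
    building_damage: float  # monetary damage, arbitrary currency units
    endangers_life: bool    # True if the action itself actively puts anyone at risk

# Rule 1: minimise a casualty score, weighting fatalities against injury types.
# The weights are purely illustrative; choosing them is itself an ethical decision.
def casualty_score(o):
    return 100.0 * o.fatalities + 10.0 * o.severe_injuries + 1.0 * o.minor_injuries

# Rule 2: minimise overall cost, with assumed monetary values for lives and injuries.
def cost_score(o):
    VALUE_OF_LIFE = 1_000_000.0
    VALUE_OF_SEVERE_INJURY = 100_000.0
    VALUE_OF_MINOR_INJURY = 5_000.0
    return (o.building_damage
            + VALUE_OF_LIFE * o.fatalities
            + VALUE_OF_SEVERE_INJURY * o.severe_injuries
            + VALUE_OF_MINOR_INJURY * o.minor_injuries)

# Rule 3: a hard constraint - never take positive action that endangers a life.
def permitted(o):
    return not o.endangers_life

def choose_action(options, score, hard_constraint=False):
    """Pick the lowest-scoring action, optionally filtered by the hard constraint."""
    candidates = options
    if hard_constraint:
        allowed = {name: o for name, o in options.items() if permitted(o)}
        candidates = allowed or options  # if nothing is permitted, consider all
    return min(candidates, key=lambda name: score(candidates[name]))

# Hypothetical predictions for the fire scenario described above.
options = {
    "close_burning_section": Outcome(2, 1, 5, 1_000_000.0, endangers_life=True),
    "keep_all_routes_open":  Outcome(5, 8, 20, 5_000_000.0, endangers_life=False),
}

print(choose_action(options, casualty_score))                        # close_burning_section
print(choose_action(options, casualty_score, hard_constraint=True))  # keep_all_routes_open
print(choose_action(options, cost_score))                            # close_burning_section
```

Even in this toy form the policies disagree: the casualty- and cost-based rules close the burning section, while the hard constraint forbids doing so. Deciding which behaviour is acceptable is exactly the question that testing and regulation would have to settle.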
Regulation for AI is therefore needed so that such logic statements are subject to tests of how AI systems respond in a real-world environment. Of course, this raises challenging ethical questions that will require extensive debate and supporting regulation to make clear which decisions are acceptable to society.