How do you regulate AI?

haydn.thompson Friday September 7, 2018

Computer programs now match human performance in areas such as image processing, while in tasks such as calculation and memory, computers have long exceeded human capabilities. There is a general assumption that deep learning could bring similar progress in other tasks that deal with unstructured data sets (e.g. data analytics or “big data”). Going beyond this, AI has the capability to improve control functions, and two aspects need to be considered more carefully:

In general, it is very difficult to prove that an AI algorithm will behave within a given regulatory, technical, ethical or legal constraint. Additional technically imposed limitations might therefore be required to ensure this; these would also help to build customer trust and acceptance.
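One way to picture such a technically imposed limitation is a simple "guard" that sits between an opaque learned controller and the outside world. The sketch below is purely illustrative: the names (`Action`, `ai_controller`, `SPEED_LIMIT`) and numbers are invented assumptions, not part of any real system. The point is that while the learned policy itself may be impossible to verify, a guard this small can be checked by inspection or formal methods.

```python
# Hypothetical sketch: a runtime "safety envelope" around an AI controller.
# We cannot easily prove the learned policy respects a constraint, so a
# separately verifiable guard checks every proposed action before it acts.
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

SPEED_LIMIT = 50.0  # an example regulatory constraint (units arbitrary)

@dataclass
class Action:
    speed: float

def ai_controller(sensor_reading: float) -> Action:
    # Stand-in for an opaque learned policy whose behaviour we cannot prove.
    return Action(speed=sensor_reading * 1.5)

def guarded(action: Action) -> Action:
    # Small enough to verify: never let the proposed speed exceed the limit.
    return Action(speed=min(action.speed, SPEED_LIMIT))

proposed = ai_controller(60.0)  # policy proposes speed 90.0
safe = guarded(proposed)        # guard clamps it to the 50.0 limit
print(safe.speed)
```

Whatever the learned controller proposes, the guard bounds the behaviour that actually reaches the world, which is the kind of limitation a regulator could meaningfully certify.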

However, there are some specific challenging scenarios that need to be considered. For instance, imagine an intelligent building management system that knows how many people are in each sector of a building. A fire breaks out and is spreading faster than people can evacuate. Does the intelligent system seal off the part of the building that is on fire, preventing some people from escaping but allowing a greater number to escape overall and limiting further damage to the building? Or should it always follow the rule that, in the event of a fire, all escape routes are left open, regardless of the cost in human life or the damage to the building? Either choice corresponds to explicit logic statements built into the system, and the consequences of those statements only become clear when they are exercised.

So regulation for AI is needed such that the logic statements used are subject to tests of how AI systems respond in a real-world environment. Of course, this may raise challenging ethical questions that will require extensive debate and supporting regulation to make clear which decisions are acceptable to society.
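As a toy illustration of how the fire-scenario logic statements could be made testable, the sketch below encodes both rules against a deliberately simplified evacuation model. Everything here is an invented assumption (the section names, occupant counts, and the "half escape if the fire spreads" rule); a real system would need a calibrated model, and the ethical question of which rule to adopt remains for society, not the code.

```python
# Toy evacuation model (all numbers and rules invented for illustration):
# - people in a sealed-off section cannot escape;
# - if the burning section is NOT sealed off, the fire spreads and only
#   half of the people in every open section get out in time.

def expected_escapees(occupants, burning, closed):
    """Estimate how many people escape under a given closure decision."""
    if burning <= closed:
        # Fire contained: everyone outside the sealed sections escapes.
        return sum(n for s, n in occupants.items() if s not in closed)
    # Fire spreads: only half of each open section escapes.
    return sum(n // 2 for s, n in occupants.items() if s not in closed)

occupants = {"A": 6, "B": 40}  # people per section
burning = {"A"}                # fire has started in section A

# Rule 1: always keep every escape route open.
keep_open = expected_escapees(occupants, burning, closed=set())
# Rule 2: seal off the burning section to contain the fire.
seal_fire = expected_escapees(occupants, burning, closed={"A"})

print(keep_open, seal_fire)
```

Under these invented numbers the two rules give different outcomes, which is exactly why such logic statements need to be tested and debated before systems embodying them are deployed.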