Intelligent Transport Systems (ITS)

AI

haydn.thompson Friday August 10, 2018

There are many controversial statements being made about AI. For instance:

  • Elon Musk - AI poses a “fundamental existential risk for human civilisation,” adding that the technology is “the scariest problem.”
  • Stephen Hawking - AI is “either the best, or the worst thing, ever to happen to humanity.”
  • Bill Gates - cannot understand “why some people are not concerned about AI.”
  • Mark Zuckerberg - “This will be a serious year of self-improvement and I’m looking forward to learning from working to fix our issues together.”


So it is clear that there are concerns. A key problem is that AI raises many ethical issues. Ethics need to be considered both in the design of AI and in how it responds to certain situations. New systems are providing much easier access to data, including private data, so information must be handled properly, particularly where ethical issues arise. Algorithms are also increasingly used in sensitive processes, e.g. banking and security. This brings many benefits in terms of efficiency and performance, but are these systems acting on our behalf, and what exactly are they doing?

So there is a need for transparency about what AI algorithms are doing, and also an understanding of the learning base used. It is being advocated that, in future, organisations designing AI systems should employ ethicists and openly publish the decision-making rationale behind their AI systems. This would allow the public and government to judge the ethics of the system. However, would businesses be prepared to publish the commercial secrets of how their algorithms work?

We are heading towards a future where AI is likely to be ubiquitous, and as a consequence ethical issues will have a greater impact. We therefore need to ensure appropriate awareness and better training at all levels within industry, for engineers, sales and management staff alike.