Safety, security and availability concerns for advanced CPS

martint Wednesday April 11, 2018

The increasing use of CPS leads to “sophisticated technology in everyone’s hands”. This use and deployment of applications beyond closed industrial settings deserves special attention, since it has important implications for the risks of systems on which we increasingly depend, for our perceptions of those risks, and for decision making about them. The risks, and our perceptions of them, are also likely to evolve as part of the ongoing technological shift.

Essentially, the increasing capabilities of CPS are mirrored by the introduction of new and/or changing risks. Examples are seen with automation (e.g. automated vehicles), connectivity (e.g. connected critical infrastructures) and new forms of collaboration (such as between humans and robots in manufacturing, letting robots out of their fences).

Consider, for example, connectivity and the electrical grid, where we are seeing a shift towards heterogeneous and distributed power supplies and new services, all relying on connectivity. Already today, many households have connected electricity meters with web access to data on their electricity consumption, and the meters can be upgraded by the operators. This obviously introduces new cybersecurity risks, with potential further implications such as unavailability of power.

While all these developments are driven by business cases and by opportunities to improve functionality and cost-efficiency, a number of concerns become increasingly emphasized.

All these concerns relate to an increase in overall complexity (as discussed in a previous posting). What are the implications of these concerns – i.e. of the increasing security risks, uncertainty, availability requirements, and the evolving nature of systems and requirements?

One takeaway is that we need to engage in a debate on how these advanced CPS should behave – what the requirements should be. It is interesting to note the debate on data access and privacy that is now taking place, especially concerning Facebook. When lives are at stake, we would like to have these discussions earlier rather than later.

A further takeaway is that future CPS will have to be engineered and maintained to be trustworthy – and the effort to accomplish this needs to be prioritized upfront. As a consequence, system architectures need to be robust, yet also flexible and adaptable, so that systems can deal with failures and attacks and can also be adjusted and improved.

It is well known that risks have to be closely monitored during the entire life-cycle of (safety/mission) critical systems. However, as opposed to traditional CPS, the level of uncertainty and the high availability requirements will require us to break new ground. Risks will increasingly have to be addressed operationally, by providing abilities to detect, reason about and deal with risks (such as security threats and failures) as they occur.
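As an illustration only, this kind of operational risk handling – detect a risk, reason about its severity, and react while the system is running – could be sketched roughly as follows. All names, severity levels and the "safe mode" fallback are invented for the example and not taken from any particular CPS platform:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A runtime observation, e.g. a suspicious network message or a sensor fault."""
    source: str    # hypothetical sources: "network", "sensor", ...
    severity: int  # hypothetical scale: 0 (info) .. 3 (critical)

@dataclass
class RiskMonitor:
    """Toy runtime risk monitor: a real CPS would combine many signals
    with certified mitigation logic; this only shows the control flow."""
    critical_threshold: int = 3
    mode: str = "normal"
    log: list = field(default_factory=list)

    def observe(self, event: Event) -> str:
        self.log.append(event)  # keep operational data for later analysis
        # React operationally: fall back to a degraded but safe mode
        # when a critical event (attack, failure) is detected.
        if event.severity >= self.critical_threshold:
            self.mode = "safe"
        return self.mode

monitor = RiskMonitor()
monitor.observe(Event("sensor", severity=1))   # low severity: mode stays "normal"
monitor.observe(Event("network", severity=3))  # critical: mode becomes "safe"
print(monitor.mode)
```

The point of the sketch is the shape of the loop – observe, assess, degrade gracefully – not the (deliberately trivial) threshold logic.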

It will thus be necessary to gather data about actual operation and feed it back into development to adjust the CPS appropriately (accounting for newly learnt risks, faults and vulnerabilities, adjusting trade-offs between security, availability and safety, etc.). All these concerns will drive towards extended DevOps for CPS (as introduced in a previous posting).
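The feedback half of such a DevOps loop can be pictured as a simple aggregation step. This is a hypothetical sketch with invented record fields, showing how operational incident data might be summarized so that development can prioritize which risks and faults to address in the next iteration:

```python
from collections import Counter

def summarize_incidents(events):
    """Aggregate operational incident records into per-category counts.

    Each record is assumed to be a dict with a 'category' field
    (e.g. "security", "failure") – purely illustrative field names,
    standing in for whatever telemetry a real CPS would export.
    """
    return Counter(e["category"] for e in events)

# Hypothetical data collected during operation:
ops_data = [
    {"category": "security"},
    {"category": "failure"},
    {"category": "security"},
]
# Development sees that security incidents currently dominate (2 vs. 1),
# which could drive the next adjustment of the security/availability trade-off.
print(summarize_incidents(ops_data))
```

In practice the interesting part is what sits around this step – collection pipelines, triage, and feeding the conclusions into requirements and architecture – but the principle is the same: operational data condensed into something development can act on.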