The increasing use of CPS puts “sophisticated technology in everyone’s hands”. This deployment of applications beyond closed industrial settings deserves special attention, since it has important implications for the risks related to systems on which we increasingly depend, for our perceptions of those risks, and for decision making relating to them. The risks, and our perceptions of them, are also likely to evolve as part of the ongoing technological shift.
Essentially, the increasing capabilities of CPS are mirrored by the introduction of new and changing risks. Examples are seen with automation (e.g. automated vehicles), connectivity (e.g. connected critical infrastructures) and new forms of collaboration (such as between humans and robots in manufacturing, letting robots out of their fences).
For example, consider connectivity and the electrical grid, where we are seeing a shift towards heterogeneous and distributed power supplies and new services, all relying on connectivity. Already today, many households have connected electricity meters with web access to data on their electricity consumption, and the meters can be upgraded remotely by the operators. This obviously leads to new cybersecurity risks, with potential further implications such as the unavailability of power.
While all these developments are driven by business cases and opportunities to improve functionality and cost-efficiency, there are a number of concerns that are becoming increasingly prominent:
- Wider cone of uncertainty at system deployment (release) time. Releasing CPS into complex environments (beyond protective realms) means that the number of unknowns, including unforeseen usages and ways things can go wrong, increases dramatically. As a consequence, the so-called cone of uncertainty (what is known about the systems and their environments at design and release time) widens compared to traditional CPS.
- Increased cyber-security threats. Opening up CPS directly creates new potential attack surfaces (forming part of the wider cone of uncertainty). Security risks have to be considered carefully and weighed against the opportunities of connectivity. Cyber-security needs to be considered as part of CPS architecting, in particular to ensure end-to-end security. As security mechanisms grow stronger, humans in the loop are likely to represent the main vulnerability.
- Unavailability of service. When CPS are applied to critical infrastructures and services that are becoming part of our everyday life, our reliance on them will increase. As opposed to many traditional safety-critical systems, there may not be a safe state for shutting the system down if something goes wrong. While we may stop trains, cars, electricity, water, etc. temporarily, the longer the outage, the larger the societal costs and implications.
- Perception of risk. Our perceptions of risks are not always rational; compare, for example, our perception of the risk of driving vs. flying. For many of the new CPS applications it is quite unclear what our perception of risk will be. For example, in automated driving, how safe should a highly automated vehicle be, or rather, how much better than human-driven vehicles should the automated vehicle perform? The requirements are not defined as of now, not even minimum ones, and yet automated vehicles are soon to hit the market.
- Transient effects and socio-technical implications. Introducing new CPS will, at least initially, lead to mixtures of modern and existing (old) technology, such as automated and traditional cars. People will have to learn and understand the behavior of “the new”, and developers will have to deal with mixed systems. On a longer time-scale, people’s behaviors are likely to change and to affect the functioning of entire systems.
All these concerns relate to an increase in overall complexity (as discussed in a previous posting). What are the implications of these concerns, i.e. of the increasing security risks, uncertainty, availability requirements, and the evolving nature of systems and requirements?
One takeaway is that we need to engage in a debate on how these advanced CPS should behave, that is, what the requirements should be. It is interesting to note the debate on data access and privacy now taking place, especially concerning Facebook. When lives are at stake, we would like to have these discussions earlier rather than later.
A further takeaway is that future CPS will have to be engineered and maintained to be trustworthy, and that the effort to accomplish this needs to be prioritized upfront. As a consequence, system architectures need to be robust, yet also flexible and adaptable, so that systems can deal with failures and attacks and can also be adjusted and improved.
It is well known that risks have to be closely monitored during the entire life-cycle of (safety/mission) critical systems. However, as opposed to traditional CPS, the level of uncertainty and the high availability requirements will require breaking new ground. Risks will increasingly have to be addressed operationally, by providing abilities to detect, reason about and deal with risks (such as security threats and failures) as they occur.
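The detect–reason–deal-with cycle described above can be sketched in code. The following is a minimal, purely illustrative Python sketch (all names, detector logic and responses are hypothetical assumptions, not an actual CPS implementation): detectors inspect runtime readings, and the assessed severity of each detected risk decides the mitigation.

```python
# Hypothetical sketch of an operational risk-monitoring loop for a CPS:
# detect risks at runtime, reason about their severity, trigger a response.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List, Optional


class Severity(Enum):
    LOW = 1
    HIGH = 2
    CRITICAL = 3


@dataclass
class Risk:
    name: str
    severity: Severity


def monitor_step(readings: Dict[str, int],
                 detectors: List[Callable[[Dict[str, int]], Optional[Risk]]]) -> List[str]:
    """Run all detectors on the current readings and collect mitigation actions."""
    actions = []
    for detect in detectors:
        risk = detect(readings)
        if risk is None:
            continue
        # Reason about the risk: its severity decides the operational response.
        if risk.severity is Severity.CRITICAL:
            actions.append(f"enter degraded mode ({risk.name})")
        elif risk.severity is Severity.HIGH:
            actions.append(f"alert operator ({risk.name})")
        else:
            actions.append(f"log for later analysis ({risk.name})")
    return actions


# Example detector (assumed): flags unexpected network peers as a security threat.
def unexpected_connection(readings: Dict[str, int]) -> Optional[Risk]:
    if readings.get("unknown_peers", 0) > 0:
        return Risk("unexpected network peer", Severity.HIGH)
    return None


actions = monitor_step({"unknown_peers": 2}, [unexpected_connection])
```

In a real system the detectors would of course draw on intrusion detection, health monitoring and diagnosis functions, and the responses would be tied into the system architecture discussed above.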
It will thus be necessary to gather data about actual operation and feed it back into development to adjust the CPS appropriately (incorporating newly learnt risks, faults and vulnerabilities, adjusting trade-offs between security, availability and safety, etc.). All these concerns drive towards extended DevOps for CPS (as introduced in a previous posting).
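The feedback path from operations into development can also be sketched. The snippet below is a deliberately simplified, hypothetical Python example (the incident classes and prioritization rule are assumptions for illustration): incident reports collected in the field are aggregated so that the most frequently observed failure and vulnerability classes are prioritized for the next development iteration.

```python
# Hypothetical sketch of the operations-to-development feedback loop:
# aggregate field incident reports and rank incident classes by frequency,
# so development effort is directed at what actually goes wrong in operation.
from collections import Counter
from typing import List, Tuple


def prioritize_for_next_release(incident_log: List[str]) -> List[Tuple[str, int]]:
    """Return incident classes ordered by how often they occurred in operation."""
    return Counter(incident_log).most_common()


# Assumed example log gathered from deployed systems:
log = ["sensor dropout", "auth failure", "sensor dropout",
       "sensor dropout", "auth failure"]
backlog = prioritize_for_next_release(log)
# backlog[0] holds the most frequently observed class → top development priority
```

Frequency is only one possible criterion; a real DevOps pipeline for CPS would also weigh severity and the safety/security/availability trade-offs mentioned above.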