The potential of AI in risk management
The development of artificial intelligence (AI) is progressing at great speed. Numerous breakthroughs have been achieved in recent years, particularly in machine vision, natural language processing and strategy games. Ultimately, AI is a general-purpose technology with the potential to change virtually all areas of life. Its use also offers a wide range of opportunities in risk management.
The current AI boom, which began a good five years ago, is primarily due to three developments: first, cheaper computing power; second, larger data sets; and third, deep-learning algorithms that use many intermediate layers between input data and results. This has led to significant breakthroughs in machine vision (e.g., superhuman performance in object recognition and skin cancer classification), natural language processing (e.g., human parity in speech recognition, English-Chinese translation, and the GLUE text comprehension benchmark), and strategy games (e.g., superhuman performance in Go, poker, and Dota 2), among other areas.
As a general-purpose technology, AI can be expected to greatly change, if not revolutionize, numerous economic sectors and policy areas in the coming years. This is because AI enables a wide range of complementary innovations, such as autonomous vehicles, unmanned aerial vehicles and industrial robots, and thus holds considerable application potential in all major industries. The following section outlines some of the key opportunities and challenges that arise for risk management as AI applications become more pervasive.
What opportunities for risk management?
In the coming years, we can expect AI applications to be used in all phases of risk management, from risk prevention to crisis management. AI can already make an important contribution to hazard prevention and avoidance: in critical infrastructure protection, for example, machine learning can be used for predictive maintenance, inspections, and the visual detection of infrastructure damage.
For example, machine learning has been used to predict which water pipes in Sydney are at high risk of failure, or where building inspections in US cities are most likely to be worthwhile. Likewise, various studies have used machine learning to detect and quantify corrosion or small cracks in concrete and steel structures. Such methods could soon be used in the inspection of nuclear power plants, roads, bridges and buildings.
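To make this concrete, the following is a minimal sketch of how such a failure-prediction model might be set up. The pipe attributes, the synthetic data and the choice of a scikit-learn gradient-boosting classifier are illustrative assumptions, not a description of the Sydney study.

```python
# Hypothetical sketch: ranking water pipes by predicted failure risk.
# Features and data are invented; a real project would use historical
# inspection and failure records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic pipe attributes: age (years), diameter (mm),
# soil corrosivity index, number of past repairs.
X = np.column_stack([
    rng.uniform(0, 100, n),
    rng.choice([100, 150, 200, 300], n),
    rng.uniform(0, 1, n),
    rng.poisson(1.0, n),
])
# Synthetic label: older, more corroded, frequently repaired pipes fail more often.
risk = 0.02 * X[:, 0] + 1.5 * X[:, 2] + 0.5 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank pipes by predicted failure probability so inspections target the riskiest first.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```

In practice, the value of such a model lies less in the classifier itself than in prioritization: limited inspection budgets are directed to the assets with the highest predicted risk.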
AI also promises more precise and, above all, faster processes in risk analysis and early detection. Because expert-driven risk analysis, as it prevails today, is very resource-intensive, it can usually only be carried out at longer intervals. AI supports a shift away from this subjective, expert-driven approach towards machine-based processes. Such approaches are used, on the one hand, in the modelling of complex, longer-term challenges such as climate change. On the other hand, machine learning and weather data can be used, for example, to update flood or landslide prediction models on a daily, hourly or even real-time basis in order to optimize early warning systems.
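As a rough illustration of the real-time updating idea, the sketch below incrementally refits a simple regression model each time a new hourly weather observation arrives and issues a warning when the forecast exceeds a threshold. The features, the alert level and the use of scikit-learn's SGDRegressor are assumptions made purely for illustration.

```python
# Hedged sketch: updating a flood early-warning model as new weather data arrives.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=0.01)
rng = np.random.default_rng(1)

ALERT_LEVEL = 4.0  # hypothetical river level (m) above which a warning is issued

for hour in range(48):  # simulate an hourly data feed
    rainfall = rng.uniform(0, 20)          # mm in the last hour
    soil_moisture = rng.uniform(0.1, 0.9)  # saturation fraction
    x = np.array([[rainfall, soil_moisture]])
    observed_level = 0.15 * rainfall + 2.0 * soil_moisture + rng.normal(0, 0.1)

    # Incrementally refit on the latest observation, then forecast the next step.
    model.partial_fit(x, [observed_level])
    forecast = model.predict(x)[0]
    status = "ALERT" if forecast > ALERT_LEVEL else "ok"
    print(f"hour {hour:02d}: forecast river level {forecast:.2f} m ({status})")
```

The point of the pattern is that the model never has to be rebuilt from scratch; each new observation refines it, which is what makes hourly or real-time early warning feasible.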
Also used in cybersecurity
Advances in machine vision support situational awareness and, in particular, the surveillance of critical infrastructure. Among other things, intelligent security systems can recognize biometric characteristics, emotions, human actions and atypical behavior in a surveillance area. They also allow video footage to be searched automatically for objects or people within a given time period based on specific features such as size, gender, or clothing color. Similarly, machine learning can be used to detect anomalies and intrusions in cybersecurity (a minimal sketch of this idea follows below).

Last but not least, AI can also support crisis management, for example by using machine learning to automatically extract the local extent of damage and support needs from social media posts. The success of AI in strategy games suggests that it could also be used in the future to support decision-making in crisis management. In the longer term, there is further potential in the field of resilience engineering: AI could be used, for example, to build generic adaptive capacity into important infrastructure systems and thus help them adjust to changing environmental conditions.
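The anomaly-detection idea mentioned above can be illustrated in a few lines of code. The sketch below trains an Isolation Forest on features of "normal" network flows and flags deviating traffic; the feature set and the synthetic data are assumptions, not a reference implementation.

```python
# Illustrative sketch of anomaly-based intrusion detection with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Synthetic "normal" traffic: bytes transferred, connection duration (s), packets/s.
normal = np.column_stack([
    rng.normal(5000, 1500, 2000),
    rng.normal(2.0, 0.5, 2000),
    rng.normal(50, 10, 2000),
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one typical flow and one that resembles data exfiltration.
new_flows = np.array([
    [5200, 2.1, 48],
    [250000, 30.0, 900],
])
print(detector.predict(new_flows))  # +1 = looks normal, -1 = flagged as anomalous
```

Because the detector only learns what "normal" looks like, it can flag previously unseen attack patterns, which is precisely its appeal over signature-based intrusion detection.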
Risks and challenges
Even though AI is an extremely dynamic field and very high expectations are often placed in this technology, certain limits will remain in its application for the foreseeable future. AI systems are heavily dependent on the quality and quantity of data: biases present in the training data are later reflected in the models' predictions. Likewise, although AI systems capture statistical correlations from enormous amounts of data, they do not yet have an understanding of causal relationships. And where data are absent or very sparse, as with emerging and future technological risks, current AI cannot match human expertise.
In addition, the widespread use of AI systems entails new risks, especially when algorithms support or make momentous decisions, such as in medicine, transport, financial markets or critical infrastructure. In such cases, compliance with fairness, accuracy and robustness criteria must be ensured, for instance by monitoring how a network weights different inputs when making decisions, so that it meets ethical standards and does not discriminate on the basis of origin or gender. Another danger that needs to be prevented, especially in markets, is cascading interactions between algorithms, as in the 2010 "flash crash" (a sudden, sharp price plunge) on Wall Street. AI systems are also vulnerable to so-called "adversarial examples": manipulated images or physical objects designed to deliberately confuse the AI. For example, researchers at the Massachusetts Institute of Technology 3D-printed a plastic turtle that Google's object recognition AI consistently classified as a rifle. Another US research team attached inconspicuous stickers to a stop sign so that the vision systems of (semi-)autonomous vehicles classified it as a speed limit sign.
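To show how little it can take to fool a model, here is a toy, self-contained version of the underlying attack idea (a fast-gradient-sign-style perturbation), applied to a hand-rolled logistic regression rather than an image classifier. All data, parameters and the perturbation size are invented for illustration.

```python
# Toy adversarial example: a small input perturbation flips the prediction.
import numpy as np

rng = np.random.default_rng(3)

# Two-class toy data: class 0 clustered around (-1, -1), class 1 around (+1, +1).
X = np.vstack([rng.normal(-1, 0.3, (200, 2)), rng.normal(1, 0.3, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Train a logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

x = np.array([0.2, 0.1])  # a point the model assigns to class 1

# FGSM-style step: move the input along the sign of the loss gradient w.r.t. x.
# For the true label 1, dLoss/dx = (p - 1) * w.
p = 1 / (1 + np.exp(-(x @ w + b)))
x_adv = x + 0.25 * np.sign((p - 1) * w)

print("original prediction:", predict(x), "-> adversarial prediction:", predict(x_adv))
```

Real attacks on image classifiers work on the same principle, but spread an even smaller perturbation across thousands of pixels, which is why the manipulated images or objects look unremarkable to humans.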
Conclusion
AI is a general-purpose technology and is also making increasing inroads into risk management. The practice of risk analysis and monitoring, for example, continues to change with advances in machine vision and natural language processing. At the same time, today's AI systems should not be overestimated. Forecasting extreme events with AI, for instance, is often difficult due to a lack of training data. The rapid and not always linear development of AI makes it difficult to realistically estimate future AI capacities, and there is no expert consensus on the time frame in which "strong AI" could become a reality. AI is advanced statistics: it is not inherently neutral, nor does it currently possess a human-like understanding of concepts. Public and private actors should therefore invest first and foremost in the training and skills of their employees so that they can properly train, use, and assess AI tools. Finally, the transformative potential of AI systems in many areas also means that policymakers need to pay more attention to them. In February, for example, the new EU Commission presented its white paper on AI, which envisages legally binding requirements for high-risk applications such as medical decisions or biometric identification. In Switzerland, the interdepartmental AI working group presented its report in December; it considers the current legislation to be sufficient but highlights a need for clarification in the areas of international law, public opinion-formation and administration.