Commentary: Why Engineers Need Explainable AI
Johanna Pingel, product marketing manager at MathWorks, a developer of software for mathematical computation, explains which methods make AI explainable and what it takes for trust in AI models to grow.
Decision-makers in companies across all industries increasingly rely on AI to remain competitive today and in the future. Yet mistrust of "black box" AI persists: the models and the paths by which they reach a solution are often no longer comprehensible to humans. This becomes a challenge when engineers have to explain how their models work, for example when a model is subject to regulation or when potential buyers need to be convinced.
What is explainable AI?
This is where explainable AI comes into play: a set of tools and methods that help engineers understand AI model decisions and detect and address problems with black-box models, such as bias or susceptibility to manipulation. Explainability is essential when engineers need to prove that a model meets certain standards, such as ISO standards. But it is also about increasing confidence in AI models in general.
Explainability helps AI users understand how machine learning models arrive at their predictions, for example by tracking which input parameters influence a model's decision and how strong that influence is. With complex models, however, this is not easy.
Complexity vs. explainability
This raises the question: why not simply use simpler AI models? Models have not always been complex. Take a thermostat that controls the temperature in a room: if the temperature drops below a preset value, the heating turns on; if it rises above that value, the heating turns off again. But what about parameters such as the time of day, office occupancy, electricity prices, or the weather forecast? How high should the heating be set to be truly efficient and sustainable? Just as modern building temperature control takes many more parameters into account, models that are far more detailed, and therefore far more complex, have become established in many fields.
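To make the contrast concrete, here is a minimal sketch in Python. The simple thermostat is a one-line rule anyone can read; the modern controller is a learned model whose reasoning over many inputs is no longer obvious. The `comfort_model` regressor and its feature list are hypothetical, chosen purely for illustration.

```python
from sklearn.ensemble import RandomForestRegressor

# Fully explainable: a single threshold rule with a hysteresis band.
def simple_thermostat(temperature, setpoint=21.0, band=0.5):
    return temperature < setpoint - band  # True = heating on

# A modern controller learns from many inputs; hypothetical features
# for illustration: [temperature, hour_of_day, occupancy, price, forecast].
comfort_model = RandomForestRegressor(n_estimators=100)
# comfort_model.fit(X_history, y_heating_level)   # trained on logged data
# comfort_model.predict([[19.5, 8, 12, 0.31, 17.0]])
# Why did it choose that heating level? The answer is buried in 100 trees.
```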
Such complex models have the advantage that they usually make more accurate predictions, enabling more precise analyses that in some cases also answer questions faster. In addition, engineers are working with increasingly complex data, such as streamed signals and images, which AI models can process directly, saving valuable time during model development.
Yet for all these benefits, complexity increasingly means that models are no longer understood. Engineers must therefore adopt new approaches to regain insight into even complex models and to be able to follow their calculations.
Methods for explainable AI
Using inherently explainable models can provide valuable insights without adding steps to the workflow. With decision trees or linear models, for example, it is immediately apparent which features influence the model's decision and how.
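As a minimal sketch of this idea in Python with scikit-learn (not tied to any particular MathWorks tooling), a shallow decision tree trained on the classic Iris dataset can be printed as a readable rule set:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree is explainable by construction:
# the rules it has learned can be printed and read directly.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
# Output is a human-readable rule set, e.g.
# |--- petal width (cm) <= 0.80  ->  class: 0
```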
There are also dedicated methods for understanding the influence of individual features on a decision. Feature ranking reveals which features have the greatest influence on a decision; a follow-up step is to check whether a feature's influence changes as it takes on different values.
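One common way to rank features, sketched here with scikit-learn as an assumed toolchain, is permutation importance: shuffle one feature at a time and measure how much the model's score drops. Partial dependence then shows how a single feature's value affects the prediction.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, partial_dependence

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Feature ranking: shuffling an important feature hurts the score most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
for i in ranking[:3]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

# Does a feature's influence change with its value? Partial dependence
# traces the average prediction as one feature sweeps through its range.
pd_result = partial_dependence(model, X, features=[X.columns[0]])
```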
Another method is LIME (Local Interpretable Model-Agnostic Explanations). It approximates a complex, opaque model in the vicinity of a particular data point with a simpler, explainable surrogate model that produces similar results locally. This makes it possible to find out which predictors most influence the decision at that point.
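A minimal sketch using the open-source `lime` package, one popular implementation of the technique; the random-forest model and the Iris data are placeholders standing in for a real black-box model:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

# Fit a simple local surrogate around one data point and report
# which features drove the black-box prediction there.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, local weight) pairs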
But how can nonlinear dependencies among the input data be detected? For this, engineers can use Shapley values, which estimate how much each input of a machine learning model contributes to its result.
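The open-source `shap` package is one widely used implementation of this idea; here is a minimal sketch with an assumed random-forest regressor on a sample dataset:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shapley values attribute each individual prediction to the input
# features, capturing interactions that a global ranking would miss.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one contribution per feature/sample
print(shap_values.shape)  # (n_samples, n_features)
```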
When creating models for image processing or computer vision applications, visualizations are among the best ways to assess a model's explainability. Methods such as Grad-CAM and occlusion sensitivity, for example, can identify the regions of an image or text that most influence the model's decision.
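Occlusion sensitivity in particular is simple enough to sketch from scratch: slide a blanking patch across the image and record how much the model's confidence drops at each position. The `toy_predict` scorer below is a hypothetical stand-in for a real classifier's class score.

```python
import numpy as np

def occlusion_sensitivity(image, predict, patch=8, stride=8, fill=0.0):
    """Slide a blanking patch over the image and record how the model's
    confidence drops: large drops mark regions the decision relies on."""
    h, w = image.shape[:2]
    baseline = predict(image)
    heatmap = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heatmap[i, j] = baseline - predict(occluded)
    return heatmap

# Toy stand-in for a real classifier's class score (hypothetical):
toy_predict = lambda img: float(img[24:40, 24:40].mean())
heatmap = occlusion_sensitivity(np.random.rand(64, 64), toy_predict)
```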
AI beyond explainability
To use explainable AI successfully, engineers and scientists must also be aware of the challenges that come with it. Balancing explainability, complexity, the influence of the input data, and trust in models is not easy. And it must be clear that explaining a black box, and thereby gaining the trust of decision-makers or regulators, is only one step on the journey toward the safe use of AI.
Using AI in practice requires models that can be understood, that were created through a transparent process, and that can operate at the level required for safety-critical and sensitive applications. Here, experts rely on verification and validation to ensure that a model used in safety-critical applications meets minimum standards, or they define safety certifications for fields such as automotive or aerospace. Engineers have many tools and options at their fingertips to increase confidence in AI; they should not stop at explainability.
Conclusion: Explainable AI as a cog in an overall system
Without question, explainability will be a major focus of AI in the future. The more AI is integrated into safety-critical and everyday applications, the more explainability will be regarded as an indispensable attribute of AI models. And everyone benefits: engineers gain better insight into their models and can find and fix errors faster; they can demonstrate in a comprehensible way how their models meet certain standards; and this greater transparency builds confidence among decision-makers and potential customers alike.
Nevertheless, engineers, subject-matter experts, and business decision-makers should not forget that explainability is just one cog in a larger clockwork: it must mesh with other important methods, tools, and regulations.
Source and further information: https://ch.mathworks.com/de/