Explainable AI

The performance gains of modern machine learning models can largely be attributed to deep learning, which makes it possible to train highly complex neural networks with many thousands, or even billions, of free parameters. This yields very powerful models. However, such complex models also have a decisive disadvantage: the model's output, and the path it takes from input to result, is initially incomprehensible to humans. This becomes particularly important when the result has a decisive influence on an action, for example when a machine learning model is used to diagnose a disease whose treatment has far-reaching consequences for the patient. In such a case, both the practitioner and the patient require an explanation.

It must also be possible to derive an explanation for the output when results are unexpected or contradictory. Only then can these powerful models show their full strength, because they may, for example, deliver unexpected results that a human would not have arrived at in the same way.

During development itself, an explanation of the decision-making process is invaluable for tracing and debugging machine learning models. If a network misbehaves, possible causes can be identified much more quickly.

An explanation of the network's behavior can also help to increase confidence in a machine learning model, as it becomes possible to understand which areas of the input data the network bases its decision on. For example, if an image classifier considers only the background of an image when classifying it, it is clear that the network will not perform the actual task reliably.
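To illustrate what such an explanation can look like, the following sketch computes a simple gradient-based saliency map for an image classifier. It is a minimal example, assuming a pretrained PyTorch ResNet-18 and a placeholder image file "input.jpg"; it is not one of our specific methods, only a common baseline technique for visualizing which pixels influence a prediction.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Minimal sketch: vanilla gradient saliency for an image classifier.
# The pretrained ResNet-18 and the file "input.jpg" are placeholders.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Gradient of the predicted class score with respect to the input pixels.
scores = model(image)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()

# The saliency map highlights the pixels that most influence the prediction;
# if it concentrates on the background, the model relies on spurious cues.
saliency = image.grad.abs().max(dim=1)[0].squeeze(0)
print(saliency.shape)  # (224, 224) heat map over the input image
```

Overlaying such a map on the input image makes it directly visible whether the classifier attends to the object of interest or to irrelevant regions such as the background.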

We are therefore working on methods that provide a certain degree of explainability, so that the trustworthiness and reliability of machine learning systems can be ensured.
