Today, systems based on machine learning (commonly referred to as "AI systems") are increasingly used in important and safety-critical application areas. Machine learning models are becoming more powerful and already outperform humans in certain tasks. This creates a degree of dependency: such processes may be not only faster but also more accurate, and possibly no longer fully controllable by humans.
Particular attention must therefore be paid to the trustworthiness of these systems. By this we mean:
- Reliability and robustness: the system must work correctly in every situation, or recognize situations in which it cannot provide a reliable output
- Transparency: the system must be transparent about its capabilities and limits, as well as about the training data used
- Explainability: the system's output must be comprehensible to a human; it must be possible to explain it to the user and to third parties.
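The first aspect, recognizing inputs for which no reliable output can be given, is often realized as selective prediction (classification with a reject option). A minimal sketch, assuming a model that outputs class probabilities; the confidence threshold is an illustrative assumption, not a prescribed value:

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.8):
    """Return the predicted class index, or None (abstain) when the
    top-class probability falls below the confidence threshold."""
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return None  # flag the input for human review instead of guessing
    return top

# A confident prediction is returned ...
print(predict_with_abstention(np.array([0.05, 0.90, 0.05])))  # -> 1
# ... while an ambiguous input triggers abstention.
print(predict_with_abstention(np.array([0.40, 0.35, 0.25])))  # -> None
```

In practice the abstention rule can be more elaborate (e.g. calibrated probabilities or out-of-distribution detectors), but the principle is the same: the system signals its own limits rather than emitting an unreliable answer.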
These aspects are central to our research and matter more to us than the raw performance of machine learning systems.