In recent years, machine learning methods have taken a firm place in society, and their use continues to grow. A key challenge is their limited, often nearly nonexistent, interpretability. The aim of this paper is to survey the possibilities for interpreting machine learning models. Novel mechanisms and procedures from the emerging field of interpretable machine learning are presented. In a two-part analysis, intrinsically interpretable machine learning methods and established post-hoc interpretation methods are examined in more detail, with a focus on their functionality, properties, and boundary conditions. Finally, a use case demonstrates how post-hoc interpretation methods can contribute to the explainability of an image classifier and systematically provide new insights into a model.
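To give a concrete flavor of the post-hoc interpretation methods mentioned above, the following is a minimal sketch of occlusion sensitivity, one common model-agnostic technique for image classifiers: a patch is slid over the input, and the drop in the model's score when that region is hidden is recorded as that region's attribution. The `toy_predict` classifier and all parameter choices here are illustrative assumptions, not taken from the paper's use case.

```python
import numpy as np

def occlusion_sensitivity(image, predict, patch=4, baseline=0.0):
    """Post-hoc occlusion map: slide a patch over the image and record
    how much the model's score drops when that region is replaced by a
    baseline value. Higher values mean the region mattered more."""
    h, w = image.shape
    base_score = predict(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = base_score - predict(occluded)
    return heatmap

# Toy stand-in for a trained classifier (hypothetical): it scores an
# image by the mean brightness of its top-left quadrant only.
def toy_predict(img):
    return float(img[:8, :8].mean())

img = np.zeros((16, 16))
img[:8, :8] = 1.0  # the evidence this toy model actually relies on
hm = occlusion_sensitivity(img, toy_predict)
```

In this sketch, patches inside the top-left quadrant receive positive attribution and all others receive zero, exposing which input region drives the toy model's decision; applied to a real image classifier, the same procedure yields the kind of heatmap explanation discussed in the paper.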