
Overview of existing approaches for the interpretation of machine learning models

  • In recent years, machine learning methods have become firmly established in society, and their use continues to grow. A key challenge is their limited, often near-nonexistent, interpretability. The aim of this paper is to survey the possibilities for interpreting machine learning models. It presents the novel mechanisms and procedures of the emerging field of interpretable machine learning. In a two-part analysis, intrinsically interpretable machine learning methods and established post-hoc interpretation methods are examined in detail, with a focus on their functionality, properties, and boundary conditions. Finally, a use case demonstrates how post-hoc interpretation methods can contribute to the explainability of an image classifier and systematically provide new insights into a model.



Author: Akif Cinar
Document Type: Article
Year of Completion: 2019
Publishing Institution: Hochschule Esslingen
Release Date: 2020/01/29
Tags: interpretability; interpretable machine learning; intrinsically interpretable machine learning; post-hoc interpretation methods; traceability
First Page: 1
Last Page: 11
DDC classes: 000 General works, computer science, information science / 000 General works, science / 004 Computer science
Open Access?: freely available (frei verfügbar)
Licence (German): Publication contract without print-on-demand (Veröffentlichungsvertrag ohne Print-on-Demand)