
Overview of existing approaches for the interpretation of machine learning models

  • In recent years, machine learning methods have become firmly established in society, and their use continues to grow. A key challenge is their limited, often near-nonexistent interpretability. The aim of this paper is to survey the possibilities for interpreting machine learning models. Novel mechanisms and procedures from the emerging field of interpretable machine learning are presented. In a two-part analysis, intrinsically interpretable machine learning methods and established post-hoc interpretation methods are examined in detail, with a focus on their functionality, properties, and boundary conditions. Finally, a use case demonstrates how post-hoc interpretation methods can contribute to the explainability of an image classifier and systematically provide new insights into a model.

Metadata
Author:Akif Cinar
URN:urn:nbn:de:bsz:753-opus4-8323
Document Type:Article
Language:English
Year of Completion:2019
Publishing Institution:Hochschule Esslingen
Release Date:2020/01/29
Tag:interpretability; interpretable machine learning; intrinsically interpretable machine learning; post-hoc interpretation methods; traceability
First Page:1
Last Page:11
Institutes:Faculties / Information Technology
DDC class:000 Generalities, computer science, information science / 000 Generalities, science / 004 Computer science
Open Access:freely available
Licence (German):Veröffentlichungsvertrag ohne Print-on-Demand (publication contract without print-on-demand)