Publications of Andreas Holzinger > Scholar, DBLP, ORCID

Andreas Holzinger has built a solid track record in AI/machine learning (see definition) for health informatics, always with a strong focus on research-based teaching. Andreas has been working on integrated machine learning, which is manifested in his HCI-KDD approach. This approach is based on a synergistic combination of two different fields to understand intelligence and enable context-adaptive systems: Human–Computer Interaction (HCI), rooted in cognitive science and particularly concerned with human intelligence, and Knowledge Discovery/Data Mining (KDD), rooted in computer science and particularly concerned with artificial intelligence. It forms the basis for Human-Centered AI (HCAI) in general, and for Explainability and Causability [1] in particular. Notably, Andreas has pioneered the interactive machine learning approach with a human-in-the-loop. He proved this concept with his glass-box approach, which is now becoming important due to the rising ethical, social, and legal issues addressed by the European Union. It is important to make decisions transparent, retraceable, and human-interpretable, so as to explain why a machine decision has been made. "The why" is often more important than the classification result itself.

Subject: Computer Science > Artificial Intelligence (102001)
Technical Area: Machine Learning (102019)
Application Area: Health Informatics (102020)
Keywords: Human-Centered AI (HC-AI), Explainable Artificial Intelligence (explainable AI, ex-AI), interactive Machine Learning (iML)
ORCID-ID: http://orcid.org/0000-0002-6786-5194

Publication metrics as of 25.06.2019 17:00 CET:

Google Scholar citations: 11,108
Google Scholar h-Index: 48
Google Scholar i10-Index: 224
Scopus h-Index = 33
Scopus citations = 5143
Scopus authored papers = 315
DBLP Peer-reviewed conference papers = 173
DBLP Peer-reviewed journal papers = 72
DBLP Edited books and Journal issues = 43
DBLP Peer-reviewed book chapters = 23
DBLP TOTAL = 325
arXiv contributions: 13

1) 3-pages-CV-Andreas-HOLZINGER (pdf, 323 kB)

2a) Publications-last-five-years-Andreas-Holzinger (pdf, 163 kB)

2b) Selected 10 original journal publications of the last 5 years (pdf, 9 kB)

2c) Selected 5 original contributions with comments (pdf, 75 kB)

3) 5-pages research statement (pdf, 170 kB)

4) 5-pages teaching statement (pdf, 607 kB)

5) 9-minute YouTube Video Research Statement

[1] Important: The notion of Causability is differentiated from Explainability in that Causability is a property of a human (natural intelligence), while Explainability is a property of a technological system (artificial intelligence). Interpretability for humans requires efficient human–AI interaction, i.e. a mapping of the explainable AI onto human understanding.


Selected recent publications:

Andreas Holzinger, Michael Kickmeier-Rust & Heimo Mueller 2019. KANDINSKY Patterns as IQ-Test for machine learning. Springer Lecture Notes LNCS 11713. Cham (CH): Springer Nature Switzerland, pp. 1-14, doi:10.1007/978-3-030-29726-8_1.
[Paper] [Slides]

Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger 2018. Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21 (GOEBEL – HOLZINGER (2018) Explainable-AI-the-new-42)


Heimo Müller & Andreas Holzinger 2019. Kandinsky Patterns. arXiv:1906.00657