Publications of Andreas Holzinger > Scholar, DBLP, ORCID

Andreas Holzinger has built a track record in AI/machine learning (see definition) for health informatics. He has been working on integrated machine learning, manifested in his HCI-KDD approach. This approach is based on a synergistic combination of two different fields in order to understand context, and is now highly relevant to what is called explainable AI and transparent machine learning: Human–Computer Interaction (HCI), rooted in cognitive science and dealing particularly with human intelligence, and Knowledge Discovery/Data Mining (KDD), rooted in computer science and dealing particularly with artificial intelligence. This approach is the basis for Human-Centered AI (HCAI) in general and for explainability and causability [1] in particular. Andreas has pioneered the interactive machine learning approach with a human-in-the-loop. He proved this concept with his glass-box approach, which is now becoming important due to the rising ethical, social, and legal issues governed by the European Union. It is important to make decisions transparent, retraceable, and human-interpretable, so as to explain why a machine decision has been made. The "why" is often more important than the classification result itself.

Subject: Computer Science > Artificial Intelligence (102001)
Technical Area: Machine Learning (102019)
Application Area: Health Informatics (102020)
Keywords: Human-Centered AI, Explainable AI, ethically responsible Machine Learning, interactive Machine Learning (iML)
ORCID-ID: http://orcid.org/0000-0002-6786-5194

Publication metrics as of 25.07.2019 17:00 MST:

Google Scholar citations: 11,405
Google Scholar h-Index: 49
Google Scholar i10-Index: 227
Scopus h-Index: 34
Scopus citations: 5,292
Scopus authored papers: 315
DBLP peer-reviewed conference papers: 179
DBLP peer-reviewed journal papers: 73
DBLP edited books and journal issues: 46
DBLP peer-reviewed book chapters: 23
DBLP total: 336
ArXiV contributions: 14

1) 3-pages-CV-Andreas-HOLZINGER (pdf, 323 kB)

2a) Publications-last-five-years-Andreas-Holzinger (pdf, 163 kB)

2b) Selected 10 original journal publications of the last 5 years (pdf, 9 kB)

2c) Selected 5 original contributions with comments (pdf, 75 kB)

3) 5-pages research statement (pdf, 170 kB)

4) 5-pages teaching statement (pdf, 607 kB)

5) 9-minutes Youtube Video Research Statement

[1] Important: The notion of causability is differentiated from explainability in that causability is a property of a human (natural intelligence), while explainability is a property of a technological system (artificial intelligence). Interpretability for humans requires efficient human-AI interaction, i.e. a mapping of explainable AI onto human understanding.


KANDINSKY Patterns as Intelligence Test for machines

Andreas Holzinger, Michael Kickmeier-Rust & Heimo Mueller 2019. KANDINSKY Patterns as IQ-Test for machine learning. Springer Lecture Notes LNCS 11713. Cham (CH): Springer Nature Switzerland, pp. 1-14, doi:10.1007/978-3-030-29726-8_1. In this paper we propose to use our Kandinsky Patterns as an IQ test for machines.
[Paper] [Conference Slides] [KANDINSKY Universe (exploration environment)]
The first publication on our KANDINSKY Universe

Heimo Müller & Andreas Holzinger 2019. Kandinsky Patterns. arXiv:1906.00657
KANDINSKY Figures and KANDINSKY Patterns are mathematically describable, simple, self-contained, hence controllable test data sets for the development, validation and training of explainability in artificial intelligence (AI) and machine learning (ML).
[Project Page] #KANDINSKYpatterns
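To illustrate what "mathematically describable, simple, self-contained" test data means here, the following is a minimal sketch (not the authors' code; all names and value ranges are illustrative assumptions) of how a KANDINSKY-style figure can be represented as a fully specified set of geometric objects, together with one example of a human-statable ground-truth concept that an explanation method could be asked to recover:

```python
import random

# Object vocabulary: each object in a figure is completely described
# by shape, color, size and position -- hence controllable test data.
SHAPES = ["circle", "square", "triangle"]
COLORS = ["red", "yellow", "blue"]

def random_figure(n_objects, seed=None):
    """Generate one KANDINSKY-style figure with n_objects objects,
    each placed on a unit canvas with a relative size."""
    rng = random.Random(seed)
    figure = []
    for _ in range(n_objects):
        figure.append({
            "shape": rng.choice(SHAPES),
            "color": rng.choice(COLORS),
            "size": rng.uniform(0.05, 0.25),  # relative to canvas
            "x": rng.uniform(0.0, 1.0),
            "y": rng.uniform(0.0, 1.0),
        })
    return figure

def all_same_color(figure):
    """A simple ground-truth concept ('all objects share one color'):
    because the rule is explicit, a generated set of positive and
    negative figures has a known explanation by construction."""
    return len({obj["color"] for obj in figure}) == 1
```

Because every figure is generated from an explicit rule, such data sets give explainability methods a known ground truth to validate against, which is the property the paper emphasizes.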


Causability and Explainability of Artificial Intelligence in Medicine

Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Mueller 2019. Causability and Explainability of AI in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, doi:10.1002/widm.1312. Future human-AI interfaces must map causability, which is a property of a human, to explainability, which is a property of the AI. This requires a ground truth, i.e. a framework for understanding (see the KANDINSKY Universe).