Publications of Andreas Holzinger

Andreas Holzinger works on data-driven Artificial Intelligence (AI) and machine learning (ML), motivated by efforts to improve human health. Andreas pioneered interactive ML with the human-in-the-loop, paving the way towards explainability *) and causability. Andreas promotes a synergistic approach of Human-Centered AI (HCAI) that keeps the human in control of AI in order to align it with human values, privacy, security and safety.

*) Note: I am aware that humans are not fully “explainable” and can be unpredictable over time, and that in certain situations we do not even need explanations; however, in the medical context it sometimes makes sense to enable a domain expert to ask why an AI made a decision, and what would happen if (“what if?”).

Subject: Computer Science > Artificial Intelligence (ÖFOS 10 2001)
Technical Area: Machine Learning (ÖFOS 10 2019)
Application Area: Health Informatics (ÖFOS 10 2020), Cancer Research (ÖFOS 30 1904)
Keywords: Human-Centered AI, Explainable AI, interactive Machine Learning (iML), Decision Support Systems, medical AI
ORCID-ID: http://orcid.org/0000-0002-6786-5194

Publication metrics as of 15.08.2020

Google Scholar citations: 13,556, Google Scholar h-Index: 55
Scopus h-Index = 37, Scopus citations = 6183
DBLP Peer-reviewed conference papers c = 188, Peer-reviewed journal papers j = 79

In Causability we develop measures for the quality of explanations produced by explainable-AI methods (e.g. Layer-Wise Relevance Propagation, which merely generates a relevance heatmap). This is important to enable (human) medical experts not only to re-trace a result, but to understand why an algorithm came to a certain result or why a result had a certain error rate (“if you know where your errors come from”). This requires a contextual understanding, which can be fostered by bringing in a human-in-the-loop, and new human-AI interfaces that let domain experts ask causal questions and counterfactuals (“what-if” questions). Critics of this approach, who prefer autonomous AI deployment, have asked and continue to ask what the human-in-the-loop is actually supposed to do. My concrete answer: the human-in-the-loop brings human experience and conceptual knowledge into AI processes – something that the best AI algorithms on this planet currently lack, but which is indispensable precisely in medicine. Supportive underlying structural principles are graphical structures (graphs/networks).


Artificial Intelligence and Machine Learning for Digital Pathology

Andreas Holzinger, Randy Goebel, Michael Mengel & Heimo Müller (eds.) 2020. Artificial Intelligence and Machine Learning for Digital Pathology: State-of-the-Art and Future Challenges, Springer Lecture Notes in Artificial Intelligence Volume 12090, doi:10.1007/978-3-030-50402-1

Data-driven AI and ML in digital pathology, radiology and dermatology are promising. In specific cases Deep Learning even exceeds human performance; however, in the context of medicine it is important for a human expert to verify the outcome. There is a need for transparency and re-traceability of state-of-the-art solutions to make them usable for ethically responsible medical decision support. Moreover, big data is required for training, covering a wide spectrum of human diseases in different organ systems. These data sets must meet top-quality and regulatory criteria and must be well annotated for ML at patient, sample, and image level. Here biobanks play a central role, now and in the future, in providing large collections of high-quality, well-annotated samples and data. The main challenges are finding biobanks containing “fit-for-purpose” samples, providing quality-related meta-data, gaining access to standardized medical data, and mass scanning of whole slides, including efficient data management solutions.


Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations.

Andreas Holzinger, Andre Carrington & Heimo Müller 2020. Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations. KI – Künstliche Intelligenz (German Journal of Artificial Intelligence), Special Issue on Interactive Machine Learning, edited by Kristian Kersting, TU Darmstadt, 34, (2), doi:10.1007/s13218-020-00636-z. Online available via https://link.springer.com/article/10.1007/s13218-020-00636-z

In this paper we introduce the System Causability Scale (SCS) to measure the quality of explanations. It is based on the notion of Causability (Holzinger et al., 2019) combined with concepts adapted from the widely accepted System Usability Scale (SUS). In the same way as usability measures the quality of use, Causability measures the quality of explanations. [xAI-Project] [Scholar]
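
As a minimal illustration of how such a scale could be scored, the sketch below assumes SUS-style scoring (ten Likert items rated 1–5, normalized to the unit interval); the ratings are made up for the example and the function is not the published SCS instrument.

```python
# Sketch of scoring an SCS-style questionnaire.
# Assumption: SUS-style scoring with 10 items rated on a 1-5 Likert scale,
# normalized so that 1.0 means full agreement with all statements.
# The ratings below are made up; this is not the published SCS instrument.

def scs_style_score(ratings, n_items=10, scale_max=5):
    """Return a normalized causability-style score in [0, 1] from Likert ratings."""
    if len(ratings) != n_items:
        raise ValueError(f"expected {n_items} ratings, got {len(ratings)}")
    if any(not 1 <= r <= scale_max for r in ratings):
        raise ValueError(f"ratings must lie on the 1..{scale_max} Likert scale")
    return sum(ratings) / (n_items * scale_max)

# Example: a medical expert rates an explanation (e.g. a relevance heatmap)
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
print(f"SCS-style score: {scs_style_score(ratings):.2f}")  # 38/50 = 0.76
```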


Causability and Explainability of Artificial Intelligence in Medicine

Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Mueller 2019. Causability and Explainability of AI in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, doi:10.1002/widm.1312 

In this paper we introduce the notion of Causability, which extends explainability and is of great importance for future human-AI interfaces (see our paper on dialogue systems for intelligent user interfaces). Such interfaces for explainable AI have to map the technical explainability (which is a property of an AI, e.g. the heatmap of a neural network produced by, e.g., layer-wise relevance propagation) to causability (which is a property of a human, i.e. the extent to which the technical explanation is interpretable by a human) and to answer questions of why we need a ground truth, i.e. a framework for understanding. Here counterfactuals P(y_x | x′, y′) are important, with the typical activity of “retrospection” and questions such as “what if?” [Systems Causability Scale] [Scholar]
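
For readability, the counterfactual above can be written out in the standard structural-causal-model notation (Pearl); this is textbook notation added here as a reminder, not a formula taken from the paper.

```latex
% Counterfactual query: the probability that the outcome would have been y,
% had the input been x, given that we actually observed input x' and outcome y':
%
%   P(y_x \mid x', y')
%
% It is evaluated in three steps:
%   1. Abduction:  update the distribution over exogenous factors u with the
%                  observed evidence, P(u) -> P(u \mid x', y')
%   2. Action:     modify the model by the intervention do(X = x)
%   3. Prediction: compute P(Y = y) in the modified model
P(y_x \mid x', y')
```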


KANDINSKY Patterns: A Swiss-Knife for the Study of Explainable AI

Andreas Holzinger, Peter Kieseberg & Heimo Müller 2020. KANDINSKY Patterns: A Swiss-Knife for the Study of Explainable AI. ERCIM News, (120), 41-42. [pdf, 755 KB] Online available: https://ercim-news.ercim.eu/en120/r-i/kandinsky-patterns-a-swiss-knife-for-the-study-of-explainable-ai

Kandinsky Patterns enable testing, benchmarking and evaluating machine learning algorithms under mathematically strictly controllable conditions, while at the same time being accessible and understandable for human observers, with the possibility to produce (and hide) a ground truth. This will be extremely important in the future, as adversarial examples have already demonstrated their potential for attacking security mechanisms applied in various domains, especially medical environments. Last but not least, Kandinsky Patterns can be used to produce “counterfactuals” – the “what if”, which is difficult to handle for both humans and machines – but can provide novel insights into the behaviour of explanation methods. [Project page]
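
To make the idea concrete, here is a toy sketch (not the project’s actual generator) of a Kandinsky-style figure as a small set of geometric objects together with a hidden ground-truth predicate that decides class membership; the shapes, colours and the rule are illustrative assumptions.

```python
# Toy sketch of a Kandinsky-style figure: a set of simple geometric objects,
# plus a hidden ground-truth predicate deciding whether the figure belongs to
# the pattern. Illustrative only -- not the project's actual generator.
import random

SHAPES = ["circle", "square", "triangle"]
COLORS = ["red", "blue", "yellow"]

def random_figure(n_objects=4):
    """A figure is a list of objects with shape, colour and position."""
    return [{"shape": random.choice(SHAPES),
             "color": random.choice(COLORS),
             "x": random.random(), "y": random.random()}
            for _ in range(n_objects)]

def ground_truth(figure):
    """Hidden rule (the 'ground truth'): the figure contains a red circle
    and no two objects share the same shape-colour combination."""
    combos = [(o["shape"], o["color"]) for o in figure]
    return ("circle", "red") in combos and len(set(combos)) == len(combos)

figures = [random_figure() for _ in range(1000)]
positives = sum(ground_truth(f) for f in figures)
print(f"{positives} of {len(figures)} random figures satisfy the hidden rule")
```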


A new concordant partial AUC and partial c statistic for imbalanced data in the evaluation of machine learning algorithms

André M. Carrington, Paul W. Fieguth, Hammad Qazi, Andreas Holzinger, Helen H. Chen, Franz Mayr & Douglas G. Manuel 2020. A new concordant partial AUC and partial c statistic for imbalanced data in the evaluation of machine learning algorithms. Springer/Nature BMC Medical Informatics and Decision Making, 20, (1), 4, doi:10.1186/s12911-019-1014-6.

In explainable AI a very important issue is the robustness of machine learning algorithms. For measuring robustness, we introduce a novel concordant partial Area Under the Curve (AUC) and a new partial c statistic for Receiver Operating Characteristic (ROC) data as foundational measures to help understand and explain ROC and AUC. Our partial measures are continuous and discrete versions of the same measure, are derived from the AUC and c statistic respectively, are validated as equal to each other, and validated as equal in summation to the whole measures where expected. [relevant for xAI]
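
As background for the measures discussed above, the sketch below checks the standard whole-curve identity that the paper’s partial measures build on: the trapezoidal AUC equals the c statistic computed by pairwise concordance. It is not an implementation of the new concordant partial AUC itself; the simulated scores are made up.

```python
# Sketch: the whole-curve identity underlying the partial measures -- the area
# under the ROC curve equals the c statistic (probability that a random positive
# is ranked above a random negative, ties counted as 1/2).
# This is NOT the new concordant partial AUC itself, just the standard measures.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                 # dummy labels
y_score = y_true * 0.8 + rng.normal(0, 0.7, size=200)  # noisy dummy scores

def c_statistic(y_true, y_score):
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # pairwise concordance over all positive-negative pairs
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print("AUC (trapezoidal):", round(roc_auc_score(y_true, y_score), 4))
print("c statistic      :", round(c_statistic(y_true, y_score), 4))  # equal
```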


KANDINSKY Patterns as Intelligence Test for machines

Andreas Holzinger, Michael Kickmeier-Rust & Heimo Mueller 2019. KANDINSKY Patterns as IQ-Test for machine learning. Springer Lecture Notes LNCS 11713. Cham (CH): Springer Nature Switzerland, pp. 1-14, doi:10.1007/978-3-030-29726-8_1.

AI follows the notion of human intelligence, which is not a clearly defined term; according to cognitive science it includes the abilities to think abstractly, to reason, and to solve problems in the real world. A hot topic in current AI/machine learning research is whether and to what extent algorithms are able to learn abstract thinking and reasoning in a similar way as humans do, or whether the learning remains limited to purely statistical correlations. In this paper we propose to use our Kandinsky Patterns as an IQ test for machines and to study concept learning, which is a fundamental problem for future AI/ML. [Paper] [exploration environment] [TEDx]


Dialogue Systems for Intelligent Human Computer Interactions

Erinc Merdivan, Deepika Singh, Sten Hanke & Andreas Holzinger 2019. Dialogue Systems for Intelligent Human Computer Interactions. Electronic Notes in Theoretical Computer Science, 343, 57-71, doi:10.1016/j.entcs.2019.04.010.

Online available via: https://www.sciencedirect.com/science/article/pii/S1571066119300106

In this paper we present some fundamentals of communication techniques for interaction in dialogues involving speech, gesture, and semantic and pragmatic knowledge, and present a new image-based method in an Out-Of-Vocabulary setting. The results show that treating the dialogue as an image performs well and helps the dialogue manager in expanding to out-of-vocabulary dialogue tasks, in comparison to Memory Networks. This is important for future human-AI interfaces. [relevant for xAI]


The first publication on our KANDINSKY Universe, the experimental environment for explainability and causability

Heimo Müller & Andreas Holzinger 2019. Kandinsky Patterns. arXiv:1906.00657

In the medical domain (e.g. histopathology) the ground truth is found in generally accepted textbooks, hence in the brain of the human pathologist, but is often not directly accessible. Here the KANDINSKY Figures and KANDINSKY Patterns come into play: they are mathematically and logically describable, simple, self-contained, and hence controllable test data sets for the development, validation and training of explainability/interpretability in artificial intelligence (AI) and machine learning (ML). While they possess these computationally manageable properties, they are at the same time easily distinguishable by human observers, and can therefore be described by both humans and algorithms. We invite the international machine learning research community to a challenge: experiment with our Kandinsky Patterns to expand, and thus make progress in, the field of explainable AI and to contribute to the upcoming field of explainability and causability. [Project Page]


Interactive machine learning: experimental evidence for the human in the algorithmic loop: A case study on Ant Colony Optimization

Andreas Holzinger, Markus Plass, Michael Kickmeier-Rust, Katharina Holzinger, Gloria Cerasela Crişan, Camelia-M. Pintea & Vasile Palade 2019. Interactive machine learning: experimental evidence for the human in the algorithmic loop. Applied Intelligence, 49, (7), 2401-2414, doi:10.1007/s10489-018-1361-5. Online available: https://link.springer.com/article/10.1007/s10489-018-1361-5

In this paper we provide novel experimental insights on how we can improve computational intelligence by complementing it with human intelligence in an interactive machine learning approach (iML). For this purpose, we used the Ant Colony Optimization (ACO) framework, because this fosters multi-agent approaches with human agents in the loop (see when we need a human-in-the-loop). We propose a unification between human intelligence and interaction abilities and the computing power of an artificial machine learning system. The “human-in-the-loop” brings in conceptual knowledge that no algorithm on this planet yet has.
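
For readers unfamiliar with ACO, the following is a minimal, self-contained sketch of the framework on a small travelling-salesman instance, with a simple hook where a human in the loop could adjust pheromone values between iterations; the parameters and the intervention hook are assumptions for illustration, not the experimental setup of the paper.

```python
# Minimal Ant Colony Optimization (ACO) sketch for a small TSP, with a hook where
# a "human in the loop" may adjust pheromone on edges between iterations.
# Illustrative assumptions only, not the paper's experimental setup.
import random, math

CITIES = [(random.random(), random.random()) for _ in range(12)]
N = len(CITIES)
dist = [[math.dist(CITIES[i], CITIES[j]) or 1e-9 for j in range(N)] for i in range(N)]
tau = [[1.0] * N for _ in range(N)]          # pheromone matrix
ALPHA, BETA, RHO, ANTS, ITER = 1.0, 2.0, 0.5, 20, 50

def build_tour():
    tour, unvisited = [0], set(range(1, N))
    while unvisited:
        i = tour[-1]
        weights = [(j, (tau[i][j] ** ALPHA) * ((1.0 / dist[i][j]) ** BETA))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.random() * total, 0.0
        for j, w in weights:                 # roulette-wheel selection
            acc += w
            if acc >= r:
                tour.append(j); unvisited.remove(j); break
    return tour

def length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % N]] for k in range(N))

def human_feedback(tau, best):
    """Hook for the human in the loop: here we simply reinforce the current
    best tour a little, standing in for expert pheromone adjustments."""
    for k in range(N):
        i, j = best[k], best[(k + 1) % N]
        tau[i][j] += 0.5; tau[j][i] += 0.5

best_tour, best_len = None, float("inf")
for _ in range(ITER):
    tours = [build_tour() for _ in range(ANTS)]
    for i in range(N):                       # pheromone evaporation
        for j in range(N):
            tau[i][j] *= (1 - RHO)
    for t in tours:                          # pheromone deposit
        L = length(t)
        if L < best_len:
            best_tour, best_len = t, L
        for k in range(N):
            a, b = t[k], t[(k + 1) % N]
            tau[a][b] += 1.0 / L; tau[b][a] += 1.0 / L
    human_feedback(tau, best_tour)

print("best tour length:", round(best_len, 3))
```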


Visual analytics for concept exploration in subspaces of patient groups: Making sense of complex datasets with the Doctor-in-the-loop

Michael Hund, Dominic Boehm, Werner Sturm, Michael Sedlmair, Tobias Schreck, Torsten Ullrich, Daniel A. Keim, Ljiljana Majnaric & Andreas Holzinger 2016. Visual analytics for concept exploration in subspaces of patient groups: Making sense of complex datasets with the Doctor-in-the-loop. Brain Informatics, 3, (4), 233-247, doi:10.1007/s40708-016-0043-5. Online available: https://braininformatics.springeropen.com/articles/10.1007/s40708-016-0043-5

In this paper, which is another proof for the human-in-the-loop concept, we present SubVIS, an interactive tool to visually explore subspace clusters from different perspectives, introduce a novel analysis workflow, and discuss future directions for high-dimensional (medical) data analysis and its visual exploration. [relevant for xAI]


Interactive Machine Learning (iML) for health informatics: When do we need the human-in-the-loop ?

Andreas Holzinger 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Brain Informatics, 3, (2), 119-131, doi:10.1007/s40708-016-0042-6. [Online available].

In this highly-cited paper we define iML as “algorithms interacting with agents, optimizing their learning behaviour, where the agents can also be human.” This “human-in-the-loop” (or a crowd of humans) can be beneficial in solving computationally hard problems, e.g. subspace clustering, protein folding, or k-anonymization, where human expertise can help to reduce an exponential search space through heuristic selection of samples in a glass-box approach. Ultimately, this fosters explainability and verifiability, because a human is able to re-trace and thus understand the underlying factors of why a certain decision has been made, and at the same time is able to re-enact and verify the results. [Scholar]


Biomedical image augmentation using Augmentor

Marcus D. Bloice, Peter M. Roth & Andreas Holzinger 2019. Biomedical image augmentation using Augmentor. Bioinformatics, 35, (21), Oxford Academic Press, 4522-4524, doi:10.1093/bioinformatics/btz259.
Online available: https://academic.oup.com/bioinformatics/article/35/21/4522/5466454

Learning from few data is difficult, and in the medical domain we often do not have “big data”. Therefore, we have developed the Augmentor package for image augmentation. It provides a stochastic, pipeline-based approach to image augmentation with a number of features that are relevant to biomedical imaging, such as z-stack augmentation and randomized elastic distortions. The software has been designed to be highly extensible, meaning that an operation specific to a highly specialized task can easily be added to the library, even at runtime. There are two versions available, one in Python and one in Julia. [Project page]
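
A minimal usage sketch of the Python version is shown below; the directory path is a placeholder, and the chosen operations and parameter values are just one plausible biomedical pipeline following the Augmentor documentation.

```python
# Minimal usage sketch of the Augmentor package (Python version).
# The path is a placeholder; operation choices and parameters are illustrative.
import Augmentor

p = Augmentor.Pipeline("path/to/histopathology/images")   # placeholder path
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.flip_left_right(probability=0.5)
p.zoom_random(probability=0.5, percentage_area=0.9)
# randomised elastic distortions, one of the biomedically relevant operations
p.random_distortion(probability=1.0, grid_width=4, grid_height=4, magnitude=8)
p.sample(1000)   # write 1000 augmented images to an output sub-directory
```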


Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data

Andreas Holzinger, Benjamin Haibe-Kains & Igor Jurisica 2019. Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data. European Journal of Nuclear Medicine and Molecular Imaging, 46, (13), 2722-2730, doi:10.1007/s00259-019-04382-9.

Integration of clinical, imaging and molecular data is necessary to understand complex diseases and to achieve accurate diagnosis so as to provide the best possible treatment. In addition to the need for sufficient computing resources, suitable algorithms, models, and data infrastructure, three important aspects are often neglected: (1) the need for multiple independent, sufficiently large and, above all, high-quality data sets; (2) the need for domain knowledge and ontologies; and (3) the requirement for multiple networks that provide relevant relationships among biological entities. While one will always get results out of high-dimensional data, all three aspects are essential to provide robust training and validation of ML models, to provide explainable hypotheses and results, and to achieve the necessary trust in AI and confidence for clinical applications. [Preprint available here]


Human Activity Recognition Using Recurrent Neural Networks

Deepika Singh, Erinc Merdivan, Ismini Psychoula, Johannes Kropf, Sten Hanke, Matthieu Geist & Andreas Holzinger 2017. Human Activity Recognition Using Recurrent Neural Networks. In: Lecture Notes in Computer Science LNCS 10410. Cham: Springer International, pp. 267-274, doi:10.1007/978-3-319-66808-6_18.

Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living; the increasingly large data sets call for machine learning methods. In this paper, we introduce a deep learning model that learns to classify human activities without using any prior knowledge. For this purpose, a Long Short-Term Memory (LSTM) recurrent neural network was applied to three real-world smart home datasets. The results of our experiments show that the proposed approach outperforms existing approaches in terms of accuracy and performance. https://arxiv.org/abs/1804.07144
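
As an illustration of the general approach (not the exact architecture or datasets of the paper), a generic LSTM classifier over sequences of smart-home sensor events might look as follows, here sketched with Keras on dummy data.

```python
# Generic LSTM classifier for sequences of smart-home sensor events.
# A minimal sketch on dummy data; sizes and layers are assumptions,
# not the exact architecture or datasets used in the paper.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

n_sensors, seq_len, n_activities = 40, 50, 10                 # assumed sizes
X = np.random.randint(0, n_sensors, size=(1000, seq_len))     # dummy event sequences
y = np.random.randint(0, n_activities, size=(1000,))          # dummy activity labels

model = Sequential([
    Embedding(input_dim=n_sensors, output_dim=32),   # learn sensor-event embeddings
    LSTM(64),                                        # sequence model
    Dense(n_activities, activation="softmax"),       # activity classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.1)
```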


Augmenting Statistical Data Dissemination by Short Quantified Sentences of Natural Language

Miroslav Hudec, Erika Bednárová & Andreas Holzinger 2018. Augmenting Statistical Data Dissemination by Short Quantified Sentences of Natural Language. Journal of Official Statistics (JOS), 34, (4), 981, doi:10.2478/jos-2018-0048. Online available: https://content.sciendo.com/view/journals/jos/34/4/article-p981.xml

In this paper we study the potential of natural language summaries expressed in short quantified sentences. Linguistic summaries are not intended to replace existing dissemination approaches, but can augment them by providing alternatives for the benefit of diverse users (e.g. domain experts, the general public, disabled people, …). The concept of linguistic summaries is demonstrated on test interfaces, which can be important for future human-AI dialogue systems. [relevant for xAI]
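
A short quantified sentence in the Zadeh/Yager style can be evaluated in a few lines of code; the membership functions and the fuzzy quantifier “most” below are illustrative assumptions, not the paper’s exact definitions.

```python
# Sketch of a short quantified sentence: "Most patients are elderly",
# with the truth value computed from fuzzy memberships (Zadeh/Yager style).
# Membership functions and the quantifier are illustrative assumptions.
def mu_high(value, low=60.0, high=80.0):
    """Fuzzy membership of 'elderly' with a linear ramp between low and high."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def mu_most(proportion):
    """Fuzzy quantifier 'most': fully true above 0.8, fully false below 0.3."""
    return max(0.0, min(1.0, (proportion - 0.3) / 0.5))

ages = [72, 81, 45, 67, 90, 58, 77, 83, 61, 70]
proportion = sum(mu_high(a) for a in ages) / len(ages)
truth = mu_most(proportion)
print(f"'Most patients are elderly' is true to degree {truth:.2f}")
```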


Computational approaches for mining user’s opinions on the Web 2.0

Gerald Petz, Michał Karpowicz, Harald Fuerschuss, Andreas Auinger, Vaclav Stritesky & Andreas Holzinger 2014. Computational approaches for mining user’s opinions on the Web 2.0. Information Processing & Management, 50, (6), 899-908, doi:10.1016/j.ipm.2014.07.005.

Computational opinion mining discovers, extracts and analyzes people’s opinions, sentiments, attitudes and emotions towards certain topics in social media. While providing interesting market research information, user-generated content presents numerous challenges for systematic analysis, owing to the differences and unique characteristics of the various social media channels. Here we report on the determination of such particularities and deduce their impact on text preprocessing and opinion mining algorithms (sentiment analysis).


Explainable AI: The New 42?

Randy Goebel, Ajay Chander, Katharina Holzinger, Freddy Lecue, Zeynep Akata, Simone Stumpf, Peter Kieseberg & Andreas Holzinger 2018. Explainable AI: the new 42? Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 295-303, doi:10.1007/978-3-319-99740-7_21.

In this 2018 output of our yearly xAI workshop at the CD-MAKE conference we discuss some issues of the current state of the art in what is now called explainable AI, and outline what we think is the next big thing in AI/machine learning: the combination of statistical probabilistic machine learning methods with classic logic-based symbolic artificial intelligence. Maybe the field of explainable AI can act as an ideal bridge to combine these two worlds. [pdf, 875 kB]


Biomedical image augmentation using Augmentor

Marcus D. Bloice, Peter M. Roth & Andreas Holzinger 2019. Biomedical image augmentation using Augmentor. Oxford Bioinformatics, 35, (21), 4522-4524, doi:10.1093/bioinformatics/btz259.

Within our Augmentor project aiming to improve model accuracy, generalisation, and to control overfitting, we developed Augmentor, a software package, available in both Python and Julia versions, that provides a high level API for the expansion of image data using a stochastic, pipeline-based approach which effectively allows for images to be sampled from a distribution of augmented images at runtime. Augmentor provides methods for most standard augmentation practices as well as several advanced features such as label-preserving, randomised elastic distortions, and provides many helper functions for typical augmentation tasks used in machine learning.  Online available: https://arxiv.org/abs/1708.04680


Interpretierbare KI: Neue Methoden zeigen Entscheidungswege künstlicher Intelligenz auf

Andreas Holzinger 2018. Interpretierbare KI: Neue Methoden zeigen Entscheidungswege künstlicher Intelligenz auf. c’t Magazin für Computertechnik, 22, 136-141. Machine learning today produces AI systems that make decisions faster than a human. But should humans allow themselves to be deprived of the final say? New methods make decision paths re-traceable and interpretable, and thus transparent, creating trust and acceptance – or uncovering misunderstandings. Humans can (sometimes, though not always) understand relationships in context and generalize from few examples. A human expert can help where AI reaches its limits, but AI can also support where humans reach theirs. Physicians can be relieved of monotonous routine tasks while, at the same time, AI systems and human experts together make better decisions than either alone. [pdf, 871 kB]. Available online: https://www.heise.de/select/ct/2018/22/1540263049336608


Explainable AI

Andreas Holzinger 2018. Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5. “Explainable AI” is not a new field. The problem of explainability is as old as AI itself, indeed it is a result of AI itself. While the rule-based solutions of early AI constituted comprehensible “glass-box” approaches, their weakness lay in dealing with the uncertainties of the real world. Through the introduction of probabilistic modelling and statistical learning methods, applications became increasingly successful – but also increasingly complex and opaque. For example, words of natural language are mapped to high-dimensional vectors, which makes them no longer understandable for humans. In the future, context-adaptive methods will become necessary that establish a link between statistical learning methods and large knowledge representations (ontologies) and allow traceability, comprehensibility and explainability – the goal of “explainable AI”. Available online: https://link.springer.com/article/10.1007/s00287-018-1102-5


What do we need to build explainable AI systems for the medical domain

Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis & Douglas B. Kell 2017. What do we need to build explainable AI systems for the medical domain? arXiv:1712.09923.

In this highly cited arXiv contribution we outline some of our own research topics (interactive machine learning, image understanding, text understanding, *omics integration) in the broader context of “explainable AI”, with a focus on applications in medicine and the health sciences. We argue that research in the context of “explainable AI” can help to facilitate the implementation of AI/ML, because of the importance of replicability, verifiability, transparency, trust and ethical responsibility. At least in the medical domain, the human will remain in control. [xAI] [Scholar]


Human Annotated Dialogues Dataset for Natural Conversational Agents

Erinc Merdivan, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger & Matthieu Geist 2020. Human Annotated Dialogues Dataset for Natural Conversational Agents. Applied Sciences, 10, (3), 1-16, doi:10.3390/app10030762. [Scholar]

We developed a benchmark dataset with human annotations and replies, useful for developing metrics for conversational agents. This is relevant for the xAI research community, because conversational agents are gaining huge popularity in industrial applications (e.g. digital assistants, chatbots, and particularly systems for natural language understanding (NLU) and medical decision support). A major drawback is the unavailability of a common metric to evaluate replies against human judgement for conversational agents. Human responses include: (i) ratings of the dialogue reply in relevance to the dialogue history; and (ii) unique dialogue replies for each dialogue history from the users. This enables evaluating models against six unique human responses for each given history. A detailed analysis of how the dialogues are structured, and of human perception of dialogue scores in comparison with existing models, is also presented.


Visualization of Histopathological Decision Making Using a Roadbook Metaphor

Birgit Pohn, Marie-Christina Mayer, Robert Reihs, Andreas Holzinger, Kurt Zatloukal & Heimo Müller. Visualization of Histopathological Decision Making Using a Roadbook Metaphor. 23rd International Conference Information Visualisation (IV), 2019 Paris, France. 392-397, doi:10.1109/IV.2019.00073. [RG]

In this paper we investigate medical decision processes and the relevance of explainability in decision making. The first step towards implementing decision paths in systems is to retrace an experienced pathologist’s diagnosis-finding process. Recording a route through a landscape composed of human tissue in terms of a roadbook is one possible approach to collect information on how diagnoses are found. The roadbook metaphor provides a simple schema that holds basic directions enriched with metadata regarding landmarks on a rally – in the context of pathology, such landmarks provide information on the decision-finding process.


Towards a Deeper Understanding of How a Pathologist Makes a Diagnosis

Birgit Pohn, Michaela Kargl, Robert Reihs, Andreas Holzinger, Kurt Zatloukal & Heimo Müller. Towards a Deeper Understanding of How a Pathologist Makes a Diagnosis: Visualization of the Diagnostic Process in Histopathology. IEEE Symposium on Computers and Communications (ISCC 2019), 2019 Barcelona. IEEE, 1081-1086, doi:10.1109/ISCC47284.2019.8969598.

Advancements in Artificial Intelligence (AI) and Machine Learning (ML) are enabling new diagnostic capabilities. In this paper we argue that the very first step before introducing AI/ML into diagnostic workflows is a deep understanding of how pathologists work. We developed a visualization concept, including: (a) the sequence of the views observed by the pathologist (Observation Path), (b) the sequence of the spoken comments and statements of the pathologist (Dictation Path), (c) the underlying knowledge and experience of the pathologist (Knowledge Path), (d) information about the current phase of the diagnostic process and (e) the current magnification factor of the microscope chosen by the pathologist.  This is highly important for explainable AI [Paper] [Scholar]


NLP for the Generation of Training Data Sets for Ontology-Guided Weakly-Supervised Machine Learning in Digital Pathology

Robert Reihs, Birgit Pohn, Kurt Zatloukal, Andreas Holzinger & Heimo Müller. NLP for the Generation of Training Data Sets for Ontology-Guided Weakly-Supervised Machine Learning in Digital Pathology. 2019 IEEE Symposium on Computers and Communications (ISCC), 2019. IEEE, 1072-1076, doi:10.1109/ISCC47284.2019.8969703.

The combination of ontologies with machine learning (ML) approaches is a hot topic that has not yet been extensively investigated but has great future potential – particularly for explainable AI and interpretable machine learning. Since full annotation at the pixel level would be impracticably expensive, a practical solution is weakly-supervised ML. In this paper we used ontology-guided natural language processing (NLP) for term extraction and a decision tree built with an expert-curated classification system. This demonstrates the practical value of our solution for analyzing and structuring training data sets for ML and as a tool for the generation of biobank catalogues. [xAI-Project] [Scholar] [RG]
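
The sketch below illustrates the general idea of ontology-guided term extraction with a weak labelling rule on a toy vocabulary; the terms, concepts and rule are made-up stand-ins, not the expert-curated classification system used in the paper.

```python
# Toy sketch of ontology-guided term extraction from free-text pathology reports:
# match report text against a small curated vocabulary and emit weak labels that
# could later train a classifier. Vocabulary and rule are illustrative stand-ins.
import re

VOCAB = {                       # term -> concept, a stand-in for an ontology
    "adenocarcinoma": "malignant",
    "carcinoma": "malignant",
    "adenoma": "benign",
    "hyperplasia": "benign",
}

def extract_terms(report):
    tokens = re.findall(r"[a-z]+", report.lower())
    return [(t, VOCAB[t]) for t in tokens if t in VOCAB]

def weak_label(report):
    """Weak supervision rule: any malignant term outweighs benign ones."""
    concepts = [c for _, c in extract_terms(report)]
    if not concepts:
        return "unknown"
    return "malignant" if "malignant" in concepts else "benign"

report = "Moderately differentiated adenocarcinoma of the colon, no adenoma."
print(extract_terms(report), "->", weak_label(report))
```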


In silico modeling for tumor growth visualization

Fleur Jeanquartier, Claire Jean-Quartier, David Cemernek & Andreas Holzinger 2016. In silico modeling for tumor growth visualization. BMC Systems Biology, 10, (1), 1-15, doi:10.1186/s12918-016-0318-8.

In-silico methods overcome the lack of wet-lab experimental possibilities and, as dry methods, succeed in terms of reduction, refinement and replacement of animal experimentation, also known as the 3R principles. Our visualization approach to simulation allows for more flexible usage and easy extension, to facilitate understanding and gain novel insight. Biomedical research in general, and research on tumor growth in particular, will benefit from the systems biology perspective. We aim to provide a comprehensive and expandable simulation tool for visualizing tumor growth. This novel Web-based application offers the advantage of a user-friendly graphical interface with several manipulable input variables to correlate different aspects of tumor growth. [Paper] [Scholar]
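
As a minimal in-silico example, a standard Gompertz growth curve can be computed as follows; this is a common textbook tumour-growth model with illustrative parameter values, not necessarily the equations used in the paper’s web tool.

```python
# A standard Gompertz tumour-growth curve as a minimal in-silico example.
# Textbook model with illustrative (assumed) parameter values.
import numpy as np

def gompertz(t, v0=0.1, carrying_capacity=10.0, growth_rate=0.3):
    """Tumour volume V(t) = K * exp(ln(V0/K) * exp(-r t))."""
    return carrying_capacity * np.exp(
        np.log(v0 / carrying_capacity) * np.exp(-growth_rate * t))

t = np.linspace(0, 30, 7)                 # days
for day, v in zip(t, gompertz(t)):
    print(f"day {day:5.1f}: volume {v:6.3f}")
```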


In silico cancer research towards 3R

Claire Jean-Quartier, Fleur Jeanquartier, Igor Jurisica & Andreas Holzinger 2018. In silico cancer research towards 3R. Springer/Nature BMC cancer, 18, (1), 408, doi:10.1186/s12885-018-4302-0

Underlining and extending the in-silico approach with respect to the 3Rs (replacement, reduction, refinement) will lead cancer research towards efficient and effective precision medicine. Therefore, we suggest refined translational models and testing methods based on integrative analyses and the incorporation of computational biology within cancer research. We give an overview of in vivo, in vitro and in silico methods used in cancer research. Common models such as cell lines, xenografts, or genetically modified rodents reflect relevant pathological processes to a different degree, but cannot replicate the full spectrum of human disease. There is an increasing importance of computational biology, advancing from the task of assisting biological analysis with network biology approaches, as the basis for understanding a cell’s functional organization, up to model building for predictive systems. [Paper] [Scholar]


From extreme programming & usability engineering to extreme usability in software engineering education (XP+UE > XU)

The success of extreme programming (XP) is based, among other things, on optimal communication in teams of 6-12 persons, simplicity, frequent releases and a rapid reaction to changing demands. Most of all, the customer is integrated into the development process, with constant feedback. This is very similar to usability engineering (UE), which follows a spiral four-phase procedure model (analysis, draft, development, test) and a three-step (paper mock-up, prototype, final product) production model. In comparison, these phases are extremely shortened in XP; also, the ideal team size in UE user-centered development is 4-6 people, including the end user. The two development approaches have different goals but, at the same time, employ similar methods to achieve them. It seems obvious that there must be synergy in combining them. The authors present ideas on how to combine them into an even more powerful development method called extreme usability (XU). The most important contribution of this paper is that the authors have embedded their ideas into software engineering education. [Scholar]


Biomedical informatics: Discovering knowledge in big data

This book provides a broad overview of biomedical informatics with a focus on data, information and knowledge. From data acquisition and storage to visualization, ranging through privacy, regulatory and other practical and theoretical topics, the author touches on several fundamental aspects of the innovative interface between the medical and technology domains that is biomedical informatics. Each chapter starts by providing a useful inventory of definitions and commonly used acronyms for each topic, and throughout the text the reader finds several real-world examples, methodologies and ideas that complement the technical and theoretical background. This new edition includes new sections at the end of each chapter, called “future outlook and research avenues”, providing pointers to future challenges. At the beginning of each chapter a new section called “key problems” has been added, where the author discusses possible traps and unsolvable or major problems. https://www.springer.com/de/book/9783319045276


Expectations of Artificial Intelligence for Pathology

Peter Regitnig, Heimo Mueller & Andreas Holzinger 2020. Expectations of Artificial Intelligence in Pathology. Springer Lecture Notes in Artificial Intelligence LNAI 12090. Cham: Springer, pp. 1-15, doi:10.1007/978-3-030-50402-1_1 [For students, pdf, 1,3 MB]

Within the last ten years, essential steps have been made to bring artificial intelligence (AI) successfully into the field of pathology. However, most medical experts are still far from using AI in daily practice. This paper focuses on tasks that could be solved, or done better, by AI or image-based algorithms compared to a human expert. In particular, it focuses on the needs and demands of surgical pathologists; examples include finding small tumour deposits within lymph nodes, detection and grading of cancer, quantification of positive tumour cells in immunohistochemistry, pre-checking of Papanicolaou-stained gynaecological cytology in cervical cancer screening, text feature extraction, text interpretation for tumour-coding error prevention, and AI in the next-generation virtual autopsy.


Legal, regulatory, ethical frameworks for standards in artificial intelligence and autonomous robotic surgery

Shane O’Sullivan, Nathalie Nevejans, Colin Allen, Andrew Blyth, Simon Leonard, Ugo Pagallo, Katharina Holzinger, Andreas Holzinger, Mohammed Imran Sajid & Hutan Ashrafian 2019. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 15, (1), 1-12, doi:10.1002/rcs.1968.

We classify responsibility into (1) Accountability, (2) Liability, and (3) Culpability. All three aspects were addressed when discussing responsibility for AI and autonomous surgical robots, whether the patients involved are civilian or military (however, these aspects may require revision in cases where robots become citizens). The component which produces the least clarity is Culpability, since it is unthinkable in the current state of technology. We envision that in the near future a surgical robot will be able to learn and perform routine operative tasks that can then be supervised by a human surgeon. This represents a surgical parallel to autonomously driven vehicles. Here a human remains in the ‘driving seat’ as a ‘doctor-in-the-loop’, thereby safeguarding patients undergoing operations that are supported by surgical machines with autonomous capabilities.


Analysis of biomedical data with multilevel glyphs

Heimo Müller, Robert Reihs, Kurt Zatloukal & Andreas Holzinger 2014. Analysis of biomedical data with multilevel glyphs. BMC Bioinformatics, 15, (Suppl 6), S5, doi:10.1186/1471-2105-15-S6-S5.

We present multilevel data glyphs optimized for interactive knowledge discovery and visualization of large biomedical data sets. Data glyphs are 3D objects defined by multiple levels of geometric descriptions (levels of detail), combined with a mapping of data attributes to graphical elements and methods that specify their spatial position. In the data-mapping phase, meta-information about the attributes (scale, number of distinct values) is compared with the visual capabilities of the graphical elements in order to give the user feedback about the correctness of the variable mapping. The spatial arrangement of glyphs is done in a dimetric view, which leads to high data density and simplified 3D navigation, and avoids perspective distortion. We show the usage of data glyphs in the disease analyser for personalized medicine, where they have been successfully applied. Especially the automatic validation of the data mapping, the selection of subgroups within histograms, and the visual comparison of the value distributions were seen by experts as important functionality.


From Machine Learning to Explainable AI (reading for students)

Andreas Holzinger 2018. From Machine Learning to Explainable AI. 2018 World Symposium on Digital Intelligence for Systems and Machines (IEEE DISA). IEEE, pp. 55-66, doi:10.1109/DISA.2018.8490530.

The success of statistical machine learning (ML) methods has made the field of Artificial Intelligence (AI) popular again, after the last AI winter. Meanwhile, deep learning approaches even exceed human performance in particular tasks. However, such approaches have some disadvantages besides needing big quality data, much computational power and engineering effort: they are becoming increasingly opaque, and even if we understand the underlying mathematical principles of such models, they still lack explicit declarative knowledge. For example, words are mapped to high-dimensional vectors, making them unintelligible to humans. What we need in the future are context-adaptive procedures, i.e. systems that construct contextual explanatory models for classes of real-world phenomena. This is the goal of explainable AI, which is not a new field; rather, the problem of explainability is as old as AI itself. While rule-based approaches of early AI were comprehensible “glass-box” approaches, at least in narrow domains, their weakness was in dealing with the uncertainties of the real world. Maybe one step further is to link probabilistic learning methods with large knowledge representations (ontologies) and logical approaches, thus making results re-traceable, explainable and comprehensible on demand. [For my students]


On Graph Extraction from Image Data

Andreas Holzinger, Bernd Malle & Nicola Giuliani 2014. On Graph Extraction from Image Data. In: Slezak, Dominik, Peters, James F., Tan, Ah-Hwee & Schwabe, Lars (eds.) Brain Informatics and Health, BIH 2014, Lecture Notes in Artificial Intelligence, LNAI 8609. Heidelberg, Berlin: Springer, pp. 552-563, doi:10.1007/978-3-319-09891-3_50

A hot topic in AI/machine learning is to learn from graphs, particularly as graphs are a data structure which fosters explainability/causability. Any such approach first needs a relevant and robust representation derived from the image data. In this paper we present a novel approach for knowledge discovery by extracting graph structures from natural image data. For this purpose, we created a framework built upon modern Web technologies, utilizing the HTML canvas and pure JavaScript inside a Web browser, which is a very promising engineering approach. This was the basis for our Graphinius project. [Paper]


The European Legal Framework for Medical AI

Schneeberger, D., Stöger, K. & Holzinger, A. The European Legal Framework for Medical AI. In: Springer Lecture Notes in Computer Science LNCS 12279, (2020) Cham. Springer International, doi:10.1007/978-3-030-57321-8_12.

In late February 2020, the European Commission published a White Paper on Artificial Intelligence (AI) and an accompanying report on the safety and liability implications of AI, the Internet of Things (IoT) and robotics. In its White Paper, the Commission highlighted the “European Approach” to AI, stressing that “it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection”. It also announced its intention to propose EU legislation for “high risk” AI applications in the nearer future, which will include the majority of medical AI applications. Based on this “European Approach” to AI, this paper analyses the current European framework regulating medical AI. Starting with the fundamental rights framework as clear guidelines, a more in-depth look is subsequently taken at specific areas of law, focusing on data protection, product approval procedures and liability law. This analysis of the current state of the law, including its problems and ambiguities regarding AI, is complemented by an outlook on the proposed amendments to product approval procedures and liability law, which, by endorsing a human-centric approach, will fundamentally influence how medical AI and AI in general will be used in Europe in the future. [paper]


Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI

Andreas Holzinger, Peter Kieseberg, Edgar Weippl & A Min Tjoa (2018). Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. Springer Lecture Notes in Computer Science LNCS 11015. Cham: Springer, pp. 1-8, doi:10.1007/978-3-319-99740-7_1

In this short editorial we present some thoughts on present and future trends in Artificial Intelligence (AI) generally, and Machine Learning (ML) specifically. Due to the huge ongoing success in machine learning, particularly in statistical learning from big data, there is rising interest of academia, industry and the public in this field. Industry is investing heavily in AI, and spin-offs and start-ups are emerging at an unprecedented rate. The European Union is allocating a lot of additional funding to AI research grants, and various institutions are calling for a joint European AI research institute. Universities are taking AI/ML into their curricula and strategic plans. Finally, even the people on the street talk about it, and if grandma knows what her grandson is doing in his new start-up, then the time is ripe: we are reaching a new AI spring. However, as fantastic as current approaches seem to be, there are still huge problems to be solved: the best performing models lack transparency and hence are considered to be black boxes. The general and worldwide trends in privacy, data protection, safety and security make such black-box solutions difficult to use in practice. This holds specifically in Europe, where the new General Data Protection Regulation (GDPR) came into effect on May 25, 2018, affecting everybody (right to explanation). Consequently, explainable AI, a niche field for many years, is exploding in importance. For the future, we envision a fruitful marriage between classic logical approaches (ontologies) and statistical approaches, which may lead to context-adaptive systems (stochastic ontologies) that might work similarly to the human brain.