Posts

Causability and Explainability of Artificial Intelligence in Medicine – awarded highly cited paper

Awesome: Our paper "Causability and explainability of artificial intelligence in medicine" has received the Highly Cited Paper award. This means the paper is in the top 1 % of the academic field of computer science. Thanks to the community for this acceptance and appraisal of our work on causability and explainability – a cornerstone of robust, trustworthy AI.

The journal itself is Q1 in two fields: Computer Science, Artificial Intelligence (rank 27/137) and Computer Science, Theory and Methods (rank 12/108), based on the 2019 edition of the Journal Citation Reports.

On Google Scholar the paper has received 234 citations as of March 30, 2021.

https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper "Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI", Information Fusion, 71, (7), 28–37, doi:10.1016/j.inffus.2021.01.008,

has just been published. Our central hypothesis is that using conceptual knowledge as a guiding model of reality will help to train more explainable, more robust and less biased machine learning models, ideally able to learn from less data. One important aspect in the medical domain is that various modalities contribute to one single result. Our main question is: "How can we construct a multi-modal feature representation space (spanning images, text, genomics data) using knowledge bases as an initial connector for the development of novel explanation interface techniques?" In this paper we argue for Graph Neural Networks as a method of choice, enabling information fusion for multi-modal causability (causability – not to be confused with causality – is the measurable extent to which an explanation achieves a specified level of causal understanding in a human expert). We hope that this is a useful contribution to the international scientific community.
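To make the fusion idea concrete, here is a minimal sketch of one message-passing round over a knowledge-base graph that links nodes from different modalities. This is an illustration, not the paper's architecture: the mean aggregation, the node features, and the edges are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-modal graph: nodes 0-1 carry image-derived embeddings,
# nodes 2-3 carry text-derived embeddings (all values hypothetical).
features = {
    0: rng.normal(size=4),  # image modality
    1: rng.normal(size=4),  # image modality
    2: rng.normal(size=4),  # text modality
    3: rng.normal(size=4),  # text modality
}
# Undirected edges, standing in for knowledge-base links between entities.
edges = [(0, 2), (1, 2), (2, 3)]

def neighbours(n):
    """All nodes sharing an edge with n."""
    return [v for u, v in edges if u == n] + [u for u, v in edges if v == n]

def message_pass(feats):
    """One round of mean-aggregation message passing: each node's new
    representation is the average of its own and its neighbours' features."""
    return {
        n: np.mean([feats[n]] + [feats[m] for m in neighbours(n)], axis=0)
        for n in feats
    }

fused = message_pass(features)
# After one round, node 2 already mixes image and text information,
# because its neighbourhood spans both modalities.
```

Stacking several such rounds (with learned transformations instead of a plain mean) is what lets a GNN build a joint representation space across modalities.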

The Q1 journal Information Fusion is ranked No. 2 out of 138 in the field of Computer Science, Artificial Intelligence, with an SCI Impact Factor of 13.669, see: https://www.sciencedirect.com/journal/information-fusion

Call for Papers Journal Special Issue on Advances in Explainable and Responsible Artificial Intelligence

From explainable AI to responsible AI

Papers due June 20, 2020: ICML Workshop on Interpretable Machine Learning "Beyond Explainable AI"


Measuring the Quality of Explanations: We released the System Causability Scale (SCS)

In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations.
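A questionnaire-based scale like this reduces to a simple normalized score. The sketch below is a hypothetical scoring function, assuming ten items rated on a 5-point Likert scale with the total normalized to [0, 1]; consult the SCS paper for the exact scoring procedure.

```python
def scs_score(ratings, max_rating=5):
    """Hypothetical scoring: normalise the sum of ten Likert-item
    ratings to [0, 1]; higher means the explanation supported a
    better causal understanding."""
    assert len(ratings) == 10, "the SCS has ten items"
    assert all(1 <= r <= max_rating for r in ratings), "ratings out of range"
    return sum(ratings) / (len(ratings) * max_rating)

# Example: a mostly positive questionnaire response.
score = scs_score([5, 4, 5, 4, 4, 5, 3, 4, 5, 4])  # -> 0.86
```

Analogous to the System Usability Scale, a single number makes explanation quality comparable across systems and studies.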

Ten Commandments for Human-AI interaction – Which are the most important?

Please take part in our survey on the Ten Commandments for Human-AI Interaction, covering ethical, legal and social issues of artificial intelligence.

Austrian Science Fund FWF Project on Explainable-AI granted

The Austrian Science Fund (FWF) has just granted me a basic research project on explainability.

Explainable AI – from local explanations to global understanding

A new publication by the Lee Lab on explanations for ensemble tree-based predictions, which yields an exact solution that guarantees desirable explanation properties and scales from local explanations to global understanding.
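The quantities at the heart of this line of work are Shapley values: per-feature contributions to a single prediction. The sketch below is not the Lee Lab's polynomial-time tree algorithm but a brute-force computation of the same exact values on a toy model, with absent features replaced by baseline values (one common convention); the model and inputs are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline, n_features):
    """Exact Shapley values for one prediction by enumerating all
    feature coalitions (exponential cost, fine for tiny examples)."""
    def value(subset):
        # Features outside the coalition are set to their baseline value.
        z = [x[i] if i in subset else baseline[i] for i in range(n_features)]
        return predict(z)

    phi = [0.0] * n_features
    feats = range(n_features)
    for i in feats:
        for r in range(n_features):
            for s in combinations([j for j in feats if j != i], r):
                # Standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                w = factorial(len(s)) * factorial(n_features - len(s) - 1) \
                    / factorial(n_features)
                phi[i] += w * (value(set(s) | {i}) - value(set(s)))
    return phi

# Toy additive model: contributions should recover the coefficients.
model = lambda z: 2 * z[0] + 3 * z[1]
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0], n_features=2)
# phi is approximately [2.0, 3.0]
```

Averaging the absolute values of such local attributions over a dataset is one way to turn many local explanations into the global feature importances the paper's title alludes to.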