The Next Frontier: Artificial Intelligence we can really trust!

In this keynote paper from ECML 2021, I begin my talk by reviewing the tremendous advances in the field of statistical machine learning, the availability of large amounts of training data, and the increasing computational power that have ultimately made artificial intelligence (AI) (again) very successful. For certain tasks, algorithms can even achieve performance beyond human levels. Unfortunately, the most powerful methods suffer from both difficulty in explaining why a particular result was obtained and a lack of robustness. Our most powerful machine learning models are very sensitive to even small changes. Perturbations in the input data can have a dramatic impact on the output, leading to completely different results. This is of great importance in virtually all critical domains where we suffer from poor data quality, i.e., we do not have the i.i.d. data we expect. The use of AI in domains that impact human life (agriculture, climate, health, …) has therefore led to an increased need for trustworthy AI. In sensitive domains such as medicine, where traceability, transparency and interpretability are required, explicability is now even mandatory due to regulatory requirements. One possible step to make AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can – sometimes, of course, not always – bring experience, domain knowledge, and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective, but in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
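
To make the point about sensitivity to small perturbations concrete, here is a tiny, purely illustrative sketch: a toy logistic-regression classifier (weights and input invented for this example, not taken from the paper) whose prediction flips after an FGSM-style perturbation of only 0.15 per feature.

import numpy as np

# Toy logistic-regression "model"; weights and bias are invented for illustration.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    # probability of class 1 under the toy linear model
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.4, 0.3, 0.2])
print(predict_proba(x))          # ~0.55 -> classified as class 1

# FGSM-style perturbation: a small step against the gradient of the class-1 logit,
# which for a linear model is simply the weight vector w.
eps = 0.15
x_adv = x - eps * np.sign(w)
print(predict_proba(x_adv))      # ~0.40 -> the predicted class flips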

See the paper here:
https://www.researchgate.net/publication/358693275_The_Next_Frontier_AI_We_Can_Really_Trust

Reference (Harvard JMLR style):

Andreas Holzinger (2021). The Next Frontier: AI We Can Really Trust. In: Kamp, Michael (ed.) Proceedings of the ECML PKDD 2021, CCIS 1524. Cham: Springer Nature, pp. 1–14, doi:10.1007/978-3-030-93736-2_33

Reference (IEEE style):

[1] A. Holzinger, “The Next Frontier: AI We Can Really Trust,” in Proceedings of the ECML PKDD 2021, CCIS 1524, M. Kamp, Ed. Cham: Springer Nature, 2021, pp. 1–14, doi: 10.1007/978-3-030-93736-2_33.

Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI

Our work on Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI has just been published and can be found here:

https://www.researchgate.net/publication/353403620_Digital_Transformation_for_Sustainable_Development_Goals_SDGs_-_A_Security_Safety_and_Privacy_Perspective_on_AI

Thanks to my co-authors!

The main driver of the digital transformation currently underway is undoubtedly artificial intelligence (AI). The potential of AI to benefit humanity and its environment is undeniably enormous. AI can definitely help find new solutions to the most pressing challenges facing our human society in virtually all areas of life: from agriculture and forest ecosystems that affect our entire planet, to the health of every single human being. However, this article highlights a very different aspect. For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new kinds of unforeseen threats. Therefore, all stakeholders, governments, policy makers, and industry, together with academia, must ensure that AI is developed with these potential threats in mind and that the safety, traceability, transparency, explainability, validity, and verifiability of AI applications in our everyday lives are ensured. It is the responsibility of all stakeholders to ensure the use of trustworthy and ethically reliable AI and to avoid the misuse of AI technologies. Achieving this will require a concerted effort to ensure that AI is always consistent with human values and includes a future that is safe in every way for all people on this planet. In this paper, we describe some of these threats and show that safety, security and explainability are indispensable cross-cutting issues and highlight this with two exemplary selected application areas: smart agriculture and smart health.

Reference to the paper:

Andreas Holzinger, Edgar Weippl, A Min Tjoa & Peter Kieseberg (2021). Digital Transformation for Sustainable Development Goals (SDGs) – a Security, Safety and Privacy Perspective on AI. Springer Lecture Notes in Computer Science, LNCS 12844. Cham: Springer, pp. 1–20, doi:10.1007/978-3-030-84060-0_1.

BibTeX:

@incollection{HolzingerWeipplTjoaKiese:2021:SustainableSecurity,
year = {2021},
author = {Holzinger, Andreas and Weippl, Edgar and Tjoa, A Min and Kieseberg, Peter},
title = {Digital Transformation for Sustainable Development Goals (SDGs) – a Security, Safety and Privacy Perspective on AI},
booktitle = {Springer Lecture Notes in Computer Science, LNCS 12844},
publisher = {Springer},
address = {Cham},
pages = {1-20},
abstract = {The main driver of the digital transformation currently underway is undoubtedly artificial intelligence (AI). The potential of AI to benefit humanity and its environment is undeniably enormous. AI can definitely help find new solutions to the most pressing challenges facing our human society in virtually all areas of life: from agriculture and forest ecosystems that affect our entire planet, to the health of every single human being. However, this article highlights a very different aspect. For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new kinds of unforeseen threats. Therefore, all stakeholders, governments, policy makers, and industry, together with academia, must ensure that AI is developed with these potential threats in mind and that the safety, traceability, transparency, explainability, validity, and verifiability of AI applications in our everyday lives are ensured. It is the responsibility of all stakeholders to ensure the use of trustworthy and ethically reliable AI and to avoid the misuse of AI technologies. Achieving this will require a concerted effort to ensure that AI is always consistent with human values and includes a future that is safe in every way for all people on this planet. In this paper, we describe some of these threats and show that safety, security and explainability are indispensable cross-cutting issues and highlight this with two exemplary selected application areas: smart agriculture and smart health.},
doi = {10.1007/978-3-030-84060-0_1}
}

Causability and Explainability of Artificial Intelligence in Medicine – awarded highly cited paper

Awesome: our paper Causability and Explainability of Artificial Intelligence in Medicine has received the Highly Cited Paper award. This means that the paper is among the top 1 % in the academic field of computer science. Thanks to the community for this acceptance and appraisal of our work on causability and explainability – a cornerstone for robust, trustworthy AI.

The journal itself is Q1 in two fields, Computer Science, Artificial Intelligence (rank 27/137) and Computer Science, Theory and Methods (rank 12/108), based on the 2019 edition of the Journal Citation Reports.

On Google Scholar it has received 234 citations as of March 30, 2021.

https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312

Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions

Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions has just been published in Knowledge-Based Systems, doi:10.1016/j.knosys.2021.106916.

We propose a novel classification according to aggregation functions of mixed behavior by variability in ordinal sums of conjunctive and disjunctive functions. Consequently, domain experts are empowered to assign only the most important observations regarding the considered attributes. This has the advantage that the variability of the functions provides opportunities for machine learning to learn the best possible option from the data. Moreover, such a solution is comprehensible, reproducible and explainable-per-design to domain experts. In this paper, we discuss the proposed approach with examples and outline the research steps in interactive machine learning with a human-in-the-loop over aggregation functions. Although human experts are not always able to explain their decisions either, they are sometimes able to bring in experience, contextual understanding and implicit knowledge, which is desirable in certain machine learning tasks and can contribute to the robustness of algorithms. The obtained theoretical results on ordinal sums are discussed and illustrated with examples.
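
To give a flavour of what such a mixed aggregation function looks like, here is a minimal sketch of one possible construction: a uninorm-like ordinal combination that acts conjunctively (rescaled product t-norm) below a threshold a, disjunctively (rescaled probabilistic sum) above it, and takes the minimum on mixed inputs. The threshold and the chosen t-norm/t-conorm are assumptions for illustration only; this is not the exact construction from the paper.

def mixed_aggregation(x, y, a=0.5):
    # Conjunctive part: rescaled product t-norm on [0, a]^2,
    # so two low scores reinforce each other downwards.
    if x <= a and y <= a:
        return a * (x / a) * (y / a) if a > 0 else 0.0
    # Disjunctive part: rescaled probabilistic sum on [a, 1]^2,
    # so two high scores reinforce each other upwards.
    if x >= a and y >= a:
        u, v = (x - a) / (1 - a), (y - a) / (1 - a)
        return a + (1 - a) * (u + v - u * v)
    # Mixed inputs: fall back to the minimum (one common choice for uninorms);
    # the threshold a then acts as a neutral element.
    return min(x, y)

print(mixed_aggregation(0.3, 0.4))   # 0.24 – below both inputs (conjunctive)
print(mixed_aggregation(0.7, 0.8))   # 0.88 – above both inputs (disjunctive)
print(mixed_aggregation(0.3, 0.8))   # 0.30 – mixed region, minimum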

The Q1 journal Knowledge-Based Systems is ranked No. 15 of 138 in the field of Computer Science, Artificial Intelligence, with an SCI Impact Factor of 5.921, see: https://www.journals.elsevier.com/knowledge-based-systems

Miroslav Hudec, Erika Minarikova, Radko Mesiar, Anna Saranti & Andreas Holzinger (2021). Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions. Knowledge-Based Systems, doi:10.1016/j.knosys.2021.106916.

 

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper on Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI (Information Fusion, 71 (7), 28–37, doi:10.1016/j.inffus.2021.01.008) has just been published. Our central hypothesis is that using conceptual knowledge as a guiding model of reality will help to train more explainable, more robust and less biased machine learning models, ideally able to learn from fewer data. One important aspect in the medical domain is that various modalities contribute to one single result. Our main question is: “How can we construct a multi-modal feature representation space (spanning images, text, genomics data) using knowledge bases as an initial connector for the development of novel explanation interface techniques?” In this paper we argue for using Graph Neural Networks as a method of choice, enabling information fusion for multi-modal causability (causability – not to be confused with causality – is the measurable extent to which an explanation to a human expert achieves a specified level of causal understanding). We hope that this is a useful contribution to the international scientific community.
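
As a rough, purely illustrative sketch of the underlying idea (not the architecture from the paper), the following lines fuse node features from two modalities over a shared knowledge-graph adjacency with a single hand-written message-passing step in plain NumPy; the graph, the feature dimensions and the weight matrices are hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical knowledge graph over 4 concepts (adjacency with self-loops).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)      # row-normalised adjacency

# Per-concept features from two modalities (e.g. image- and text-derived).
X_img = rng.normal(size=(4, 8))
X_txt = rng.normal(size=(4, 6))

# Project both modalities into a shared 5-dimensional space and concatenate.
W_img = rng.normal(size=(8, 5))
W_txt = rng.normal(size=(6, 5))
H0 = np.concatenate([X_img @ W_img, X_txt @ W_txt], axis=1)   # shape (4, 10)

# One graph-convolution-style message-passing step: each concept aggregates
# the fused representations of its neighbours along the knowledge-graph edges.
W = rng.normal(size=(10, 10))
H1 = np.maximum(A_hat @ H0 @ W, 0.0)          # ReLU(A_hat · H0 · W)

print(H1.shape)                               # (4, 10) fused node embeddings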

The Q1 journal Information Fusion is ranked No. 2 of 138 in the field of Computer Science, Artificial Intelligence, with an SCI Impact Factor of 13.669, see: https://www.sciencedirect.com/journal/information-fusion

Explainable AI – from local explanations to global understanding

A new publication by the Lee Lab on explanations for ensemble tree-based predictions yields an exact solution that guarantees desirable explanation properties for global understanding.
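
The method in question computes exact Shapley-value attributions for tree ensembles (TreeSHAP), available in the open-source shap package. Below is a minimal usage sketch; the random-forest model and the synthetic data are placeholders invented for this example, and the shap and scikit-learn packages are assumed to be installed.

# pip install shap scikit-learn   (assumed to be available)
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: 200 samples, 5 features, a noisy linear target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # shape (200, 5): local explanations

# Local explanation: per-feature contributions for a single prediction.
print(shap_values[0])

# Global understanding: aggregate the magnitudes over the whole dataset.
print(np.abs(shap_values).mean(axis=0))       # mean |SHAP| per feature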