Digital Transformation in Smart Farm and Forest Operations

 

Andreas Holzinger has been appointed full professor for Digital Transformation in Smart Farm and Forest Operations (pursuant to §98 UG 2002) at the University of Natural Resources and Life Sciences Vienna (BOKU) and started his endowed chair position with effect from 1 March 2022. He is currently building a new Human-Centered AI Lab at the BOKU Campus Tulln an der Donau in Lower Austria. The generous support of the Government of Lower Austria is gratefully acknowledged.

https://boku.ac.at/fm/themen/orientierung-und-lageplaene/standort-tulln/birt-newsletter/2022/ausgabe-3-22/mitarbeiter-update

 

AI for Good: Explainability and Robustness for Trustworthy AI (ITU Event)

AI for Good Discovery. Trustworthy AI: Explainability and Robustness for Trustworthy AI, ITU Event, Geneva, CH [youtube]

Talk recorded live on 16 December 2021, 15:00–16:00; see https://www.youtube.com/watch?v=NCajz8h13uU

Today, thanks to advances in statistical machine learning, AI is once again enormously popular. However, two features need to be further improved in the future: a) robustness and b) explainability/interpretability/re-traceability, i.e. explaining why a certain result has been achieved. Disturbances in the input data can have a dramatic impact on the output and lead to completely different results. This is relevant in all critical areas where we suffer from poor data quality, i.e. where we do not have i.i.d. data. Therefore, the use of AI in real-world areas that impact human life (agriculture, climate, forestry, health, …) has led to an increased demand for trustworthy AI. In sensitive areas where re-traceability, transparency, and interpretability are required, explainable AI (XAI) is now even mandatory due to legal requirements. One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can sometimes (of course not always) bring experience and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.

Speaker: Andreas Holzinger, Head of the Human-Centered AI Lab, Institute for Medical Informatics/Statistics, Medizinische Universität Graz
Moderator: Wojciech Samek, Head of the Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute
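
To make the robustness concern from the abstract concrete, here is a minimal toy sketch (an illustration added for this post, not material from the talk, assuming NumPy and scikit-learn): a linear classifier trained on synthetic data changes its prediction for a point near the decision boundary when the input is disturbed by a tiny, targeted amount.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy illustration only: a small disturbance of the input flips the predicted class.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)               # ground truth: sign of the feature sum
    clf = LogisticRegression().fit(X, y)

    x = np.array([[0.05, 0.05]])                           # a point close to the decision boundary
    step = -0.1 * clf.coef_ / np.linalg.norm(clf.coef_)    # tiny step against the weight vector
    print(clf.predict(x), clf.predict(x + step))           # typically [1] [0]: the label flips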


What is AI for Good? The AI for Good series is the leading action-oriented, global and inclusive United Nations platform on AI. Website: https://aiforgood.itu.int/, newsroom: https://aiforgood.itu.int/newsroom/, Twitter: https://twitter.com/ITU_AIForGood, LinkedIn group: https://www.linkedin.com/groups/8567748, Instagram: https://www.instagram.com/aiforgood, Facebook: https://www.facebook.com/AIforGood

The Summit is organized all year, always online, in Geneva by the ITU with XPRIZE Foundation in partnership with over 35 sister United Nations agencies, Switzerland and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact. Disclaimer: The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.

 

 

Causability and Explainability of Artificial Intelligence in Medicine – awarded highly cited paper

Awesome: our paper on causability and explainability of artificial intelligence in medicine was awarded the Highly Cited Paper award. This means that the paper is in the top 1% of the academic field of computer science. We thank the community for this acceptance and appraisal of our work on causability and explainability, a cornerstone for robust, trustworthy AI.

The journal itself is Q1 in two fields, Computer Science, Artificial Intelligence (rank 27/137) and Computer Science, Theory and Methods (rank 12/108), based on the 2019 edition of the Journal Citation Reports.

On Google Scholar, the paper has received 234 citations as of 30 March 2021.

https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312

Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions

Our paper “Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions” has just been published in Knowledge-Based Systems, doi:10.1016/j.knosys.2021.106916.

We propose a novel classification approach based on aggregation functions of mixed behavior, obtained through variability in ordinal sums of conjunctive and disjunctive functions. Consequently, domain experts are empowered to assign only the most important observations regarding the considered attributes. This has the advantage that the variability of the functions provides opportunities for machine learning to learn the best possible option from the data. Moreover, such a solution is comprehensible, reproducible and explainable per design to domain experts. In this paper, we discuss the proposed approach with examples and outline the research steps in interactive machine learning with a human-in-the-loop over aggregation functions. Although human experts are not always able to explain everything either, they are sometimes able to bring in experience, contextual understanding and implicit knowledge, which is desirable in certain machine learning tasks and can contribute to the robustness of algorithms. The obtained theoretical results on ordinal sums are discussed and illustrated with examples.
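
As a rough illustration of the general idea (a simplified sketch under our own assumptions, not the exact construction from the paper): an ordinal-sum-style aggregation can act conjunctively on one subinterval of [0, 1] and disjunctively on another, with the breakpoint b treated as a hypothetical parameter that could be learned from data.

    def ordinal_sum_aggregate(x, y, b=0.5):
        """Sketch of a mixed conjunctive/disjunctive aggregation on [0, 1].

        Assumes 0 < b < 1. Below b, both inputs are combined conjunctively
        (rescaled product t-norm); above b, disjunctively (rescaled
        probabilistic sum); the mixed case falls back to a plain average,
        which is a modelling choice for this sketch only.
        """
        if x <= b and y <= b:
            return b * (x / b) * (y / b)                   # conjunctive regime
        if x >= b and y >= b:
            u, v = (x - b) / (1 - b), (y - b) / (1 - b)
            return b + (1 - b) * (u + v - u * v)           # disjunctive regime
        return 0.5 * (x + y)                               # mixed regime (fallback)

    print(ordinal_sum_aggregate(0.2, 0.3))  # low scores reinforce a low (conjunctive) result
    print(ordinal_sum_aggregate(0.7, 0.9))  # high scores reinforce a high (disjunctive) result

In an interactive machine learning setting, a domain expert could constrain which regime applies to which attributes, while parameters such as the breakpoint are fitted from the data.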

The Q1 journal Knowledge-Based Systems is ranked 15th out of 138 in the field of Computer Science, Artificial Intelligence, with an SCI impact factor of 5.921; see https://www.journals.elsevier.com/knowledge-based-systems

Miroslav Hudec, Erika Minarikova, Radko Mesiar, Anna Saranti & Andreas Holzinger (2021). Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions. Knowledge-Based Systems, doi:10.1016/j.knosys.2021.106916.

 

Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

Our paper on Multi-Modal Causability with Graph Neural Networks enabling Information Fusion for explainable AI, Information Fusion, 71, (7), 28-37, doi:10.1016/j.inffus.2021.01.008, has just been published.

Our central hypothesis is that using conceptual knowledge as a guiding model of reality will help to train more explainable, more robust and less biased machine learning models, ideally able to learn from fewer data. One important aspect in the medical domain is that various modalities contribute to one single result. Our main question is: “How can we construct a multi-modal feature representation space (spanning images, text and genomics data), using knowledge bases as an initial connector, for the development of novel explanation interface techniques?” In this paper we argue for using Graph Neural Networks as a method of choice, enabling information fusion for multi-modal causability (causability, not to be confused with causality, is the measurable extent to which an explanation to a human expert achieves a specified level of causal understanding). We hope that this is a useful contribution to the international scientific community.
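
A minimal, hypothetical sketch of this idea (our illustration, not the architecture from the paper, assuming PyTorch and PyTorch Geometric are available): project image, text and genomics features into a shared space, treat the per-modality embeddings as nodes of a small graph whose edges could come from a knowledge base, and fuse them with a graph convolution.

    import torch
    import torch.nn as nn
    from torch_geometric.nn import GCNConv  # assumes PyTorch Geometric is installed

    class MultiModalGNNFusion(nn.Module):
        """Hypothetical fusion sketch: one node per modality, knowledge-base edges."""
        def __init__(self, dims=(512, 768, 128), hidden=64, n_classes=2):
            super().__init__()
            self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])  # per-modality projection
            self.gnn = GCNConv(hidden, hidden)                               # fusion over the graph
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, modality_feats, edge_index):
            # modality_feats: [image, text, genomics], each of shape (1, dim_m)
            x = torch.cat([p(f) for p, f in zip(self.proj, modality_feats)], dim=0)
            x = torch.relu(self.gnn(x, edge_index))            # knowledge-guided message passing
            return self.head(x.mean(dim=0, keepdim=True))      # graph-level prediction

    # Toy usage: three modality nodes, fully connected by (assumed) knowledge-base edges.
    feats = [torch.randn(1, 512), torch.randn(1, 768), torch.randn(1, 128)]
    edge_index = torch.tensor([[0, 0, 1, 1, 2, 2], [1, 2, 0, 2, 0, 1]])
    print(MultiModalGNNFusion()(feats, edge_index).shape)      # torch.Size([1, 2])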

The Q1 journal Information Fusion is ranked 2nd out of 138 in the field of Computer Science, Artificial Intelligence, with an SCI impact factor of 13.669; see https://www.sciencedirect.com/journal/information-fusion

Call for Papers: Journal Special Issue on Advances in Explainable and Responsible Artificial Intelligence

From explainable AI to responsible AI

Measuring the Quality of Explanations: We released the System Causability Scale (SCS)

In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations.
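
As a rough illustration of what such a measurement can look like (our simplified stand-in; the published SCS defines its own ten items and scoring, so treat this only as an assumption-laden sketch): ten Likert-type ratings of an explanation are turned into a single normalized quality-of-explanation score.

    def scs_style_score(ratings, max_rating=5):
        """Normalized score for ten Likert-type items (illustrative, not the official SCS scoring)."""
        assert len(ratings) == 10 and all(1 <= r <= max_rating for r in ratings)
        return sum(ratings) / (len(ratings) * max_rating)   # value in (0, 1]

    print(scs_style_score([5, 4, 4, 5, 3, 4, 5, 4, 4, 5]))  # 0.86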

Ten Commandments for Human-AI interaction – Which are the most important?

Please take part in the survey on the Ten Commandments for Human-AI Interaction, addressing ethical, legal and social issues of artificial intelligence.

Austrian Science Fund FWF Project on Explainable-AI granted

I have just been granted a basic research project on explainability by the Austrian Science Fund (FWF).