Andreas Holzinger

Archive for category: AI News

Graph Neural Networks with the Human-in-the-Loop

May 2, 2025/in AI News, Blog, Interesting Publication, Science News/by Andreas Holzinger

In our Nature Scientific Reports paper we introduce a novel framework that integrates federated learning with Graph Neural Networks (GNNs) to classify diseases, incorporating Human-in-the-Loop methodologies. The framework employs collaborative voting mechanisms on subgraphs within a Protein-Protein Interaction (PPI) network, situated in a federated ensemble-based deep learning context. This approach marks a significant stride in the development of explainable and privacy-aware Artificial Intelligence, contributing to the progression of personalized digital medicine in a responsible and transparent manner. Read the article here https://doi.org/10.1038/s41598-024-72748-7 and get an overview by listening to this podcast:

https://www.aholzinger.at/wordpress/wp-content/uploads/2025/05/Decoding-Disease_-How-AI-Human-Expertise-and-Privacy-Are-Rewriting-the-Rules-of-Medicine.wav
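
For readers who want a concrete feel for the collaborative voting idea, here is a minimal, illustrative Python sketch (not the paper's exact pipeline): each simulated client holds a private subgraph, smooths its node features with one graph-convolution step (SGC-style), trains a local classifier, and the ensemble classifies nodes of a new subgraph by majority vote. All data, sizes, and names (e.g. make_client) are synthetic stand-ins.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_FEATS = 8
W_TRUE = rng.normal(size=N_FEATS)  # shared synthetic ground-truth signal

def normalized_adjacency(A):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def make_client(n_nodes=60):
    # Synthetic private subgraph: random undirected edges, features, labels.
    A = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T
    X = rng.normal(size=(n_nodes, N_FEATS))
    y = (X @ W_TRUE > 0).astype(int)
    return A, X, y

# Local training: raw node data never leaves a client (federated setting).
models = []
for _ in range(5):
    A, X, y = make_client()
    X_smooth = normalized_adjacency(A) @ X  # one-hop feature propagation
    models.append(LogisticRegression(max_iter=1000).fit(X_smooth, y))

# Inference on an unseen subgraph: collaborative majority voting of the ensemble.
A_new, X_new, y_new = make_client()
X_new_smooth = normalized_adjacency(A_new) @ X_new
votes = np.stack([m.predict(X_new_smooth) for m in models])  # (clients, nodes)
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (y_hat == y_new).mean())

In the real framework a human expert additionally inspects and corrects the voting on selected subgraphs; the sketch only shows the federated-ensemble skeleton.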

The Next Frontier: Artificial Intelligence we can really trust!

April 12, 2022/in AI News, Interesting Publication/by Andreas Holzinger

In this keynote paper from ECML 2021, I begin with the tremendous advances in the field of statistical machine learning, the availability of large amounts of training data, and the increasing computational power that have ultimately made artificial intelligence (AI) (again) very successful. For certain tasks, algorithms can even achieve performance beyond human levels. Unfortunately, the most powerful methods suffer both from difficulty in explaining why a particular result was obtained and from a lack of robustness. Our most powerful machine learning models are very sensitive to even small changes: perturbations in the input data can have a dramatic impact on the output, leading to completely different results. This is of great importance in virtually all critical domains where we suffer from poor data quality, i.e., where we do not have the i.i.d. data we expect. The use of AI in domains that impact human life (agriculture, climate, health, …) has therefore led to an increased need for trustworthy AI. In sensitive domains such as medicine, where traceability, transparency and interpretability are required, explainability is now even mandatory due to regulatory requirements. One possible step to make AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can (sometimes, of course, not always) bring experience, domain knowledge, and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
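
To make the sensitivity argument tangible, here is a small, self-contained Python sketch (synthetic data, with an FGSM-style step as the perturbation; purely illustrative): a tiny change to each input feature flips the prediction of a trained linear classifier for a point near the decision boundary.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 20))
y = (X @ rng.normal(size=20) > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a sample near the decision boundary (smallest margin).
i = int(np.argmin(np.abs(clf.decision_function(X))))
x = X[i:i + 1]

# FGSM-style step: nudge every feature slightly in the direction that
# pushes the decision score across the boundary.
eps = 0.1
w = clf.coef_.ravel()
direction = -np.sign(clf.decision_function(x)) * np.sign(w)
x_adv = x + eps * direction

print("clean prediction:    ", clf.predict(x)[0])
print("perturbed prediction:", clf.predict(x_adv)[0])
print("max per-feature change:", np.abs(x_adv - x).max())  # = eps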

See the paper here:
https://www.researchgate.net/publication/358693275_The_Next_Frontier_AI_We_Can_Really_Trust

Reference (Harvard JMLR style):

Andreas Holzinger (2021). The Next Frontier: AI We Can Really Trust. In: Kamp, Michael (ed.) Proceedings of the ECML PKDD 2021, CCIS 1524. Cham: Springer Nature, pp. 1–14, doi:10.1007/978-3-030-93736-2_33

Reference (IEEE style):

[1] A. Holzinger, “The Next Frontier: AI We Can Really Trust,” in Proceedings of the ECML PKDD 2021, CCIS 1524, M. Kamp, Ed. Cham: Springer Nature, 2021, pp. 1–14, doi: 10.1007/978-3-030-93736-2_33.

 


Digital Transformation in Smart Farm and Forest Operations

March 1, 2022/in AI News, Science News/by Andreas Holzinger

 

Andreas Holzinger has been appointed full professor (pursuant to §98 UG 2002) for Digital Transformation in Smart Farm and Forest Operations at the University of Natural Resources and Life Sciences Vienna (BOKU) and started his endowed chair position with effect from March 1, 2022. He is currently building a new Human-Centered AI Lab at the BOKU Campus Tulln an der Donau in Lower Austria. The generous support of the Government of Lower Austria is gratefully acknowledged.

https://boku.ac.at/fm/themen/orientierung-und-lageplaene/standort-tulln/birt-newsletter/2022/ausgabe-3-22/mitarbeiter-update

 


Andreas Holzinger elected IFIP Fellow 2021

December 30, 2021/in AI News/by Andreas Holzinger

At the final 60th Jubilee event of the International Federation for Information Processing (IFIP) on December 21, 2021, twelve new IFIP Fellows were presented. IFIP Vice President and Chair of the Fellows Selection Committee, Jan Gulliksen, made the announcements, recognising each for their outstanding technical contributions to the field of information processing:

IFIP News

The new Fellows (see the original entry here: https://www.ifipnews.org/ifip-announces-12-new-fellows/) include:

  • Jean Vanderdonckt, Belgium, for fundamental and applied contributions to model-based user interface development, model-driven engineering of interactive applications and user interface description languages; [Scholar]
  • Ling X Li, USA, for her active involvement in IFIP WG8.9, where her leadership led to high quality research, conferences and journal publications, which contributed significantly to educating ICT professionals;
  • Gerrit van der Veer, Netherlands, for his pioneering work in HCI, developing and teaching new paradigms for the human side of HCI and in promoting international collaboration between HCI scientists and practitioners;
  • Andreas Holzinger, Austria, for his achievements in interactive machine learning with the human-in-the-loop, towards xAI and multimodal causability, advocating a synergistic approach to put the human-in-control of AI; [Scholar]
  • Guy Pujolle, France, for his remarkable pioneering contribution to the development of computer networks, their evolution, their applications, security and to the teaching of the domain; [Scholar]
  • Jacques Sakarovitch, France, for his extraordinary contribution to the methodological research and teaching of theoretical computer science, particularly, automata theory, and for his work in cofounding TC1;
  • Fredi Tröltzsch, Germany, for his outstanding contribution to the theory of optimisation, computations of optimal control for distributed parameter systems with applications to industry, engineering and medical sciences; [Scholar]
  • Jan Pries-Heje, Denmark, for his extensive work in leading research and teaching in the field of information systems, for which he has built a global reputation. Within TC8, his leadership promoted the joint definition of action strategies and nurtured the cohesion of the working groups; [Scholar]
  • Paola Inverardi, Italy, for her significant contributions to software engineering and architecture and for her role as educator and rector of the University of L’Aquila, which she helped to reconstruct after an earthquake; [Scholar]
  • Sushil Jajodia, USA, for his unparalleled technical contributions to cybersecurity with over 500 papers and over 50,000 citations, resulting in seminal papers, patents and a commercial system; [Scholar]
  • Ricardo Augusto Da Luz Reis, Brazil, for his leadership in the field of microelectronics education in Latin America, his research in applied VLSI-SoC design methods and his contribution to building the open scientific community; [Scholar]
  • Pierangela Samarati, Italy, for pioneering and outstanding contributions to research and information sharing in information security, data protection and privacy. [Scholar]

Congratulations to the new IFIP Fellows!

The International Federation for Information Processing (IFIP) is the leading multinational organization in Information & Communications Technologies and Sciences and is recognized by the United Nations. The federation represents IT societies from over 38 countries/regions, covering five continents with a total membership of over half a million. It links more than 3,000 scientists cross-domain from both academia and industry and hosts over 100 Working Groups and 13 Technical Committees.

http://ifip.org/

Andreas Holzinger is the national representative of Austria in Technical Committee TC 12 “Artificial Intelligence”:
https://www.ifiptc12.org/


AI for Good: Explainability and Robustness for Trustworthy AI (ITU Event)

December 18, 2021/in AI News, Lecture, Science News/by Andreas Holzinger

AI for Good Discovery. Trustworthy AI: Explainability and Robustness for Trustworthy AI, ITU Event, Geneva, Switzerland

Talk recorded live on December 16, 2021, 15:00–16:00; see https://www.youtube.com/watch?v=NCajz8h13uU

Today, thanks to advances in statistical machine learning, AI is once again enormously popular. However, two features need to be further improved in the future: a) robustness and b) explainability/interpretability/re-traceability, i.e. the ability to explain why a certain result has been achieved. Disturbances in the input data can have a dramatic impact on the output and lead to completely different results. This is relevant in all critical areas where we suffer from poor data quality, i.e. where we do not have i.i.d. data. Therefore, the use of AI in real-world areas that impact human life (agriculture, climate, forestry, health, …) has led to an increased demand for trustworthy AI. In sensitive areas where re-traceability, transparency, and interpretability are required, explainable AI (XAI) is now even mandatory due to legal requirements. One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can sometimes (of course not always) bring experience and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.

Speaker: Andreas Holzinger, Head of the Human-Centered AI Lab, Institute for Medical Informatics/Statistics, Medizinische Universität Graz
Moderator: Wojciech Samek, Head of the Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute
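
As a concrete illustration of the human-in-the-loop idea from the talk, the following Python sketch simulates the expert with an oracle function and lets the model query labels only for its most uncertain cases (plain uncertainty sampling on synthetic data; all names are illustrative, not from the talk):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
oracle = lambda X: (X @ w_true > 0).astype(int)  # stands in for the human expert

labeled = list(range(20))           # small initial labeled set
pool = list(range(20, len(X)))      # unlabeled pool

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], oracle(X[labeled]))
    # Ask the "expert" about the samples the model is least certain of.
    uncertainty = np.abs(clf.decision_function(X[pool]))
    ask = [pool[j] for j in np.argsort(uncertainty)[:20]]
    labeled += ask
    pool = [i for i in pool if i not in ask]
    acc = (clf.predict(X) == oracle(X)).mean()
    print(f"round {round_}: {len(labeled)} labels, accuracy {acc:.3f}")

The point of the sketch is the interaction pattern, not the model: expensive expert attention is spent only where the learner is uncertain.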

What is AI for Good? The AI for Good series is the leading action-oriented, global & inclusive United Nations platform on AI.

The Summit is organized all year, always online, in Geneva by the ITU with XPRIZE Foundation in partnership with over 35 sister United Nations agencies, Switzerland and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact. Disclaimer: The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.

 

 


Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI

August 25, 2021/in AI News, Interesting Publication/by Andreas Holzinger

Our work on Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI has just been published and can be found here:

https://www.researchgate.net/publication/353403620_Digital_Transformation_for_Sustainable_Development_Goals_SDGs_-_A_Security_Safety_and_Privacy_Perspective_on_AI

Thanks to my co-authors!

The main driver of the digital transformation currently underway is undoubtedly artificial intelligence (AI). The potential of AI to benefit humanity and its environment is undeniably enormous. AI can definitely help find new solutions to the most pressing challenges facing our human society in virtually all areas of life: from agriculture and forest ecosystems that affect our entire planet, to the health of every single human being. However, this article highlights a very different aspect. For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new kinds of unforeseen threats. Therefore, all stakeholders, governments, policy makers, and industry, together with academia, must ensure that AI is developed with these potential threats in mind and that the safety, traceability, transparency, explainability, validity, and verifiability of AI applications in our everyday lives are ensured. It is the responsibility of all stakeholders to ensure the use of trustworthy and ethically reliable AI and to avoid the misuse of AI technologies. Achieving this will require a concerted effort to ensure that AI is always consistent with human values and includes a future that is safe in every way for all people on this planet. In this paper, we describe some of these threats and show that safety, security and explainability are indispensable cross-cutting issues and highlight this with two exemplary selected application areas: smart agriculture and smart health.

Reference to the paper:

Andreas Holzinger, Edgar Weippl, A Min Tjoa & Peter Kieseberg (2021). Digital Transformation for Sustainable Development Goals (SDGs) – a Security, Safety and Privacy Perspective on AI. Springer Lecture Notes in Computer Science, LNCS 12844. Cham: Springer, pp. 1-20, doi:10.1007/978-3-030-84060-0_1.

bibTeX:

@incollection{HolzingerWeipplTjoaKiese:2021:SustainableSecurity,
year = {2021},
author = {Holzinger, Andreas and Weippl, Edgar and Tjoa, A Min and Kieseberg, Peter},
title = {Digital Transformation for Sustainable Development Goals (SDGs) – a Security, Safety and Privacy Perspective on AI},
booktitle = {Springer Lecture Notes in Computer Science, LNCS 12844},
publisher = {Springer},
address = {Cham},
pages = {1-20},
abstract = {The main driver of the digital transformation currently underway is undoubtedly artificial intelligence (AI). The potential of AI to benefit humanity and its environment is undeniably enormous. AI can definitely help find new solutions to the most pressing challenges facing our human society in virtually all areas of life: from agriculture and forest ecosystems that affect our entire planet, to the health of every single human being. However, this article highlights a very different aspect. For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new kinds of unforeseen threats. Therefore, all stakeholders, governments, policy makers, and industry, together with academia, must ensure that AI is developed with these potential threats in mind and that the safety, traceability, transparency, explainability, validity, and verifiability of AI applications in our everyday lives are ensured. It is the responsibility of all stakeholders to ensure the use of trustworthy and ethically reliable AI and to avoid the misuse of AI technologies. Achieving this will require a concerted effort to ensure that AI is always consistent with human values and includes a future that is safe in every way for all people on this planet. In this paper, we describe some of these threats and show that safety, security and explainability are indispensable cross-cutting issues and highlight this with two exemplary selected application areas: smart agriculture and smart health.},
doi = {10.1007/978-3-030-84060-0_1}
}

Causability and Explainability of Artificial Intelligence in Medicine – awarded highly cited paper

March 31, 2021/in AI News, Interesting Publication, Science News/by Andreas Holzinger

Awesome: our paper “Causability and Explainability of Artificial Intelligence in Medicine” has received the Highly Cited Paper award. This means that the paper is in the top 1% in the academic field of computer science. Thanks to the community for this acceptance and appraisal of our work on causability and explainability, a cornerstone for robust, trustworthy AI.

The journal itself is Q1 in two fields, Computer Science, Artificial Intelligence (rank 27/137) and Computer Science, Theory and Methods (rank 12/108), based on the 2019 edition of the Journal Citation Reports.

On Google Scholar it has received 234 citations as of March 30, 2021.

https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1312


Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions

March 9, 2021/in AI News, Interesting Publication, Science News/by Andreas Holzinger

“Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions” has just been published in Knowledge-Based Systems, doi:10.1016/j.knosys.2021.106916.

We propose a novel classification according to aggregation functions of mixed behavior, realised by variability in ordinal sums of conjunctive and disjunctive functions. Consequently, domain experts are empowered to assign only the most important observations regarding the considered attributes. This has the advantage that the variability of the functions provides opportunities for machine learning to learn the best possible option from the data. Moreover, such a solution is comprehensible, reproducible and explainable-per-design to domain experts. In this paper, we discuss the proposed approach with examples and outline the research steps in interactive machine learning with a human-in-the-loop over aggregation functions. Although human experts are not always able to explain their decisions either, they can sometimes bring in experience, contextual understanding and implicit knowledge, which is desirable in certain machine learning tasks and can contribute to the robustness of algorithms. The obtained theoretical results on ordinal sums are discussed and illustrated with examples.
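
As a rough intuition for what such a mixed aggregation can look like, here is a short, purely illustrative Python sketch (my own toy choices, not the construction from the paper): a product t-norm acts conjunctively on [0, 0.5], a probabilistic-sum t-conorm acts disjunctively on [0.5, 1], and the classical ordinal-sum fallback (minimum) is used when the two arguments fall into different subintervals.

def rescale(x, a, b):
    return (x - a) / (b - a)

def ordinal_sum(x, y, split=0.5):
    # Mixed ordinal-sum-style aggregation of x, y in [0, 1] (toy example).
    if x <= split and y <= split:
        # Conjunctive branch: product t-norm, rescaled to [0, split].
        u, v = rescale(x, 0, split), rescale(y, 0, split)
        return split * (u * v)
    if x >= split and y >= split:
        # Disjunctive branch: probabilistic sum, rescaled to [split, 1].
        u, v = rescale(x, split, 1), rescale(y, split, 1)
        return split + (1 - split) * (u + v - u * v)
    return min(x, y)  # fallback across subintervals, as in classical ordinal sums

print(ordinal_sum(0.2, 0.3))  # conjunctive: 0.5 * 0.4 * 0.6 = 0.12
print(ordinal_sum(0.7, 0.8))  # disjunctive: 0.5 + 0.5 * 0.76 = 0.88
print(ordinal_sum(0.3, 0.9))  # mixed: min(0.3, 0.9) = 0.3

The split point and the particular t-norm/t-conorm are free parameters here; the variability of such choices is exactly what, per the paper, machine learning can exploit to find the best option from the data.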

The Q1 journal Knowledge-Based Systems is ranked No. 15 out of 138 in the field of Computer Science, Artificial Intelligence, with an SCI impact factor of 5.921; see: https://www.journals.elsevier.com/knowledge-based-systems

Miroslav Hudec, Erika Minarikova, Radko Mesiar, Anna Saranti & Andreas Holzinger (2021). Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions. Knowledge-Based Systems, doi:10.1016/j.knosys.2021.106916.

 


Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI

March 4, 2021/in AI News, Interesting Publication, Science News/by Andreas Holzinger

Our paper “Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI”, Information Fusion, 71, (7), 28-37, doi:10.1016/j.inffus.2021.01.008, has just been published. Our central hypothesis is that using conceptual knowledge as a guiding model of reality will help to train more explainable, more robust and less biased machine learning models, ideally able to learn from fewer data. One important aspect in the medical domain is that various modalities contribute to one single result. Our main question is: “How can we construct a multi-modal feature representation space (spanning images, text, genomics data) using knowledge bases as an initial connector for the development of novel explanation interface techniques?” In this paper we argue for using Graph Neural Networks as a method of choice, enabling information fusion for multi-modal causability (causability, not to be confused with causality, is the measurable extent to which an explanation to a human expert achieves a specified level of causal understanding). We hope that this is a useful contribution to the international scientific community.
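
To sketch what such a construction could look like in code, here is a small, purely conceptual Python example (random stand-in matrices, made-up sizes; not the model from the paper): three modality embeddings are projected into a shared space and fused by one message-passing step over a tiny graph whose edges play the role of the knowledge-base connector.

import numpy as np

rng = np.random.default_rng(0)

# Modality-specific features of one patient case (made-up sizes).
img = rng.normal(size=32)   # e.g., imaging descriptor
txt = rng.normal(size=48)   # e.g., report-text embedding
gen = rng.normal(size=64)   # e.g., genomics profile

d = 16  # shared representation space

def project(x):
    # Random linear projection into the shared d-dimensional space
    # (stand-in for a learned modality-specific encoder).
    P = rng.normal(size=(d, x.size)) / np.sqrt(x.size)
    return P @ x

H = np.stack([project(img), project(txt), project(gen)])  # (3, d)

# Knowledge base as the initial connector: which modality nodes exchange
# information (here: fully connected with self-loops, row-normalized).
A = np.ones((3, 3))
A = A / A.sum(axis=1, keepdims=True)

W = rng.normal(size=(d, d)) / np.sqrt(d)
H_fused = np.tanh(A @ H @ W)       # one message-passing / fusion step
case_repr = H_fused.mean(axis=0)   # pooled multi-modal representation
print(case_repr.shape)             # (16,), input to a downstream classifier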

The Q1 journal Information Fusion is ranked No. 2 out of 138 in the field of Computer Science, Artificial Intelligence, with an SCI impact factor of 13.669; see: https://www.sciencedirect.com/journal/information-fusion


Call for Papers: xxAI – Beyond Explainable AI

January 20, 2021/in AI News, call for papers/by Andreas Holzinger

From explainable AI to responsible AI
