Posts

AI for Good: Explainability and Robustness for Trustworthy AI (ITU Event)

AI for Good Discovery – Trustworthy AI: Explainability and Robustness for Trustworthy AI, ITU Event, Geneva, CH [YouTube]

Talk recorded live on 16.12.2021, 15:00 – 16:00 – see https://www.youtube.com/watch?v=NCajz8h13uU

Today, thanks to advances in statistical machine learning, AI is once again enormously popular. However, two features need further improvement in the future: (a) robustness and (b) explainability/interpretability/re-traceability, i.e. the ability to explain why a certain result was achieved. Disturbances in the input data can have a dramatic impact on the output and lead to completely different results. This is relevant in all critical areas where we suffer from poor data quality, i.e. where we do not have i.i.d. data. Therefore, the use of AI in real-world domains that impact human life (agriculture, climate, forestry, health, …) has led to an increased demand for trustworthy AI. In sensitive areas where re-traceability, transparency, and interpretability are required, explainable AI (XAI) is now even mandatory due to legal requirements. One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can sometimes – of course not always – bring experience and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.

Speaker: Andreas Holzinger, Head of Human-Centered AI Lab, Institute for Medical Informatics/Statistics, Medizinische Universität Graz

Moderator: Wojciech Samek, Head of Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute
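As a minimal sketch of the robustness problem described above (my illustration, not material from the talk): a linear classifier on toy data, where a small, targeted disturbance of one input flips the prediction, and the learned weights serve as a crude “why”. The synthetic dataset, the scikit-learn model, and the FGSM-style smallest-norm step are all illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two Gaussian blobs as a toy binary classification task (illustrative data).
X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
               rng.normal(+1.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Pick the input closest to the decision boundary and shift it against the
# weight vector just far enough to cross it (an FGSM-style, smallest-norm step).
idx = np.argmin(np.abs(X @ w + b))
x = X[idx]
margin = w @ x + b                     # sign determines the predicted class
delta = -1.1 * margin * w / (w @ w)    # overshoot the boundary by 10%
x_adv = x + delta

print("original prediction: ", clf.predict(x.reshape(1, -1))[0])
print("perturbed prediction:", clf.predict(x_adv.reshape(1, -1))[0])
print("norm of disturbance: ", np.linalg.norm(delta))

# A crude "why" for a linear model: per-feature contributions to the score.
print("feature contributions:", w * x)

The disturbance here is tiny relative to the data, yet the output changes completely; adversarial attacks on deep networks exploit the same effect in higher dimensions, which is exactly the fragility the talk points to.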

Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI

Our work on Digital Transformation for Sustainable Development Goals (SDGs) – A Security, Safety and Privacy Perspective on AI has just been published and can be found here:

https://www.researchgate.net/publication/353403620_Digital_Transformation_for_Sustainable_Development_Goals_SDGs_-_A_Security_Safety_and_Privacy_Perspective_on_AI

Thanks to my co-authors!

The main driver of the digital transformation currently underway is undoubtedly artificial intelligence (AI). The potential of AI to benefit humanity and its environment is undeniably enormous. AI can definitely help find new solutions to the most pressing challenges facing our human society in virtually all areas of life: from agriculture and forest ecosystems that affect our entire planet, to the health of every single human being. However, this article highlights a very different aspect. For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new kinds of unforeseen threats. Therefore, all stakeholders, governments, policy makers, and industry, together with academia, must ensure that AI is developed with these potential threats in mind and that the safety, traceability, transparency, explainability, validity, and verifiability of AI applications in our everyday lives are ensured. It is the responsibility of all stakeholders to ensure the use of trustworthy and ethically reliable AI and to avoid the misuse of AI technologies. Achieving this will require a concerted effort to ensure that AI is always consistent with human values and includes a future that is safe in every way for all people on this planet. In this paper, we describe some of these threats and show that safety, security and explainability are indispensable cross-cutting issues and highlight this with two exemplary selected application areas: smart agriculture and smart health.
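The paper is a perspective article, but one concrete technique behind the privacy dimension it discusses can be sketched in a few lines: the Laplace mechanism from differential privacy, applied to a hypothetical smart-health counting query. This is my illustrative example in Python, not code or a method from the paper.

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # Release true_value with Laplace noise calibrated to sensitivity/epsilon,
    # the standard construction for epsilon-differential privacy.
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(42)

# Hypothetical smart-health query: how many patients have condition X?
true_count = 127
# A counting query changes by at most 1 when one record is added or removed.
sensitivity = 1.0
epsilon = 0.5  # smaller epsilon = stronger privacy, noisier answer

noisy_count = laplace_mechanism(true_count, sensitivity, epsilon, rng)
print(f"true count: {true_count}, privately released count: {noisy_count:.1f}")

The trade-off made explicit by epsilon – utility of the released statistic versus protection of the individual record – is one instance of the cross-cutting security, safety and privacy issues the paper argues must be designed in from the start.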

Reference to the paper:

Andreas Holzinger, Edgar Weippl, A Min Tjoa & Peter Kieseberg (2021). Digital Transformation for Sustainable Development Goals (SDGs) – a Security, Safety and Privacy Perspective on AI. Springer Lecture Notes in Computer Science, LNCS 12844. Cham: Springer, pp. 1-20, doi:10.1007/978-3-030-84060-0_1.

BibTeX:

@incollection{HolzingerWeipplTjoaKiese:2021:SustainableSecurity,
  year = {2021},
  author = {Holzinger, Andreas and Weippl, Edgar and Tjoa, A Min and Kieseberg, Peter},
  title = {Digital Transformation for Sustainable Development Goals (SDGs) – a Security, Safety and Privacy Perspective on AI},
  booktitle = {Springer Lecture Notes in Computer Science, LNCS 12844},
  publisher = {Springer},
  address = {Cham},
  pages = {1-20},
  abstract = {The main driver of the digital transformation currently underway is undoubtedly artificial intelligence (AI). The potential of AI to benefit humanity and its environment is undeniably enormous. AI can definitely help find new solutions to the most pressing challenges facing our human society in virtually all areas of life: from agriculture and forest ecosystems that affect our entire planet, to the health of every single human being. However, this article highlights a very different aspect. For all its benefits, the large-scale adoption of AI technologies also holds enormous and unimagined potential for new kinds of unforeseen threats. Therefore, all stakeholders, governments, policy makers, and industry, together with academia, must ensure that AI is developed with these potential threats in mind and that the safety, traceability, transparency, explainability, validity, and verifiability of AI applications in our everyday lives are ensured. It is the responsibility of all stakeholders to ensure the use of trustworthy and ethically reliable AI and to avoid the misuse of AI technologies. Achieving this will require a concerted effort to ensure that AI is always consistent with human values and includes a future that is safe in every way for all people on this planet. In this paper, we describe some of these threats and show that safety, security and explainability are indispensable cross-cutting issues and highlight this with two exemplary selected application areas: smart agriculture and smart health.},
  doi = {10.1007/978-3-030-84060-0_1}
}