Austrian Science Fund FWF Project on Explainable-AI granted

I was just granted a basic research project on explainability by the Austrian Science Fund.

Talk at First Austrian IFIP Forum 2019, May 8-9

In the context of the First Austrian IFIP Forum, and in my role as Austrian TC 12 representative and WG 12.9 working group member, I gave a talk on “From Machine Learning to Explainable AI”.

Explainable AI – from local explanations to global understanding

A new publication by the Lee Lab on explanations for tree-based ensemble predictions, which provides an exact solution that guarantees desirable explanation properties for global understanding.

Talk on explainability at the Austrian Council on Robotics and Artificial Intelligence

On Friday, May 3, 2019, I had the great honor of giving an invited talk on explainable AI to the Austrian Council on Robotics and Artificial Intelligence …

The EU Guidelines for Trustworthy AI include transparent machine learning

The guidelines of the European Union on building trust in human-centered AI encompass 7 key requirements: Human agency and oversight; Robustness and safety; Privacy; Transparency; Fairness; Societal well-being; Accountability.

First Austrian IFIP-Forum “AI and future society: The third wave of AI”

In my role as Austrian Representative of IFIP TC 12 (Artificial Intelligence), I would like to draw your attention to our First Austrian IFIP Forum, “AI and future society”, Wednesday, May 8th – Thursday, May 9th, 2019, in Vienna.