We call for contributions that focus on, but are not limited to, the following topics with cross-domain applications:

– Explanations beyond DNN classifiers: random forests, unsupervised learning, reinforcement learning
– Explanations beyond heat maps: structured explanations, Q/A and dialogue systems, human-in-the-loop
– Explanations beyond explanation: improving ML models and algorithms, verifying ML, gaining insights

including, but not limited to (alphabetically, not prioritized):

  • Adversarial attacks and explainability
  • Believability and manipulability of explanations (especially in contexts where they must meet a legal evidence standard)
  • Explainability, Causality, Causability (Causa-bi-lity is not a typo, see definitions below *)
  • Causability (the measurable extent to which an explanation to a human achieves a specified level of causal understanding; see Holzinger)
  • Counterfactual explanations
  • Dialogue Systems for xAI
  • Graph Neural Networks
  • Human-AI interfaces
  • Human-centered AI and responsibility
  • Human interpretability
  • Interactive Machine Learning with the human-in-the-loop
  • Interpretable Models (vs. post-hoc explanations)
  • Intelligent User Interfaces
  • Knowledge Graphs
  • Multi-Classifier Systems
  • Ontologies and xAI
  • Question/Answering Dialogue Systems
  • Explainability and Recommender systems

*) Please distinguish:

  • Explainability := technically highlights the decision-relevant parts of machine representations and machine models, i.e., the parts which contributed to model accuracy in training or to a specific prediction. It does NOT refer to a human model!
  • Causality := the relationship between cause and effect in the sense of Pearl
  • Causability := the measurable extent to which an explanation to a human achieves a specified level of causal understanding (see Holzinger). It DOES refer to a human model!