Elsevier Information Fusion (INFFUS), rank 3/134 in the field of Artificial Intelligence, IF=10.716, Q1

This special issue seeks original research papers reporting new findings on Explainable AI (XAI) and Responsible AI (RAI).
The list of topics for this special issue includes, but is not limited to:

  • Post-hoc explainability techniques for AI models
  • Neural-symbolic reasoning to explain AI models
  • Fuzzy rule-based systems for explaining AI models
  • Counterfactual explanations of AI models
  • Explainability and data fusion
  • Knowledge representation for XAI
  • Human-centered XAI
  • Visual explanations for AI models
  • Contrastive explanation methods for XAI
  • Natural language generation for XAI
  • Interpretability of other ML tasks (e.g., ranking, recommendation, reinforcement learning)
  • Hybrid transparent/black-box modeling
  • Quantitative evaluation (metrics) of the interpretability of AI models
  • Quantitative evaluation (metrics) of the quality of explanations
  • XAI and theory-guided data science


  • Privacy-aware methods for AI models
  • Accountability of decisions made by AI models
  • Bias and fairness in AI models
  • Methodologies for the ethical and responsible use of AI-based models
  • AI models’ output confidence estimation
  • Adversarial analysis for AI security (attack detection, explanation and defense)
  • Causal reasoning, causal explanations, and causal inference

See the journal's Guide for Authors for specific submission details:

See this overview article [1] for more information:

[1] A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, R. Chatila, F. Herrera, "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", Information Fusion, vol. 58, pp. 82–115, June 2020.