
News & Events

PEER Workshop accepted at the 28th European Conference on Artificial Intelligence (October 25-30, 2025, Bologna, Italy)

Author
PEER Team

Reading time
00:46

Date
Oct 25, 2025

While explainable artificial intelligence (XAI) has grown massively popular and impactful over the past several years, becoming an integral part of all major AI venues, progress in the field is still hindered, to some degree, by a lack of agreed-upon evaluation methods and metrics. Many articles present only anecdotal evidence, and the large variation in explanation techniques and application domains makes it challenging to define, quantify, and compare the relevant performance criteria for XAI. The result is a lack of standardized baselines and an established state of the art, making the contributions of newly proposed XAI methods difficult to assess. The discussion on how to evaluate explainability and interpretability, whether through user studies or with computational proxy measures, is ongoing.

In recent years, there has also been a growing interest in data- and AI-driven solutions for tackling complex decision-making problems in practice, including NP-hard combinatorial optimization problems such as nurse scheduling or blood matching, as well as sequential decision-making tasks such as online routing or warehouse order picking. Explaining solutions and decisions for such reinforcement learning, planning, or optimization problems introduces additional layers of complexity compared to most of the work in XAI, which focuses on explaining the input-output mappings of "black box" models like neural classifiers.

Solutions in these settings may, for example, feature a complex inner structure or a temporal dimension. Current AI approaches to complex decision-making problems still focus mainly on optimization performance, while comparatively little attention has been paid to explainability, and the gap is even wider in research on evaluation metrics and methods for explainability in this context. The "Evaluating Explainable AI and Complex Decision-Making (EXCD)" workshop at ECAI 2025 in Bologna will bring together researchers interested in XAI in general, and in AI planning, reinforcement learning, and data-driven optimization in particular, to discuss recent developments in XAI evaluation and collaboratively develop a roadmap to address this gap.
