AAAI-22 – Explainable Agency in Artificial Intelligence Workshop

As in the previous year, I co-organized the AAAI-22 Workshop on Explainable Agency in Artificial Intelligence. Despite the time that reviews and organizational tasks took, working together with Prashan Madumal, David W. Aha, and Mark T. Keane was a very rewarding experience for me.

The AAAI-22 Workshop on Explainable Agency in Artificial Intelligence was held virtually on 28 February and 1 March 2022. As in the previous year, the workshop’s objective was to discuss the topic of explainable agency and to bring together researchers and practitioners from diverse backgrounds to share challenges, discuss new directions, and present recent research in the field.

The presentations focused on different aspects of explainable agency, including counterfactuals, fairness, human evaluations, and iterative and active communication among agents. Several methods were proposed for generating explanations of AI agents’ behaviour. These included an interactive Constraint-Based Reasoning (C-BR) system for generating counterfactual explanations, an Inverse Reinforcement Learning (IRL) approach for modelling human knowledge and providing relevant counterfactual examples, algorithms for learning interpretable decision-tree policies for multi-agent systems, and methods for estimating and communicating the uncertainty of Deep Learning (DL) agents. Other presentations reported on human-subject studies, investigated human annotations for interpretable Natural Language Processing (NLP), or measured how well humans could predict an agent’s task performance after receiving a description of the capabilities of a black-box AI agent. In addition, a live demonstration was given of an explainable AI system that performs human-machine teaming in a real-time strategy game.

Paper presentations were interspersed with invited talks and panel discussions. The workshop included three invited speakers who are experts in their fields. Cynthia Rudin, Professor of Computer Science and Engineering at Duke University, leader of Duke’s Interpretable Machine Learning Lab, and recipient of the AAAI Squirrel AI Award, described her group’s work on inherently interpretable models, more specifically decision-tree splitting criteria and global tree-optimization methods such as the hierarchical objective lower bound, the leaf bound, and the equivalent points bound. Chenhao Tan, Assistant Professor in the Department of Computer Science at the University of Chicago and leader of the Chicago Human+AI Lab (CHAI), discussed his work on how humans interact with explanations of AI classifiers and how such systems should be evaluated in user studies. Eric Ragan, Assistant Professor at the University of Florida and leader of the Interactive Data and Immersive Environments (Indie) Lab, presented work on how humans provide explanations for sequential decision-making tasks and on how people perceive intelligent systems that rely on human explanations to improve their performance.

This workshop also included two panel discussions. The first panel, on Interactive Explainability, included panellists Cristina Conati (University of British Columbia), Mark Neerincx (TU Delft), and Nava Tintarev (Maastricht University). It focused on human-agent interaction in explanation scenarios and on how best to approach this problem in order to build interactive explainable agents. The second panel, on Explainability and Causality, included panellists Subbarao Kambhampati (Arizona State University), Prashan Madumal (University of Melbourne), and Laurie Paul (Yale University). It discussed associational and causal modelling methods for explainable agency, the benefits and limitations of research employing causal methods, the definition of causal explanations in sequential decision-making agents, and the difference between having causal knowledge and being able to transfer that knowledge through explanations. Approaches that derive explanations from observational data and from data collected through direct interaction with the environment were also examined through the lens of the three levels of the causal hierarchy (Pearl, 2019).

Both panels highlighted that the recent literature on explainability has forged ahead without drawing sufficiently on insights gained in other areas of AI with a longer track record on these problems (e.g., early expert systems and planning research on explanations, case-based explanation, intelligent tutoring systems, and recommender systems). The panellists also argued that current algorithmic work on explainability rarely includes informative user studies, which impedes downstream deployment in real-world applications.

The workshop ended with an analysis of the presented papers and a discussion of the lessons learned from the invited talks and panels. The discussion also highlighted the need for a unified evaluation framework for explainable agency.

You can find the proceedings of this workshop on the workshop’s website.
