AAAI-21 – Explainable Agency in Artificial Intelligence Workshop

This year I had the great opportunity to co-organize the AAAI-21 Workshop on Explainable Agency in Artificial Intelligence together with Prashan Madumal, David W. Aha, and Rosina Weber.

The workshop was held virtually on February 8–9, 2021. We aimed to bring together researchers and practitioners from diverse backgrounds to share challenges, discuss new directions, and present recent research on explainable agency.

Explainable agency has received substantial but disjoint attention in different subareas of AI, including machine learning, planning, intelligent agents, and several others. There has been limited interaction among these subareas on explainable agency, and even less work has focused on promoting and sharing sound designs, methods, and measures for evaluating the effectiveness of explanations (generated by AI systems) in human subject studies. This has led to uneven development of explainable agency, and of its evaluation, across AI subareas. Our aim was to address this by encouraging a shared definition of explainable agency and by increasing awareness of work on explainable agency throughout the AI research community and in related disciplines (e.g., human factors, human-computer interaction, and cognitive science).

The contributed talks and presentations covered different aspects of explainable agency: explainable machine learning models, counterfactuals, feature attribution, user-centered aspects, transparency, fairness, and evaluation methods. Several methods and algorithms for explaining the reasoning of AI agents were proposed, including visualization methods (e.g., saliency maps) for Convolutional Neural Networks (CNNs) and deep reinforcement learning agents, a querying algorithm that generates interrogation policies for Mixtures of Hidden Markov Models (MHMMs), summarization techniques for sampling-based search algorithms (e.g., Monte-Carlo Tree Search), and explanations for competing answers in Visual Question Answering (VQA) systems. Furthermore, techniques for designing and evaluating explainable AI systems were presented, such as measurement domains (e.g., for qualitative investigations), architectural features of explanation systems, and explanation formats.
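As a small illustration of the first class of techniques mentioned above, here is a minimal sketch of gradient-based saliency for a CNN classifier. It is not taken from any of the workshop papers; it assumes PyTorch and torchvision, and the model and input are placeholders.

```python
# Minimal gradient-based saliency sketch (illustrative only, not from a workshop paper).
import torch
import torchvision.models as models

# Any differentiable image classifier works; a pretrained ResNet-18 is a placeholder choice.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder input; in practice this would be a normalized image tensor.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the top class score to the input pixels.
scores = model(image)
scores[0, scores.argmax()].backward()

# Saliency map: maximum absolute input gradient across the color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)
```

The resulting map highlights which input pixels most influence the predicted class, which is the kind of visual explanation several contributed talks discussed.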

In addition to the contributed talks and presentations, the workshop featured four invited speakers who are experts in their fields. Ofra Amir, Professor of Industrial Engineering and Management at the Technion, introduced the topic of agent policy summarization for describing an agent’s behavior to people. Timothy Miller, Professor of Computing and Information Systems at the University of Melbourne, discussed the scope of explainable AI, its relation to the social sciences, and explainable agency in model-free reinforcement learning. Margaret Burnett, Professor of Computer Science at Oregon State University, proposed personas for identifying the goals of explainable AI (e.g., diversity of thought, appropriate trust, informed decisions). Pat Langley, Director of the Institute for the Study of Learning and Expertise (ISLE), presented the concepts of explainable, normative, and justified agency, and discussed the definition and representation of explanations as well as the advantages of designing and constructing justifiable agents.

This workshop included two panel discussions. The first panel (with panelists Been Kim – Google Brain, Freddy Lecue – CortAIx and Inria, and Vera Liao – IBM) focused on lessons learned and insights gained from deploying XAI techniques, while the second panel (with panelists Denise Agosto – Drexel University, Bertram Malle – Brown University, and Eric Vorm – US Naval Research Laboratory) focused on XAI from a cognitive science perspective. There was some consensus on the continuous, co-adaptive nature of the explanatory process and on the idea that explanation can be modeled as a form of exploration. The industry panel treated explainability as a property of the system and described tools that can accurately and rigorously explain a system’s model (i.e., interpretable models). In contrast, research in human-computer interaction, human factors, and cognitive science focuses on human perception of the information provided by a system and stresses the importance of shaping an explanation for its target audience. In this context, an explanation can allow a receiver to understand, criticize, correct, and improve a system.

You can find the proceedings and recordings of this workshop on the workshop’s website.
