With Tathagata Chakraborti, Rebecca Eifler, Joerg Hoffmann, Benjamin Krarup, Alan Lindsay, Sarath Sreedharan, and Stylianos Loukas Vasileiou, I am co-organizing the Workshop on Explainable AI Planning (XAIP).
As artificial intelligence (AI) is increasingly adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. This is partly to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans delegate greater competence and responsibility to such systems. The challenge is to find effective ways to characterise, and to communicate, the foundations of AI-driven behaviour, when the algorithms and the knowledge on which those algorithms operate are far from transparent to humans. While XAI at large is primarily concerned with black-box, learning-based approaches, model-based approaches are well suited (arguably better suited) for explanation, and Explainable AI Planning (XAIP) can play an important role in addressing complex decision-making procedures.
