Foundations of Human-Aware Explanations for Sequential Decision-Making Problems


Abstract

Recent breakthroughs in AI have brought the possibility of developing and deploying practical AI systems within our grasp. The growing realization that people from all walks of life may soon be using and working with these systems has spurred considerable interest in ensuring that AI systems can work and collaborate efficiently and effectively with their intended users. Chief among the efforts in this direction has been the pursuit of imbuing these agents with the ability to provide intuitive and useful explanations of their decisions and actions to end users. In this dissertation, I will describe my work in the area of explaining sequential decision-making problems, and I will frame the discussion within a broader framework for understanding and analyzing XAI. The work herein tackles many of the core challenges of explaining automated decisions to users, including (1) techniques to address the asymmetry in knowledge between the user and the system, (2) techniques to address the asymmetry in inferential capabilities, and (3) techniques to address vocabulary mismatch. The dissertation will also describe my work on generating interpretable behavior and on policy summarization. I will conclude the dissertation by using the framework of human-aware explanation as a lens through which to analyze and understand the current landscape of explainable planning.


Thesis Committee

Subbarao Kambhampati, Chair | Arizona State University

Been Kim | Google Brain

David Smith | PS Research

Siddharth Srivastava | Arizona State University

Yu Zhang | Arizona State University


Defense

Slides