Research Interests
Human-AI Interaction - Planning for human-robot teams and decision support, explicable planning, explanation generation for humans in the loop, and value alignment.
XAI - Explanation Generation for Sequential Decision-Making Problems.
Planning and Learning - Combining planning and learning methods for more effective long-term sequential decision-making.
Model-Lite Planning - Representation and learning of incomplete models for plan generation, recognition and recommendation.
Books
Dissertation
Journal Articles
Conference Papers
Expectation Alignment: Handling Reward Misspecification in the Presence of Expectation Mismatch. M. Mechergui, S. Sreedharan. NeurIPS 2024.
HELP! Providing Proactive Support in the Presence of Knowledge Asymmetry. T. Caglar, S. Sreedharan. AAMAS 2024.
Can LLMs Fix Issues with Reasoning Models? Towards More Likely Models for AI Planning. T. Caglar, S. Belhaj, M. Katz, T. Chakraborti, S. Sreedharan. AAAI 2024.
Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI. M. Mechergui, S. Sreedharan. AAAI 2024.
Optimistic Exploration in Reinforcement Learning Using Symbolic Model Estimates. S. Sreedharan, M. Katz. NeurIPS 2023.
On the Planning Abilities of Large Language Models -- A Critical Investigation. K. Valmeekam, M. Marquez, S. Sreedharan, S. Kambhampati. NeurIPS 2023. (Accepted as Spotlight)
Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning. L. Guan*, K. Valmeekam*, S. Sreedharan, S. Kambhampati. NeurIPS 2023.
PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change. K. Valmeekam, M. Marquez, A. Olmo, S. Sreedharan, S. Kambhampati. NeurIPS 2023. (Benchmark Track).
Human-Aware AI – A Foundational Framework for Human-AI Interaction. S. Sreedharan. AAAI 2023 (New Faculty Track).
Generalizing Action Justification and Causal Links to Policies. S. Sreedharan, C. Muise, S. Kambhampati. ICAPS 2023.
Planning for Attacker Entrapment in Adversarial Settings. B. Cates, A. Kulkarni, S. Sreedharan. ICAPS 2023.
Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI. M. Mechergui, S. Sreedharan. AAMAS 2023 (Extended Abstract).
Trust-Aware Planning: Modeling Trust Evolution in Iterated Human-Robot Interaction. Z. Zahedi, M. Verma, S. Sreedharan, S. Kambhampati. HRI 2023.
Explicable or Optimal: Trust Aware Planning in Iterated Human Robot Interaction. Z. Zahedi, S. Sreedharan, K. Valmeekam, S. Kambhampati. Accepted as a stand-alone video at ICRA 2023.
Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity. L. Guan*, S. Sreedharan*, and S. Kambhampati. ICML 2022.
Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations. S. Sreedharan, U. Soni, M. Verma, S. Srivastava, and S. Kambhampati. ICLR 2022.
On the Computational Complexity of Model Reconciliation. S. Sreedharan, P. Bercher, and S. Kambhampati. IJCAI 2022.
RADAR-X: An Interactive Mixed Initiative Planning Interface Pairing Contrastive Explanations and Revised Plan Suggestions. K. Valmeekam, S. Sreedharan, S. Sengupta, S. Kambhampati. ICAPS 2022.
Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems. S. Kambhampati, S. Sreedharan, M. Verma, Y. Zha, and L. Guan. AAAI 2022 (Blue Sky Track).
Not all users are the same: Providing personalized explanations for sequential decision making problems. U. Soni, S. Sreedharan, and S. Kambhampati. IROS 2021.
A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction. S. Sreedharan, A. Kulkarni, D. Smith, and S. Kambhampati. IJCAI 2021 (Survey Track).
Designing Environments Conducive to Interpretable Robot Behavior. A. Kulkarni*, S. Sreedharan*, S. Keren, T. Chakraborti, D. Smith, S. Kambhampati. IROS 2020.
The Emerging Landscape of Explainable AI Planning and Decision Making. T. Chakraborti*, S. Sreedharan*, S. Kambhampati. IJCAI 2020.
TLdR: Policy Summarization for Factored SSP Problems Using Temporal Abstractions. S. Sreedharan, S. Srivastava, and S. Kambhampati. ICAPS 2020.
D3WA+: A Case Study of XAIP in a Model Acquisition Task. S. Sreedharan*, T. Chakraborti*, C. Muise, Y. Khazaeni, and S. Kambhampati. ICAPS 2020.
Expectation-Aware Planning: A Unifying Framework for Synthesizing and Executing Self-Explaining Plans for Human-Aware Planning. S. Sreedharan, T. Chakraborti, C. Muise, and S. Kambhampati. AAAI 2020.
Model-Free Model Reconciliation. S. Sreedharan, A. Olmo, A. Mishra and S. Kambhampati. IJCAI 2019.
Why Can't You Do That HAL? Explaining Unsolvability of Planning Tasks. S. Sreedharan, S. Srivastava, D. Smith, and S. Kambhampati. IJCAI 2019.
Balancing Explicability and Explanations for Human-Aware Planning. T. Chakraborti*, S. Sreedharan*, S. Kambhampati. IJCAI 2019.
CAP: A Decision Support System for Crew Scheduling using Automated Planning. A. Mishra, S. Sengupta, S. Sreedharan, T. Chakraborti and S. Kambhampati. NDM 2019.
Plan Explanations as Model Reconciliation – An Empirical Study. T. Chakraborti, S. Sreedharan, S. Grover and S. Kambhampati. HRI 2019.
Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior. T. Chakraborti, A. Kulkarni, S. Sreedharan, D. Smith, S. Kambhampati. ICAPS 2019.
Projection-Aware Task Planning and Execution for Human-in-the-Loop Operation of Robots in a Mixed-Reality Workspace. T. Chakraborti, S. Sreedharan, A. Kulkarni, and S. Kambhampati. IROS 2018.
Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations. S. Sreedharan, S. Srivastava, and S. Kambhampati. IJCAI 2018.
Balancing Explicability and Explanations: Emergent Behaviors in Human-Aware Planning. T. Chakraborti*, S. Sreedharan*, and S. Kambhampati. AAMAS 2018 (Extended Abstract).
Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation. S. Sreedharan*, T. Chakraborti*, S. Kambhampati. ICAPS 2018.
Plan explanations as model reconciliation: Moving beyond explanation as soliloquy. T. Chakraborti*, S. Sreedharan*, S. Kambhampati, Y. Zhang. IJCAI 2017.
Plan Explicability and Predictability for Robot Task Planning. Y. Zhang, S. Sreedharan, A. Kulkarni, T. Chakraborti, H. H. Zhuo, S. Kambhampati. ICRA 2017.
Compliant Conditions for Polynomial Time Approximation of Operator Counts. T. Chakraborti, S. Sreedharan, S. Sengupta, T. K. Satish, and S. Kambhampati. Symposium on Combinatorial Search (SOCS) 2016.
A Formal Analysis of Required Cooperation in Multi-Agent Planning. Y. Zhang, S. Sreedharan and S. Kambhampati. ICAPS 2016.
Capability Models and Their Application in Multi-Agent Planning. Y. Zhang, S. Sreedharan and S. Kambhampati. AAMAS 2015.
Workshop Papers (Refereed)
A Mental-Model Centric Landscape of Human-AI Symbiosis. Z. Zahedi, S. Sreedharan, S. Kambhampati. R2HCAI Workshop, AAAI 2023.
Revisiting Value Alignment Through the Lens of Human-Aware AI. S. Sreedharan and S. Kambhampati. Virtual Workshop on Human-Centered AI, NeurIPS 2022.
Towards customizable reinforcement learning agents: Enabling preference specification through online vocabulary expansion. U. Soni, S. Sreedharan, M. Verma, L. Guan, M. Marquez, and S. Kambhampati. Workshop on Human in the Loop Learning, NeurIPS 2022.
Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). K. Valmeekam*, A. Olmo*, S. Sreedharan, and S. Kambhampati. Workshop on Foundation Models for Decision Making, NeurIPS 2022.
Why Did You Do That? Generalizing Causal Link Explanations to Fully Observable Non-Deterministic Planning Problems. S. Sreedharan, C. Muise, and S. Kambhampati. ICAPS Workshop on Explainable AI Planning (XAIP), 2022.
Leveraging PDDL to Make Inscrutable Agents Interpretable: A Case for Post Hoc Symbolic Explanations for Sequential Decision-Making Problems. S. Sreedharan and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2021.
Trust-Aware Planning: Modeling Trust Evolution in Longitudinal Human-Robot Interaction. Z. Zahedi, M. Verma, S. Sreedharan, and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2021.
GPT3-to-plan: Extracting plans from text using GPT-3. A. Olmo, S. Sreedharan, and S. Kambhampati. ICAPS Workshop on Planning for Financial Services (FinPlan), 2021.
A Bayesian Account of Measures of Interpretability in Human-AI Interaction. S. Sreedharan, A. Kulkarni, T. Chakraborti, D. Smith, and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2020.
RADAR-X: An Interactive Interface Pairing Contrastive Explanations with Revised Plan Suggestions. K. Valmeekam, S. Sreedharan, S. Sengupta, and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2020.
Explainable Composition of Aggregated Assistants. S. Sreedharan, T. Chakraborti, Y. Rizk, and Y. Khazaeni. ICAPS Workshop on Explainable Planning (XAIP), 2020.
A General Framework for Synthesizing and Executing Self-Explaining Plans for Human-AI Interaction. S. Sreedharan, T. Chakraborti, C. Muise, and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2019.
Design for Interpretability. A. Kulkarni*, S. Sreedharan*, S. Keren, T. Chakraborti, D. Smith, S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2019.
Plan Explanation Through Search in an Abstract Model Space. S. Sreedharan, M. Madhusoodanan, S. Srivastava, and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2018.
Human-Aware Planning Revisited: A Tale of Three Models. T. Chakraborti, S. Sreedharan, and S. Kambhampati. ICAPS Workshop on Explainable Planning (XAIP), 2018.
Balancing Explicability and Explanation in Human-Aware Planning. S. Sreedharan*, T. Chakraborti*, and S. Kambhampati. AAAI Fall Symposium 2017.
Explanations as Model Reconciliation - A Multi-Agent Perspective. S. Sreedharan*, T. Chakraborti*, and S. Kambhampati. AAAI Fall Symposium 2017.
Alternative Modes of Interaction in Proximal Human-in-the-Loop Operation of Robots. T. Chakraborti, S. Sreedharan, A. Kulkarni, and S. Kambhampati. ICAPS 2017 UISP Workshop and ICAPS 2017 System Demos and Exhibits.
Plan Explicability and Predictability for Robot Task Planning. Y. Zhang, S. Sreedharan, A. Kulkarni, T. Chakraborti, H. H. Zhuo, and S. Kambhampati. RSS 2016 Workshop on Planning for Human-Robot Interaction.