Lin Guan
lguan9 [at] asu [dot] edu

I'm a 4th-year Ph.D. student in Computer Science at Arizona State University. I work in the Yochan Lab (AI lab), advised by Dr. Subbarao Kambhampati. My research interests lie at the intersection of machine learning (especially reinforcement learning from human feedback and LLMs for task planning) and human-AI interaction. Specifically, I am working on:

(a) Developing algorithms that enable human users to effortlessly specify desired AI-agent behaviors through inputs such as preference labels (i.e., reinforcement learning from human feedback, or RLHF) and symbolic or natural-language feedback. Applications include aligning AI-model behaviors with human preferences, character/animation control, autonomous vehicles, and robot skill learning.

(b) Building intelligent agents that can learn to solve long-horizon tasks by leveraging pre-trained large language models (LLMs) for task planning, or by integrating planning with reinforcement learning.

email   |   CV   |   Google Scholar   |   Twitter   |   LinkedIn   |   GitHub


News

  • [2023.01] 1 paper accepted to ICLR 2023.
  • [2022.05] Student researcher at Google.
  • [2022.04] 1 paper accepted to ICML 2022 (also accepted to RLDM 2022; received the Best Paper Award at PRL@ICAPS 2022).
  • [2021.10] 1 paper accepted to NeurIPS 2021 as a Spotlight presentation (3%).
  • [2021.05] Machine learning software engineer intern at TikTok, working on AutoML.


Affiliations

Google, Research Intern (Summer 2022)
TikTok, Machine Learning Software Engineer Intern (Summer 2021)
Arizona State University, Ph.D. in Computer Science (Fall 2019 - Expected Spring 2024)
The University of Texas at Austin, B.S. in Computer Science (Fall 2016 - Spring 2019)


Selected Publications

Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning
Lin Guan*, Karthik Valmeekam*, Sarath Sreedharan, Subbarao Kambhampati (* equal contribution)
Under review

To leverage LLMs in planning tasks, we introduce an alternative paradigm that teases an explicit world (domain) model, written in PDDL, out of LLMs and then plans with sound domain-independent planners. This paradigm is motivated by the insight that LLMs, while incapable of the combinatorial search needed to produce correct plans, may be better suited as the source of world models.

paper   website

Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences
Lin Guan, Karthik Valmeekam, Subbarao Kambhampati
ICLR 2023

We introduce the notion of Relative Behavioral Attributes that enables end users to tweak the agent's behavior through nameable concepts even for a tacit-knowledge skill learning task (e.g., decreasing the steering sharpness of an autonomous driving agent, or increasing the softness of the movement of a two-legged "sneaky" agent).

paper   website

Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity
Lin Guan*, Sarath Sreedharan*, Subbarao Kambhampati (* equal contribution)
ICML 2022 (also received the Best Paper Award at PRL@ICAPS 2022 and accepted to RLDM 2022)

Symbolic knowledge is important for solving long-horizon task and motion planning problems, but a key obstacle to leveraging easily available human symbolic knowledge is that it might be inexact. In this work, we present a framework to quantify the relationship between the true task model and an inexact STRIPS model, and introduce a novel approach that uses landmarks and a diversity objective to compensate for potential errors in the symbolic knowledge.

paper   website

Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation
Lin Guan, Mudit Verma, Sihang Guo, Ruohan Zhang, Subbarao Kambhampati
NeurIPS 2021 (Spotlight, 3%)

We make Human-in-the-Loop RL more efficient and feasible by allowing human teachers not only to give binary feedback but also to highlight task-relevant features.

paper

Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems
Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan
AAAI 2022, Blue Sky Track
We advocate an ambitious research program to broaden the basis for human-AI interaction by proposing that AI systems support a symbolic interface, independent of whether their internal operations are themselves carried out in human-interpretable symbolic terms.

paper

Contrastively Learning Visual Attention as Affordance Cues from Demonstrations for Robotic Grasping
Yantian Zha, Siddhant Bhambri, Lin Guan
IROS 2021
paper

Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset
Ruohan Zhang, Calen Walshe, Zhuode Liu, Lin Guan, Karl S. Muller, Jake A. Whritner, Luxin Zhang, Mary M Hayhoe, Dana H Ballard
AAAI 2020
We provide a large-scale, high-quality dataset of human actions with simultaneously recorded eye movements (i.e., gaze information), collected while humans play Atari video games.

paper

Leveraging Human Guidance for Deep Reinforcement Learning Tasks
Ruohan Zhang, Faraz Torabi, Lin Guan, Dana H. Ballard, Peter Stone
IJCAI 2019, Survey Track
This survey provides a high-level overview of five learning frameworks that primarily rely on human guidance other than conventional, step-by-step action demonstrations.

paper


Service

  • [2023] Served as a reviewer for ICML 2023 and NeurIPS 2023.
  • [2022] Served as a reviewer for NeurIPS 2022.
  • [2021] Served on the program committee for AAAI 2022 and ICRA 2022.
  • [2020] Served on the program committee for AAAI 2021.


website adapted from here