Lin Guan
Preferred: lguan9 [at] asu [dot] edu
linguan [at] meta [dot] com

I am a research scientist at Meta GenAI (formerly Facebook), working on the development of AI agents and personas.

Previously, I was a Ph.D. student at Arizona State University, where I worked in the Yochan Lab (AI lab) under the supervision of Dr. Subbarao Kambhampati. My research interests lie at the intersection of machine learning and human-AI interaction, especially reinforcement learning from human feedback and the use of LLMs/VLMs for decision making and planning. Specifically, my Ph.D. research focused on:

(a) Development of systematic mechanisms to robustly harness planning knowledge acquired from vast yet noisy sources of collective knowledge, such as the internet, particularly via large vision and language models.

(b) Development of intuitive mechanisms to enable users to more efficiently provide guidance (e.g., within a human-in-the-loop RL system), specify task objectives and preferences, and steer or align the outputs of generative models.

Twitter   |   CV   |   Google Scholar   |   LinkedIn   |   GitHub


News

  • [2024.03] 1 paper accepted to ICML 2024.
  • [2023.12] 1 paper accepted to AAAI 2024.
  • [2023.09] 1 paper accepted to NeurIPS 2023.
  • [2023.01] 1 paper accepted to ICLR 2023.
  • [2022.05] Student researcher at Google.
  • [2022.04] 1 paper accepted to ICML 2022 (also accepted to RLDM 2022; received the Best Paper Award at PRL@ICAPS 2022).
  • [2021.10] 1 paper accepted to NeurIPS 2021 as a Spotlight presentation (3%).


Affiliations

Cruise, Ph.D. Intern (Planning ML), Summer 2023
Google, Research Intern, Summer 2022
TikTok, Machine Learning Software Engineer Intern, Summer 2021
Arizona State University, Ph.D. in Computer Science, Fall 2019 - Expected Spring 2024
The University of Texas at Austin, B.S. in Computer Science, Fall 2016 - Spring 2019


Selected Publications

Task Success is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors
Lin Guan*, Yifan Zhou*, Denis Liu, Yantian Zha, Heni Ben Amor, Subbarao Kambhampati
Conference on Language Modeling (COLM) 2024

When no sound verifier is available, can we use large vision and language models (VLMs), with their broad yet imperfect world knowledge, as scalable Behavior Critics to catch undesirable embodied agent behaviors in videos? To answer this, we first construct a benchmark containing diverse cases of goal-reaching yet undesirable agent policies. We then comprehensively evaluate VLM critics to gain a deeper understanding of their strengths and failure modes.

paper   website

Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning
Lin Guan*, Karthik Valmeekam*, Sarath Sreedharan, Subbarao Kambhampati
NeurIPS 2023

To leverage LLMs in planning tasks, we introduce an alternative paradigm that teases an explicit world (domain) model, written in PDDL, out of LLMs and then uses it to plan with sound domain-independent planners. This is motivated by the insight that LLMs, while incapable of the combinatorial search needed to produce correct plans, may be better suited as a source of world models.

paper   website

LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks
Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Kaya Stechly, Mudit Verma, Siddhant Bhambri, Lucas Saldyt, Anil Murthy
ICML 2024, Position Paper

We present the LLM-Modulo Framework in which LLMs play a spectrum of roles, from guessing candidate plans, to translating those plans into syntactic forms that are more accessible to external critics, to helping end users flesh out incomplete specifications, to helping expert users acquire domain models (that in turn drive model-based critics).

paper

Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences
Lin Guan, Karthik Valmeekam, Subbarao Kambhampati
ICLR 2023

We introduce the notion of Relative Behavioral Attributes that enables end users to tweak the agent's behavior through nameable concepts even for a tacit-knowledge skill learning task (e.g., decreasing the steering sharpness of an autonomous driving agent, or increasing the softness of the movement of a two-legged "sneaky" agent).

paper   website

Widening the Pipeline in Human-Guided Reinforcement Learning with Explanation and Context-Aware Data Augmentation
Lin Guan, Mudit Verma, Sihang Guo, Ruohan Zhang, Subbarao Kambhampati
NeurIPS 2021 (Spotlight, 3%)

We make Human-in-the-Loop RL more efficient and feasible by allowing human teachers to not only give binary feedback but also to highlight task-relevant features.

paper

Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity
Lin Guan*, Sarath Sreedharan*, Subbarao Kambhampati
ICML 2022 (also received the Best Paper Award at PRL@ICAPS 2022 and accepted to RLDM 2022)

Explicit symbolic knowledge is important for solving long-horizon task and motion planning problems. A key obstacle to leveraging readily available human knowledge (or knowledge acquired from LLMs/VLMs), however, is that it may be inexact. In this work, we present a framework for quantifying the relationship between the true task model and an inexact STRIPS model, and we introduce a novel approach that uses landmarks and a diversity objective to compensate for potential errors in the symbolic knowledge.

paper   website

On the Role of Large Language Models in Planning
Subbarao Kambhampati, Karthik Valmeekam, Lin Guan
Tutorial at AAAI 2024 (also accepted to ICAPS 2023 Tutorial Program)

This tutorial discusses the fundamental limitations of LLMs in generating plans (especially those that require resolving subgoal interactions), and also presents constructive uses of LLMs for planning tasks.

website

Leveraging Human Guidance for Deep Reinforcement Learning Tasks
Ruohan Zhang, Faraz Torabi, Lin Guan, Dana H. Ballard, Peter Stone
IJCAI 2019, Survey Track

This survey provides a high-level overview of five learning frameworks that primarily rely on human guidance other than conventional, step-by-step action demonstrations.

paper

Learning from Ambiguous Demonstrations with Self-Explanation Guided Reinforcement Learning
Yantian Zha, Lin Guan, Subbarao Kambhampati
AAAI 2024

We showcase how explicit characterization of a domain can facilitate reinforcement learning from ambiguous demonstrations.

paper

Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems
Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan
AAAI 2022, Blue Sky Track

We advocate an ambitious research program to broaden the basis for human-AI interaction, proposing that AI systems support a symbolic interface, independent of whether their internal operations are themselves carried out in human-interpretable symbolic terms.

paper

Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset
Ruohan Zhang, Calen Walshe, Zhuode Liu, Lin Guan, Karl S. Muller, Jake A. Whritner, Luxin Zhang, Mary M. Hayhoe, Dana H. Ballard
AAAI 2020

We provide a large-scale, high-quality dataset of human actions and simultaneously recorded eye movements (i.e., gaze data), collected while humans play Atari video games.

paper

Service

  • [2023] Served as a reviewer for ICML 2023 and NeurIPS 2023.
  • [2022] Served as a reviewer for NeurIPS 2022.
  • [2021] Served on the program committees of AAAI 2022 and ICRA 2022.
  • [2020] Served on the program committee of AAAI 2021.


website adapted from here