Speaker: Assistant Professor Dorsa Sadigh, Computer Science at Stanford University
Chair: Dr Harold SOH Soon Hong, Assistant Professor, School of Computing
Location: Hybrid (Zoom and in-person)

Dorsa Sadigh is an assistant professor in Computer Science at Stanford University. Her research interests lie at the intersection of robotics, machine learning, and human-AI interaction. Specifically, she is interested in developing algorithms that learn robot policies from various sources of data and human feedback, and that can seamlessly interact and coordinate with humans. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and her bachelor’s degree in EECS from UC Berkeley in 2012. She has received the Sloan Fellowship, NSF CAREER Award, ONR Young Investigator Award, AFOSR Young Investigator Award, DARPA Young Faculty Award, Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award. Her work has received best paper awards and nominations at robotics conferences such as the Conference on Robot Learning (CoRL) and Robotics: Science and Systems (RSS).

Abstract: In this talk, I will discuss the problem of interactive learning: how we can actively learn objective functions from human feedback that capture human preferences. I will then talk about how the value alignment and reward design problem can have solutions beyond active preference-based learning by tapping into the rich context available from large language models. In the second section of the talk, I will discuss more generally the role of large pre-trained models in today’s robotics and control systems. Specifically, I will present two viewpoints:
  1. Pre-training large models for downstream robotics tasks
  2. Finding creative ways of tapping into the rich context of large models to enable more aligned embodied AI agents.
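To make the first idea concrete, below is a minimal, hypothetical sketch (not the speaker's actual method) of learning a linear reward over trajectory features from pairwise human preferences, using the standard Bradley-Terry model where the probability a human prefers trajectory A over B is a sigmoid of the reward difference. All names and parameters here are illustrative assumptions.

```python
# Sketch: learn reward weights w for R(traj) = w . phi(traj) from pairwise
# preferences, via the Bradley-Terry model:
#   P(A preferred over B) = sigmoid(w . (phi(A) - phi(B)))
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn_reward_weights(feature_diffs, labels, lr=0.5, steps=2000):
    """feature_diffs: (N, d) array of phi(A) - phi(B) for each query.
    labels: 1.0 if the human preferred A, 0.0 if B.
    Returns w maximizing the preference log-likelihood (gradient ascent)."""
    w = np.zeros(feature_diffs.shape[1])
    for _ in range(steps):
        p = sigmoid(feature_diffs @ w)          # predicted P(A > B)
        grad = feature_diffs.T @ (labels - p)   # log-likelihood gradient
        w += lr * grad / len(labels)
    return w

# Toy example: simulate a human whose true preferences use 2 features.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
diffs = rng.normal(size=(200, 2))
labels = (sigmoid(diffs @ w_true) > rng.random(200)).astype(float)

w_hat = learn_reward_weights(diffs, labels)
# Only the direction of w matters (reward scale is unidentifiable from
# preferences), so check cosine similarity with the true weights.
cos = w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true))
print(round(float(cos), 2))
```

Active preference-based learning, as described in the abstract, builds on this by choosing each query pair (A, B) to be maximally informative about w, rather than sampling pairs at random as this toy does.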

For pre-training, I will introduce Voltron, a language-informed visual representation learning approach that leverages language to ground pre-trained visual representations for robotics. For leveraging large models, I will present a few vignettes showing how we can use LLMs and VLMs to learn human preferences, allow for grounded social reasoning, or enable teaching humans using corrective feedback.

Finally, I will conclude the talk by discussing some preliminary results on how large models can be effective pattern machines that identify patterns in a token-invariant fashion, enable pattern transformation and extrapolation, and even show some evidence of pattern optimization for solving control problems.

Copyright © 2023 – All Rights Reserved, NUS AI Lab

COM1, 13 Computing Drive Singapore 117417