
The next generation of robots should combine ideas from fields such as computer vision, natural language processing, and machine learning, because closed-loop systems are required to handle complex tasks from multimodal input in complicated real-world environments. This workshop focuses on generative models for robot learning, a fundamental topic at the intersection of AI and robotics.
Our topics include but are not limited to:
- Robotics data generation. (i) How can we build simulators with diverse assets and rich interactive properties, and how can we accurately simulate the physical consequences of diverse robot actions? (ii) How can we accelerate the generation of successful trajectories in simulation environments? (iii) What are the challenges in, and possible solutions for, alleviating the visual domain gap between simulators and the real world?
- Generative policy learning. (i) How can we design a generative visual representation learning framework that effectively embeds spatiotemporal information about the scene via self-supervision? (ii) How can we efficiently construct world models for scalable robot learning, and what information about the scene and the robot should be considered to acquire accurate feedback from the world model? (iii) How can we extend state-of-the-art generative models, such as diffusion models in computer vision and auto-regressive models in natural language processing, to policy generation? (A minimal sketch of a diffusion-based policy follows this list.)
- Foundation model grounding. (i) What are general criteria for designing LLM prompts for robot tasks? (ii) How can we build a scalable, efficient, and generalizable representation of physical scenes to ground the action predictions of VLMs? (iii) How can we improve sample efficiency in VLA model training, and how can we efficiently adapt pre-trained VLA models to novel robot tasks?
- On-device generative model deployment. (i) What are the complexity bottlenecks in current pre-trained large generative models, and how can we identify and remove redundant architecture components? (ii) How can we dynamically maintain the optimal accuracy-efficiency trade-off under resource limits that change with battery level and utilization? (iii) How can we develop compilation toolchains for pre-trained large generative models on robot computational platforms that achieve significant real-world speedups and memory savings?
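To make the policy-generation question in the second topic concrete, here is a minimal, illustrative sketch of a DDPM-style diffusion policy head: an MLP is trained to denoise noised expert actions conditioned on an observation embedding, and actions are sampled at test time by iterative denoising. This is not code from the workshop or any submission; the dimensions, noise schedule, and network are arbitrary assumptions chosen for brevity, and it assumes PyTorch.

```python
# A minimal sketch of a diffusion model as a policy head (illustrative only).
# Assumed shapes: 32-d observation embedding, 7-d action, 50 diffusion steps.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, T = 32, 7, 50

betas = torch.linspace(1e-4, 0.02, T)     # linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)  # cumulative product used for closed-form noising

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action, given the observation, the noisy action, and step t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs, noisy_action, t):
        t_feat = t.float().unsqueeze(-1) / T  # normalized timestep as a scalar feature
        return self.net(torch.cat([obs, noisy_action, t_feat], dim=-1))

def diffusion_loss(model, obs, action):
    """One DDPM training step: noise the expert action, regress the injected noise."""
    t = torch.randint(0, T, (obs.shape[0],))
    eps = torch.randn_like(action)
    ab = alpha_bar[t].unsqueeze(-1)
    noisy = ab.sqrt() * action + (1 - ab).sqrt() * eps
    return nn.functional.mse_loss(model(obs, noisy, t), eps)

@torch.no_grad()
def sample_action(model, obs):
    """Reverse process: start from Gaussian noise and iteratively denoise into an action."""
    a = torch.randn(obs.shape[0], ACT_DIM)
    for t in reversed(range(T)):
        tt = torch.full((obs.shape[0],), t)
        eps_hat = model(obs, a, tt)
        a = (a - betas[t] / (1 - alpha_bar[t]).sqrt() * eps_hat) / alphas[t].sqrt()
        if t > 0:  # no noise is added at the final step
            a = a + betas[t].sqrt() * torch.randn_like(a)
    return a
```

Training would call diffusion_loss on batches of (observation, expert action) pairs from demonstrations, and control would call sample_action once per step; extending this to action sequences or stronger conditioning is exactly the kind of open design question the topic above raises.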
Keynote Speakers

Sergey Levine, UC Berkeley
Shuran Song, Stanford
Yilun Du, Harvard
Qi Dou, CUHK
Xiaojuan Qi, HKU
Jiwen Lu, Tsinghua

Schedule
TBD
Organizers

Ziwei Wang, NTU
Congyue Deng, Stanford
Changliu Liu, CMU
Zhenyu Jiang, UT Austin
Haoran Geng, UC Berkeley
Huazhe Xu, Tsinghua
Yansong Tang, Tsinghua
Philip H. S. Torr, Oxford
Ziwei Liu, NTU
Angelique Taylor, Cornell Tech
Yuke Zhu, UT Austin
Jitendra Malik, UC Berkeley