Humanoid robots have long held the promise of seamless deployment in our daily lives. Despite rapid progress in humanoid robot hardware (e.g., Boston Dynamics Atlas, Tesla Optimus, Unitree H1, 1X Neo, Agility Digit), their software remains fully or partially hand-designed for specific tasks. The goal of the Whole-body Control and Bimanual Manipulation (WCBM) workshop is to provide a platform for roboticists and machine learning researchers to discuss past achievements in whole-body control and manipulation as well as future research directions, especially on the problem of enabling autonomous humanoid robots. We have invited a group of world-renowned experts to present their work on whole-body control for humanoid robots and bimanual robotic systems. We also want to give researchers the opportunity to present their latest research by accepting workshop papers; we will review submissions and select the best for spotlight talks and interactive poster presentations. We have also planned guided panel discussions to encourage debate among the invited speakers and workshop participants. In these discussions, we would like to contrast the viewpoints of machine learning researchers and roboticists on the past and future of this research topic.
If you have any questions, feel free to contact us.
Program
Workshop schedule (tentative)
🚪Room "TBD"
Time           Event
8:00 - 8:30    Opening Remarks
8:30 - 9:00    Sergey Levine (UC Berkeley, PI)
9:00 - 9:30    Deepak Pathak (CMU, skild.AI)
9:30 - 10:00   Poster Spotlight
10:00 - 10:30  Coffee Break, Posters, Robot Demos
10:30 - 11:00  Karen Liu (Stanford)
11:00 - 11:30  Berthold Bauml (TUM)
11:30 - 12:30  Panel Discussion #1
12:30 - 14:00  Lunch
14:00 - 14:30  Zhengyi Luo (CMU)
14:30 - 15:00  Eric Jang (1X)
15:00 - 15:30  Alberto Rodriguez (Boston Dynamics)
15:30 - 16:00  Coffee Break, Posters, Robot Demos
16:00 - 16:30  Hang Zhao (Tsinghua)
16:30 - 17:00  Corey Lynch (Figure)
17:00 - 17:30  Marc Raibert (RAI)
17:30 - 18:30  Panel Discussion #2
18:30 - 18:40  Closing Remarks
Talks
Invited Speakers
Sergey Levine is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. He received a BS and MS in Computer Science from Stanford University in 2009 and a Ph.D. in Computer Science from the same institution in 2014. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as other decision-making domains. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
Deepak Pathak is a faculty member in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. from UC Berkeley, and his research spans computer vision, machine learning, and robotics. He is a recipient of faculty awards from Google, Samsung, Sony, and GoodAI, and of graduate fellowship awards from Facebook, NVIDIA, and Snapchat. His research has been featured in popular press outlets including The Economist, The Wall Street Journal, Quanta Magazine, Washington Post, CNET, Wired, and MIT Technology Review, among others. Deepak received his Bachelor's degree from IIT Kanpur with a Gold Medal in Computer Science. He co-founded VisageMap Inc., later acquired by FaceFirst Inc.
Karen Liu is a professor in the Computer Science Department at Stanford University. Prior to joining Stanford, Liu was a faculty member at the School of Interactive Computing at Georgia Tech. She received her Ph.D. degree in Computer Science from the University of Washington. Liu's research interests are in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. She has developed computational approaches to modeling realistic and natural human movement, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms. The algorithms and software developed in her lab have fostered interdisciplinary collaboration with researchers in robotics, computer graphics, mechanical engineering, biomechanics, neuroscience, and biology. Liu received a National Science Foundation CAREER Award and an Alfred P. Sloan Fellowship, and was named one of Technology Review's Young Innovators Under 35. In 2012, Liu received the ACM SIGGRAPH Significant New Researcher Award for her contributions to the field of computer graphics.
Berthold Bauml is head of the Autonomous Learning Robots Lab. The long-term goal of this basic-research lab is to use learning as the core principle in building truly autonomous robots that can operate in complex and changing environments. A prerequisite for this research program is robotic hardware that comes close to human performance in sensor and motor capabilities and has a real-time connection to large, scalable computing resources. In recent years, his lab upgraded one of its torque-controlled humanoid robots into the award-winning research platform Agile Justin, which, among other capabilities, now has real-time 3D environment modeling, high-resolution spatio-temporal tactile sensing on the whole body and hands, and a wirelessly coupled GPU cluster in the loop.
Zhengyi Luo is a final-year Ph.D. student at Carnegie Mellon University's Robotics Institute, School of Computer Science, advised by Prof. Kris Kitani. He earned his bachelor's degree from the University of Pennsylvania in 2019, where he worked with Prof. Kostas Daniilidis. His research has been supported by the Qualcomm Innovation Fellowship and the Meta AI Mentorship Program. His research interests lie at the intersection of vision, learning, and robotics, including human pose estimation, human-object interaction, and human motion modeling. Through his research, he wants to create methods that effectively interpret spatio-temporal sensory input and build a representation of the 3D world to reason about the interactions between agents and the physical environment. On the application side, he is excited about humanoid robots and AR/VR.
Eric Jang leads the AI team at 1X Technologies, a vertically integrated humanoid robot company. His research background is in end-to-end mobile manipulation and generative models. Eric recently authored a book on the future of AI and robotics, titled "AI is Good for You".
Alberto Rodriguez is a Research Scientist and Atlas Manipulation Lead at Boston Dynamics. He previously held roles at the Massachusetts Institute of Technology and received his Ph.D. in Robotics from Carnegie Mellon University (2007-2013).
Hang Zhao is an Assistant Professor at IIIS, Tsinghua University, and Principal Investigator of the MARS Lab. His research interests are multi-modal machine learning, autonomous driving, and robot learning. He was a Research Scientist at Waymo (formerly Google's self-driving project) from 2019 to 2020. Before that, he received his Ph.D. from MIT in 2019 under the supervision of Professor Antonio Torralba, and his B.S. from Zhejiang University in 2013.
Marc Raibert is the Executive Director of the Boston Dynamics AI Institute, a Hyundai Motor Group organization. Raibert is the founder, former CEO, and current Chairman of Boston Dynamics, a robotics company known for creating BigDog, Atlas, Spot, and Handle.
We welcome submissions of full papers as well as work-in-progress, including work that was recently published or is currently under review.
In general, we encourage two types of papers:
Empirical paper: Submissions should focus on presenting original research, case studies, or novel implementations in fields related to the workshop (see potential topics below).
Position paper: Authors are encouraged to submit papers that discuss critical and thought-provoking topics within the scientific community.
Potential topics include:
Reinforcement learning for whole-body control and bimanual manipulation
Teleoperation systems for humanoid robots (or other complex robotic systems) and imitation learning
Learning models (e.g. dynamics, perception) and planning for complex, mobile robotic systems
Benchmark and task proposals for whole-body control and manipulation
The WCBM workshop will use OpenReview as the review platform.
Accepted papers will be presented as posters, and a subset of them will be selected for oral presentation.
The paper template and style files can be found here (adapted from the RSS 2025 template). There is no page limit, but 4-8 pages (excluding references and appendix) is recommended. Submissions must follow the template and style and should be properly anonymized.
Dual Submission Policy
We welcome papers that have never been submitted, are currently under review, or were recently published. Accepted papers will be posted on the workshop homepage but will not be part of any official proceedings; they are considered non-archival.