Welcome to the

Workshop on Whole-body Control and Bimanual Manipulation: Applications in Humanoids and Beyond

at CoRL 2024



About

WCBM @ CoRL 2024

Motivation

Humanoid robots have long held the promise of being seamlessly deployed in our daily lives. Despite rapid progress in humanoid hardware (e.g., Boston Dynamics Atlas, Tesla Optimus, Unitree H1, 1X Neo, Agility Digit), the software that drives these robots remains fully or partially hand-designed for specific tasks. The goal of the Whole-body Control and Bimanual Manipulation (WCBM) workshop is to provide a platform for roboticists and machine learning researchers to discuss past achievements in whole-body control and manipulation as well as future research directions, especially on the problem of enabling autonomous humanoid robots. We have invited a group of world-renowned experts to present their work on whole-body control for humanoid robots and bimanual robotic systems. We also want to give researchers the opportunity to present their latest research by accepting workshop papers; we will review and select the best submissions for spotlight talks and interactive poster presentations. We have also planned guided panel discussions to encourage debate among the invited speakers and workshop participants. In these discussions, we would like to contrast the viewpoints of machine learning researchers and roboticists on the past and future of this research topic.

If you have any questions, feel free to contact us.

Unfortunately, this workshop will be in-person only, since we cannot provide live streaming or a recording.

Program

Workshop schedule (tentative)

🚪Room "Taurus 2"

Time            Event
 8:30 -  8:40   Opening Remarks
 8:40 -  9:05   Jitendra Malik (UC Berkeley)
 9:05 -  9:30   Yuke Zhu (UT Austin, NVIDIA)
 9:30 -  9:55   Manuel Galliker (1X Technologies)
 9:55 - 10:20   Vikash Kumar (CMU)
10:20 - 11:00   Coffee Break, Posters, Robot Demos
11:00 - 11:20   Xingxing Wang (Unitree)
11:20 - 11:40   Quentin Rouxel on behalf of Serena Ivaldi (Inria)
11:40 - 12:00   Ziwen Zhuang (Tsinghua University)
12:00 - 13:45   Lunch
13:45 - 14:10   Chelsea Finn (Stanford, Physical Intelligence)
14:10 - 14:35   Moritz Bächer (Disney Research)
14:35 - 15:00   Poster Spotlights
15:00 - 15:30   Coffee Break, Posters, Robot Demos
15:30 - 15:55   Jonathan Hurst (Oregon State Univ., Agility Robotics)
15:55 - 16:20   Xiaolong Wang (UCSD)
16:20 - 16:40   Toru Lin (UC Berkeley)
16:40 - 17:05   Scott Kuindersma (Boston Dynamics)
17:05 - 18:00   Panel Discussion (with Jitendra Malik, Yuke Zhu, Jonathan Hurst, Chelsea Finn, Scott Kuindersma)
18:00 - 18:10   Closing Remarks

Talks

Invited Speakers

Chelsea Finn

Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, the William George and Ida Mary Hoover Faculty Fellow, and a co-founder of Physical Intelligence (Pi). Her research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has pioneered end-to-end deep learning methods for vision-based robotic manipulation, meta-learning algorithms for few-shot learning, and approaches for scaling robot learning to broad datasets. Her research has been recognized by awards such as the Sloan Fellowship, the IEEE RAS Early Academic Career Award, and the ACM doctoral dissertation award, and has been covered by various media outlets including the New York Times, Wired, and Bloomberg. Prior to joining Stanford, she received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley.

Scott Kuindersma

Scott Kuindersma is the Senior Director of Robotics Research at Boston Dynamics where he leads behavior research on Atlas. Prior to joining Boston Dynamics, he was an Assistant Professor of Engineering and Computer Science at Harvard. Scott’s research explores intersections of machine learning and model-based control to improve the capabilities of humanoids and other dynamic mobile manipulators.

Yuke Zhu

Yuke Zhu is an Assistant Professor in the Computer Science Department at UT Austin, where he directs the Robot Perception and Learning (RPL) Lab. He is also a core faculty member of Texas Robotics and a senior research scientist at NVIDIA. He focuses on developing intelligent algorithms for generalist robots and embodied agents to reason about and interact with the real world. His research spans robotics, computer vision, and machine learning. He received his Master's and Ph.D. degrees from Stanford University. His work has won several awards and nominations, including the Best Conference Paper Award at ICRA 2019, an Outstanding Learning Paper Award at ICRA 2022, an Outstanding Paper Award at NeurIPS 2022, and Best Paper finalist honors at IROS 2019, IROS 2021, and RSS 2023. He has received the NSF CAREER Award and Amazon Research Awards.

Jitendra Malik

Jitendra Malik is the Arthur J. Chick Professor of EECS at UC Berkeley. His research has spanned computer vision, machine learning, modeling of human vision, computer graphics, and most recently robotics. He has advised more than 70 Ph.D. students and postdocs, many of whom are now prominent researchers. His honors include numerous best paper prizes, the 2013 Distinguished Researcher Award in computer vision, the 2016 ACM/AAAI Allen Newell Award, the 2018 IJCAI Award for Research Excellence in AI, and the 2019 IEEE Computer Society Computer Pioneer Award for his "leading role in developing Computer Vision into a thriving discipline through pioneering research, leadership, and mentorship". He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

Toru Lin

Toru Lin is a PhD student at Berkeley AI Research (BAIR) advised by Jitendra Malik and Alexei Efros. She is also affiliated with the NVIDIA GEAR group led by Jim Fan and Yuke Zhu. Previously, she obtained her BSc and MEng from MIT EECS, under the supervision of Antonio Torralba and Phillip Isola. Before transferring to MIT, she was an undergraduate student at The University of Tokyo. She is currently building real-world robot systems that can achieve dexterous low-level skills through end-to-end learning.

Xingxing Wang

Xingxing Wang is the Founder, CEO, and CTO of Unitree and a firm believer in the primacy of technology. During his master's studies, he developed XDog, a high-performance quadruped robot driven by low-cost external-rotor brushless motors, pioneering the technology of low-cost, high-performance legged robots. After graduating in 2016, he joined DJI; around the same time, the XDog robot was covered by media in China and abroad and drew a strong response from the global robotics community. He then resigned and founded Unitree in August 2016, which became the first company in the world to publicly retail high-performance quadruped robots, leading global sales year after year and significantly advancing their commercialization. He has been invited to speak at the legged-robot forums of ICRA, a top international robotics conference, from 2018 to 2022, and has filed more than 150 domestic and international patents. He has led the company through multiple rounds of investment from Sequoia, Shunwei, Matrix Partners, and others, and under his leadership the company's products have appeared on the CCTV Year of the Ox Spring Festival Gala, the opening ceremony of the Winter Olympics, and the Super Bowl. In 2023, he was named one of Fortune China's 40 Under 40 business elites.

Moritz Bächer

Moritz Bächer is a Research Scientist at Disney Research, where he leads the Computational Design and Manufacturing group. He is deeply passionate about solving real-world problems in computational robotics, fabrication, and architecture. His core expertise is the development of differentiable simulators to tackle complex design, control, and characterization problems in (soft) robotics, architecture, and computer graphics. Before joining Disney, he received a Ph.D. from the Harvard School of Engineering and Applied Sciences and graduated with a master’s from ETH Zurich.

Jonathan Hurst

Jonathan Hurst is a Professor of Robotics, co-founder of the Oregon State University Robotics Institute, and Chief Technology Officer and co-founder of Agility Robotics. He holds a B.S. in mechanical engineering and an M.S. and Ph.D. in robotics, all from Carnegie Mellon University. His university research focuses on understanding the fundamental science and engineering best practices for legged locomotion. Investigations range from numerical studies and analysis of animal data, to simulation studies of theoretical models, to designing, constructing, and experimenting with legged robots for walking and running, and more recently, using machine learning techniques merged with more traditional control to enable highly dynamic gaits. Agility Robotics is extending this research to commercial applications for robotic legged mobility, working towards a day when robots can go where people go, generate greater productivity across the economy, and improve quality of life for all.

Vikash Kumar

Vikash Kumar is an Adjunct Professor at the Robotics Institute, CMU. He received his Ph.D. from the University of Washington with Prof. Emo Todorov and Prof. Sergey Levine, where his research focused on imparting human-level dexterity to anthropomorphic robotic hands. He continued his research as a postdoctoral fellow with Prof. Sergey Levine at the University of California, Berkeley, where he further developed his methods to work on low-cost scalable systems. He also spent time as a Research Scientist at OpenAI and Google Brain, where he extended his research on low-cost scalable systems to the domain of multi-agent locomotion. He has also been involved in the development of the MuJoCo physics engine, now widely used in robotics and machine learning. His work has been recognized with a Best Master's Thesis Award, the Best Manipulation Paper Award at ICRA 2016, a Best Workshop Paper Award at ICRA 2022, and a CIFAR AI Chair in 2020 (declined), and has been covered by a wide variety of media outlets, including the New York Times, Reuters, ACM, WIRED, MIT Technology Review, and IEEE Spectrum.

Ziwen Zhuang

Ziwen Zhuang is a PhD student at IIIS, Tsinghua University. He is also working as a research assistant with Professor Hang Zhao at the Shanghai Qi Zhi Institute. He completed his Master's degree at ShanghaiTech University, advised by Professor Soeren Schwertfeger. Ziwen's research focuses on robot learning, especially athletic intelligence on legged robots. He published Robot Parkour Learning at CoRL 2023, which was a finalist for the Best Systems Paper Award. Prior to that, he was a research intern at Carnegie Mellon University working with Professor David Held, and he was the algorithm lead of the ShanghaiTech RoboMaster team.

Manuel Yves Galliker

Manuel Yves Galliker is the team lead for controls and embedded systems at 1X Technologies. He holds a B.Sc. and M.Sc. in mechanical engineering with a focus on robotics, systems, and control from ETH Zurich. Manuel's research interests range from mechatronics to optimal, data-driven, and reinforcement-learned control, with a focus on locomotion and loco-manipulation. During his academic tenure, he dedicated his master's thesis to online gait generation for bipedal robots using whole-body dynamics in a nonlinear model predictive control approach, contributing at ETH's Robotic Systems Lab and as a visiting researcher at Caltech's AMBER Lab. This work culminated in a publication presented at the IEEE Humanoids 2022 conference, where it was a finalist for the Best Paper Award. In his current role, he leads R&D on controls and embedded systems for the new bipedal android NEO. In particular, the team aims to develop whole-body control and planning algorithms, ranging from centroidal MPC to whole-body MPC and reinforcement learning, to enable progressively more general loco-manipulation behaviors.

Serena Ivaldi

Serena Ivaldi is a tenured senior research scientist (director of research) at Inria, France, leading the humanoid and human-robot interaction activities of Team Larsen at Inria Nancy. She obtained her Ph.D. in Humanoid Technologies in 2011 at the Italian Institute of Technology and the University of Genoa, Italy, and the French Habilitation to Direct Research in 2022 at the University of Lorraine, France. Prior to joining Inria, she was a postdoctoral researcher at UPMC in Paris, France, then at the University of Darmstadt, Germany. In 2023, she also spent two months at the European Space Agency as a visiting expert. She is currently Editor-in-Chief of the International Journal of Social Robotics and has served as Associate Editor of IEEE Robotics and Automation Letters. She was Program Chair of IEEE-RAS Humanoids 2019 and IEEE ARSO 2023, Tutorial Chair of CoRL 2021, and is the General Chair of IEEE-RAS Humanoids 2024. She has served IEEE RAS as Associate Vice-President of the MAB and co-chair of the ICRA Steering Committee. She was a proud judge of the ANA Avatar XPRIZE competition in 2022. She received the Suzanne Zivi Prize for excellence in research and the 2021 IEEE RA-L Distinguished Service Award as Outstanding Associate Editor, and was named among the "50 Women in Robotics You Need to Know About" in 2021.

Xiaolong Wang

Xiaolong Wang is an Assistant Professor in the ECE Department at UC San Diego. He is affiliated with the CSE Department, the Center for Visual Computing, the Contextual Robotics Institute, and the Artificial Intelligence Group, and is a member of the Robotics team in the TILOS NSF AI Institute. He was a postdoctoral fellow at UC Berkeley with Alexei Efros and Trevor Darrell. He received his Ph.D. in robotics from Carnegie Mellon University, where he worked with Abhinav Gupta.

Posters

Workshop Posters

Check out our workshop papers at OpenReview: https://openreview.net/group?id=robot-learning.org/CoRL/2024/Workshop/WCBM

Poster Spotlights:

  • UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers. Huy Ha, Yihuai Gao, Zipeng Fu, Jie Tan, Shuran Song
  • Goal Achievement Guided Exploration: Mitigating Premature Convergence in Learning Robot Control. Shengchao Yan, Baohe Zhang, Joschka Boedecker, Wolfram Burgard
  • DexHub: Infrastructure for Internet Scale Robotics Data Collection. Younghyo Park, Jagdeep Singh Bhatia, Lars Lien Ankile, Pulkit Agrawal
  • EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning. Jingyun Yang, Ziang Cao, Congyue Deng, Rika Antonova, Shuran Song, Jeannette Bohg
  • Learning to Look Around: Enhancing Teleoperation and Learning with a Human-like Actuated Neck. Bipasha Sen, Michelle Wang, Nandini Thakur, Aditya Agarwal, Pulkit Agrawal

Posters:

  • Augmented Action-space Whole-Body Teleoperation of Mobile Manipulation Robots. Sophie C. Lueth, Georgia Chalvatzaki
  • Time Your Rewards: Learning Temporally Consistent Rewards from a Single Video Demonstration. Huaxiaoyue Wang, William Huey, Anne Wu, Yoav Artzi, Sanjiban Choudhury
  • PerAct2: Benchmarking and Learning for Robotic Bimanual Manipulation Tasks. Markus Grotz, Mohit Shridhar, Yu-Wei Chao, Tamim Asfour, Dieter Fox
  • The Role of Domain Randomization in Training Diffusion Policies for Whole-Body Humanoid Control. Oleg Kaidanov, Firas Al-Hafez, Yusuf Süvari, Boris Belousov, Jan Peters
  • Bi3D Diffuser Actor: 3D Policy Diffusion for Bi-manual Robot Manipulation. Tsung-Wei Ke, Nikolaos Gkanatsios, Jiahe Xu, Katerina Fragkiadaki
  • AsymDex: Leveraging Asymmetry and Relative Motion in Learning Bimanual Dexterity. Zhaodong Yang, Yunhai Han, Harish Ravichandar
  • SkillBlender: Towards Versatile Humanoid Whole-Body Control via Skill Blending. Yuxuan Kuang, Amine Elhafsi, Haoran Geng, Marco Pavone, Yue Wang
  • Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing. Haoru Xue, Chaoyi Pan, Zeji Yi, Guannan Qu, Guanya Shi
  • A Comparison of Imitation Learning Algorithms for Bimanual Manipulation. Michael Drolet, Simon Stepputtis, Siva Kailas, Ajinkya Jain, Jan Peters, Stefan Schaal, Heni Ben Amor
  • Latent Action Pretraining From Videos. Seonghyeon Ye, Joel Jang, Byeongguk Jeon, Se June Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo
  • Surgical Robot Transformer (SRT): Imitation Learning for Surgical Tasks. Ji Woong Kim, Tony Z. Zhao, Samuel Schmidgall, Anton Deguet, Marin Kobilarov, Chelsea Finn, Axel Krieger
  • DynSyn: Dynamical Synergistic Representation for Efficient Learning and Control in Overactuated Embodied Systems. Kaibo He, Chenhui Zuo, Chengtian Ma, Yanan Sui
  • Hierarchical World Models as Visual Whole-Body Humanoid Controllers. Nicklas Hansen, Jyothir S V, Vlad Sobal, Yann LeCun, Xiaolong Wang, Hao Su
  • Continuously Improving Mobile Manipulation with Autonomous Real-World RL. Russell Mendonca, Emmanuel Panov, Bernadette Bucher, Jiuguang Wang, Deepak Pathak
  • Active Vision Might Be All You Need: Exploring Active Vision in Bimanual Robotic Manipulation. Ian Chuang, Andrew Lee, Dechen Gao, Mahdi Naddaf, Iman Soltani
  • Reinforcement Learning with Action Sequence for Data-Efficient Robot Learning. Younggyo Seo, Pieter Abbeel
  • CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics. Jiawei Gao, Ziqin Wang, Zeqi Xiao, Jingbo Wang, Tai Wang, Jinkun Cao, Xiaolin Hu, Si Liu, Jifeng Dai, Jiangmiao Pang
  • OPEN TEACH: A Versatile Teleoperation System for Robotic Manipulation. Aadhithya Iyer, Zhuoran Peng, Yinlong Dai, Irmak Guzey, Siddhant Haldar, Soumith Chintala, Lerrel Pinto
  • AnySkin: Plug-and-play Skin Sensing for Robotic Touch. Raunaq Bhirangi, Venkatesh Pattabiraman, Mehmet Enes Erciyes, Yifeng Cao, Tess Hellebrekers, Lerrel Pinto
  • Learning Autonomous Humanoid Loco-manipulation Through Foundation Models. Jin Wang, Rui Dai, Weijie Wang

Organization

Workshop Organizers

Pieter Abbeel
Professor at UC Berkeley

Carlo Sferrazza
Postdoc at UC Berkeley

Xingyu Lin
Research Scientist at OpenAI

Youngwoon Lee
Assistant Professor at Yonsei University

Program Committee

  • Carlo Sferrazza (UC Berkeley)
  • Youngwoon Lee (Yonsei University)
  • Younghyo Park (MIT)
  • Himanshu Gaurav (UC Berkeley)
  • Toru Lin (UC Berkeley)
  • Ziwen Zhuang (Tsinghua University)
  • Fred Shentu (UC Berkeley)
  • Arthur Allshire (UC Berkeley)
  • Haoran Geng (Peking University)
  • Younggyo Seo (UC Berkeley)
  • Xingyu Lin (OpenAI)
  • Huy Ha (Stanford University)
  • Dun-Ming Huang (UC Berkeley)
  • Bike Zhang (UC Berkeley)
  • Junik Bae (Yonsei University)
  • Haoru Xue (UC Berkeley)
  • Mingyo Seo (UT Austin)
  • Tara Sadjadpour (UC Berkeley)
  • Kwanyoung Park (Yonsei University)
  • Qiayuan Liao (UC Berkeley)

Calls

Call for papers

We welcome submissions of full papers as well as works in progress, and we accept work that has recently been published or is currently under review.

In general, we encourage two types of papers:

  • Empirical paper: Submissions should focus on presenting original research, case studies, or novel implementations in fields related to the workshop (see potential topics below).
  • Position paper: Authors are encouraged to submit papers that discuss critical and thought-provoking topics within the scientific community.
Potential topics include:
  • Reinforcement learning for whole-body control and bimanual manipulation
  • Teleoperation systems for humanoid robots (or other complex robotic systems) and imitation learning
  • Learning models (e.g. dynamics, perception) and planning for complex, mobile robotic systems
  • Benchmark and task proposals for whole-body control and manipulation
  • Multimodal, whole-body sensing and perception
  • Simulation-to-real-world transfer
  • Learning from human videos
Important Dates
  • Submission deadline: October 18, 2024 (extended from October 11)
  • Notification of acceptance: October 30, 2024 (extended from October 18, then October 25)
  • Camera-ready papers due: November 4, 2024 (extended from November 1)
  • All deadlines are AoE time.
Submission Guidelines

The WCBM workshop will use OpenReview as the review platform.

Accepted papers will be presented as posters, and a subset of them will be selected for oral presentation.

The paper template and style files can be found here (adapted from the CoRL 2024 template). There is no page limit, but 4-8 pages (excluding references and appendix) are recommended. Submissions must follow the template and style and should be properly anonymized.

Dual Submission Policy

We welcome papers that have never been submitted, are currently under review, or have recently been published. Accepted papers will be published on the workshop homepage, but they will not be part of the official proceedings and are to be considered non-archival.