
Date: | Saturday, 21st June 2025 |
Time: | 8:45 AM – 12:30 PM (half-day) |
Location: | Olin Hall of Engineering (OHE) 100B |
Virtual Attendance: | Register for Virtual Attendance Here by June 20 |
The Workshop on Human-Robot Contact and Manipulation (HRCM 2025) at RSS 2025 aims to unite expertise across control theory, shared autonomy, physical modeling, and human understanding to enable robots to operate in direct or indirect physical contact with people. The workshop topics include (but are not limited to):
- Compliant and/or soft robot hardware design
- Compliant control and physical safety
- Collaborative manipulation and shared control
- Contact modeling and simulation for rigid and soft bodies
- Tactile sensing and haptic interfaces
- Planning, control, and/or learning in contact-rich environments
- Applications involving direct human-robot contact, such as:
  - Assistance with Activities of Daily Living (ADLs), e.g., feeding, bathing, walking
  - Exoskeletons and active prostheses
  - Medicine, e.g., patient transfer and surgery
- Benchmarks and metrics for HRCM applications
Workshop Format
In addition to a poster session and invited speakers, this workshop will include a breakout discussion where participants will split into groups of ~5 to cover the following themes:
- Contact-Rich Modeling and Compliant Control
  - Enabling the safe operation of robots in direct contact with their environment or other agents by (1) developing algorithms and hardware for planning and control and (2) developing and using rigid, deformable, and soft-body contact models and simulations, sim-to-real transfer, and tactile sensing.
  - Breakout Discussion Topic: Create a wish list of models, simulations, datasets, algorithms, etc. that will yield improvements (e.g., more robust, more tractable) in compliant and contact-rich control settings.
- pHRI Applications
  - Discussion of all human-centric use cases of the themes in Session 1, such as human augmentation, rehabilitation, and medical devices.
  - Breakout Discussion Topic: Given the pHRI applications discussed, what standardized benchmarks and metrics can be used to measure the success of the algorithms, hardware, models, and simulations from the previous theme?
Schedule (Saturday 21 June 2025)
Time in PDT | Start Time in UTC* (your time zone) | Item |
---|---|---|
8:45am - 8:55am | 21 June 2025 08:45:00 PDT | Opening & Introductions |
8:55am - 9:15am | 21 June 2025 08:55:00 PDT | Virtual Speaker: Sylvain Calinon, "Frugal Learning for Collaborative Tasks with Physical Contacts" |
9:15am - 9:35am | 21 June 2025 09:15:00 PDT | Speaker: Wanxin Jin, "Leveraging Contact Physics for Real-Time and Versatile Contact-Rich Dexterity" |
9:35am - 9:50am | 21 June 2025 09:35:00 PDT | Poster Lightning Talks |
9:50am - 10:25am | 21 June 2025 09:50:00 PDT | Poster Session @ Epstein Family Plaza; Coffee served @ 10am |
10:25am - 10:45am | 21 June 2025 10:25:00 PDT | Virtual Speaker: Luka Peternel, "Biomechanics-Aware Physical Human-Robot Interaction" |
10:45am - 11:05am | 21 June 2025 10:45:00 PDT | Speaker: Tania Morimoto, "Design and Control of Soft Robots and Haptic Interfaces" |
11:05am - 11:25am | 21 June 2025 11:05:00 PDT | Speaker: Laurel Riek, "Healthcare Robotics and Shared Control" |
11:25am - 11:45am | 21 June 2025 11:25:00 PDT | Speaker: Vy Nguyen, "Co-Designing Inclusive Assistive Robots" |
11:45am - 12:15pm | 21 June 2025 11:45:00 PDT | Breakout Discussion |
12:15pm - 12:30pm | 21 June 2025 12:15:00 PDT | Roundtable / Conclusion |
*For example, those in Los Angeles may see UTC-7, while those in Berlin may see UTC+2. Please note that the displayed time may differ from your actual time zone.
Invited Speakers
† denotes a virtual speaker.

Sylvain Calinon † (Idiap Research Institute)
Despite significant advances in AI, robots still struggle with tasks involving physical interaction. Robots can easily beat humans at board games such as chess or Go, but are incapable of skillfully moving the game pieces by themselves (the part of the task that humans subconsciously succeed in). Learning collaborative manipulation skills is both hard and fascinating because the movements and behaviors to acquire are tightly connected to our physical world and to embodied forms of intelligence.

I will present an overview of representations and learning approaches to help robots acquire collaborative manipulation skills by imitation and self-refinement. I will present the advantages of targeting a frugal learning approach, where "frugality" has two goals: 1) learning skills from only a few demonstrations or exploration trials; and 2) learning only the components of the skill that really need to be learned.

Collaborative tasks with physical contacts require robot controllers that can swiftly adapt to the ongoing situation. For the generation of trajectories and feedback controllers, I will discuss how the underlying cost functions should take into account variations, coordination, and task prioritization, where various forms of movement primitives can contribute to the optimization process. First, I will show that a cost function composed of a sum of quadratic error terms can be treated either as a linear quadratic regulator (LQR) problem from an optimal control perspective, or as a product of Gaussian experts from an information fusion perspective. I will showcase the proposed approach in diverse applications requiring shared control, including teleoperation, haptic guidance, and physical assistance. I will then show that this underlying dictionary of controllers can be extended to other forms of experts: 1) ergodic controllers to generate exploration and coverage movement behaviors, which robots can exploit to cope with uncertainty in sensing, proprioception, and motor control; and 2) impedance behaviors exploiting geometric representations (geometric algebra and Riemannian manifolds) as well as implicit shape representations based on distance fields.
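For readers less familiar with this equivalence, a standard identity (background, not the talk's specific formulation): minimizing a sum of quadratic error terms is the same as maximizing a product of Gaussian experts, and the optimum is a precision-weighted fusion of the individual targets:

$$
\min_x \sum_i (x-\mu_i)^\top \Sigma_i^{-1} (x-\mu_i)
\;\Longleftrightarrow\;
\max_x \prod_i \mathcal{N}\!\left(x \mid \mu_i, \Sigma_i\right),
\qquad
x^* = \Big(\sum_i \Sigma_i^{-1}\Big)^{-1} \sum_i \Sigma_i^{-1} \mu_i .
$$

Here each expert $i$ contributes a target $\mu_i$ whose confidence is encoded by the precision $\Sigma_i^{-1}$; the LQR view arises when the quadratic terms penalize state and control errors over time.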
Dr Sylvain Calinon is a Senior Research Scientist at the Idiap Research Institute and a Lecturer at the École Polytechnique Fédérale de Lausanne (EPFL). He heads the Robot Learning & Interaction group at Idiap, with expertise in human-robot collaboration, robot learning from demonstration, geometric representations, and optimal control. The approaches developed in his group can be applied to a wide range of applications requiring manipulation skills, with robots that are either close to us (assistive and industrial robots), part of us (prosthetics and exoskeletons), or far away from us (shared control and teleoperation). Website: https://calinon.ch
Wanxin Jin (Arizona State University)
Achieving real-time, versatile dexterity—where robots can rapidly reason about when and how to make or break contact with diverse objects—remains a core challenge in dexterous manipulation and physical intelligence. While black-box learning dominates the current landscape, such approaches often require large datasets and exhibit poor generalization. In contrast, model-based methods have long been hindered by the complexity and non-smoothness of contact dynamics. In this talk, I will argue that model-based approaches can not only match but outperform end-to-end learning methods for real-time, versatile, contact-rich manipulation. The key is a recently introduced, simple yet capable structure for modeling contact physics. I will show how this model enables contact-implicit model predictive control to run at 100 Hz with over 95% success across diverse multi-fingered manipulation tasks. In addition, I will show how the structured contact model enables robots to acquire contact-rich manipulation skills from scratch using only two minutes of real-world interaction data. Finally, I will introduce TwinTrack, a Real2Sim2Real framework that bridges vision and contact physics to achieve real-time, robust tracking of unseen, dynamic objects in contact-rich scenes.
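As background on why contact dynamics are non-smooth (a standard textbook formulation, not necessarily the specific model in the talk): rigid contact is commonly written as a complementarity constraint coupling the gap function and the normal contact force,

$$
0 \le \lambda \;\perp\; \phi(q) \ge 0,
$$

where $\phi(q)$ is the signed distance between bodies and $\lambda$ is the normal force: either the gap is open and the force is zero, or the bodies touch and the force can be positive. This either/or switching structure is what makes contact-implicit planning and control hard to optimize.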
Wanxin Jin is an Assistant Professor in the School for Engineering of Matter, Transport, and Energy at Arizona State University. From 2021 to 2023, he was a postdoctoral fellow at the GRASP Lab at the University of Pennsylvania. He received his Ph.D. from Purdue University in 2021. At ASU, Wanxin Jin leads the Intelligent Robotics and Interactive Systems (IRIS) Lab, where his research focuses on developing fundamental methods that enable robots to interact efficiently with both humans and physical objects.
Luka Peternel † (Delft University of Technology)
The talk will focus on how to incorporate human biomechanics into the robot control loop for adaptive physical human-robot co-manipulation. To inform the robot about the real-time human biomechanical state, high-fidelity musculoskeletal models are employed. To provide real-time estimations to the robot controller, we developed a rapid muscle redundancy solver. The talk will examine two major applications of the proposed biomechanics-aware physical human-robot interaction approach. First, musculoskeletal models can be used to inform a collaborative robotic arm about the ergonomics of human co-workers in manufacturing tasks, so that the robot can improve human working conditions in terms of joint torque, muscle fatigue, and manipulability in an online manner. Second, musculoskeletal models provide information about the underlying muscle and tissue state, which can be used to optimise robotic physical therapy in terms of range of motion and safety. In this direction, we developed maps that enable the robot controller to effectively navigate the underlying biomechanical landscapes.
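For context on the muscle redundancy problem (a generic static-optimization formulation, not necessarily the rapid solver described in the talk): because more muscles actuate a joint than there are degrees of freedom, muscle activations are often resolved by

$$
\min_{a}\; \sum_i a_i^2
\quad \text{s.t.} \quad
\tau = \sum_i r_i(q)\, F_i^{\max}\, a_i,
\qquad 0 \le a_i \le 1,
$$

where $\tau$ is the required joint torque, $r_i(q)$ the moment arm, $F_i^{\max}$ the maximum isometric force of muscle $i$, and $a_i$ its activation; force-length and force-velocity scaling are omitted here for simplicity.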
I received a PhD in robotics from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia in 2015. I conducted my PhD studies at the Department for Automation, Biocybernetics and Robotics, Jožef Stefan Institute in Ljubljana from 2011 to 2015, and at the Department of Brain-Robot Interface, ATR Computational Neuroscience Laboratories in Kyoto, Japan in 2013 and 2014. I was with the Human-Robot Interfaces and Physical Interaction Lab, Advanced Robotics, Italian Institute of Technology in Genoa, Italy from 2015 to 2018. Since 2019, I have been at the Department of Cognitive Robotics, Delft University of Technology in the Netherlands. There, I am leading the Human-Robot Collaboration group within the Human-Robot Interaction section.
Tania Morimoto (UC San Diego)
Flexible and soft robots have the potential for significant impact across a range of applications. Their inherent compliance makes them particularly well-suited for tasks requiring close human-robot interaction and collaboration, including minimally invasive surgery and rehabilitation. However, controlling these robotic systems can become non-intuitive for a human teleoperator. In this talk, I will present several new soft robot designs, including various continuum robots and wearable robots. I will also present new haptic interfaces designed to render feedback and guidance during teleoperation of these soft robotic systems. Finally, I will discuss various design tradeoffs and control methods, and will seek to highlight the potential for these soft teleoperated robotic systems in tasks requiring close human-robot interaction.
Tania Morimoto is an Associate Professor in the Department of Mechanical and Aerospace Engineering and in the Department of Surgery at the University of California, San Diego. She received the B.S. degree from Massachusetts Institute of Technology, Cambridge, MA, and the M.S. and Ph.D. degrees from Stanford University, Stanford, CA, all in mechanical engineering. She is a recipient of the Hellman Fellowship (2021), the Beckman Young Investigator Award (2022), the NSF CAREER Award (2022), and the ASEE Outstanding New Mechanical Engineering Educator Award (2023).
Laurel Riek (UC San Diego)

Healthcare Robotics and Shared Control
Vy Nguyen (Hello Robot)
This presentation demonstrates how occupational therapy, robotics, and diverse stakeholders can collaborate effectively to develop meaningful assistive applications for the Stretch mobile manipulator robot by Hello Robot Inc. Research shows that successful robot design requires "human-centred and robot-inclusive design" where "human needs and characteristics work as the 'anchor'" while simultaneously ensuring accessibility across diverse individuals and populations. This creates robots that are both fundamentally human-focused and deliberately inclusive of different human abilities and their contexts. Through a participatory design process with diverse stakeholders—including individuals with motor and cognitive impairments, care partners, and community organizations—we demonstrate how Stretch can promote functional independence and performance where it may otherwise be greatly limited or nonexistent. Furthermore, implementing an occupational therapy framework in robotics development ensures that the individuals and groups who use Stretch for their everyday wants and needs have a choice in the design process. Attendees will learn practical strategies for implementing co-design approaches in assistive robotics and see real-world examples of how to create robots that empower individuals and our everyday societies.
V is an Occupational Therapy (OT) Clinical Research Lead at Hello Robot, dedicated to advancing assistive robotics. Her work focuses on making the Stretch robot accessible across community, healthcare, and technology, with a special focus on empowering underserved and rural populations. V is driven by the stories and experiences of individuals who use Stretch to enhance their independence and quality of life, which fuels her research and commitment to the field.
Accepted Abstracts
Presenter Information
Posters and Lightning Talks
Poster: One poster space (3ft x 3.5ft / 91cm x 106cm; half of a 3ft x 7ft easel) will be provided for each accepted paper. You can put up your poster either in the morning before the workshop begins or right after the lightning talks.
Poster Session: The Poster Session will take place during the coffee break at Epstein Family Plaza at USC.
60s Lightning Talk: By Friday, June 13, please send us ONE (1) slide in PDF or a standard image format (i.e., no transitions or animations) to be shown on screen during your talk. Optionally, you can designate a location on your slide for a <=1min video (no sound), sent to us separately, that will auto-play and loop with your slide.
Email for enquiries: hrcm-organizers@googlegroups.com

University of Pennsylvania

University of Pennsylvania

Northwestern University

Northwestern University

Hello Robot Inc

University of Pennsylvania

Cornell University

Northwestern University
Email: larisaycl@u.northharvester obfuscationwestern.edu

Northwestern University
Email: andrewthompson2019@u.northharvester obfuscationwestern.edu