Robotic Systems Lab: Legged Robotics at ETH Zürich
Switzerland
Joined 5 May 2011
The Robotic Systems Lab (RSL) designs machines, creates actuation principles and builds up control technologies for autonomous operation in challenging environments.
The Robotic Systems Lab investigates the development of machines and their intelligence to operate in rough and challenging environments. With a large focus on robots with arms and legs, our research includes novel actuation methods for advanced dynamic interaction, innovative designs for increased system mobility and versatility, and new control and optimization algorithms for locomotion and manipulation. In search of clever solutions, we take inspiration from humans and animals with the goal to improve the skills and autonomy of complex robotic systems to make them applicable in various real-world scenarios.
Dynamic Throwing with Robotic Material Handling Machines
Automation of hydraulic material handling machinery is limited to semi-static pick-and-place cycles. This work uses Reinforcement Learning (RL) to design dynamic controllers for material handlers with underactuated arms as commonly used in logistics. Tested both in simulation and in real-world experiments on a 12-ton test platform, the controllers exploit passive joints for dynamic throws, accurately targeting objects beyond static reach.
Paper Link: arxiv.org/abs/2405.19001
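The throw described above is easiest to picture with simple projectile ballistics. Below is a minimal, hypothetical sketch (not code from the paper) of how release speed, angle, and height map to a landing distance, and how that could be turned into a dense reward for hitting a target beyond static reach; the function names and the reward form are assumptions.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def landing_distance(speed, angle, release_height):
    """Horizontal distance travelled by a projectile released at
    `speed` [m/s], `angle` [rad] above horizontal, from `release_height` [m]."""
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    # time of flight until the projectile returns to ground level
    t = (vy + math.sqrt(vy ** 2 + 2.0 * G * release_height)) / G
    return vx * t

def throw_reward(speed, angle, release_height, target_distance):
    # Dense shaping term: negative landing error, as one *might* feed to
    # an RL objective (hypothetical, not the paper's actual reward).
    return -abs(landing_distance(speed, angle, release_height) - target_distance)
```

With zero release height this reduces to the textbook range formula v² sin(2θ)/g, which is a quick sanity check for the sketch.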
Views: 904
Videos
AIRA Challenge: Teleoperated Mobile Manipulation for Industrial Inspection
3.1K views · 14 hours ago
The Robotic Systems Lab participated in the Advanced Industrial Robotic Applications (AIRA) Challenge at the ACHEMA 2024 process industry trade show, where teams demonstrated their teleoperated robotic solutions for industrial inspection tasks. We competed with the ALMA legged manipulator robot, teleoperated using a second robot arm in a leader-follower configuration, placing us in third place ...
SMUG Planner: A Safe Multi-Goal Planner for Mobile Robots in Challenging Environments
2.7K views · 1 month ago
Video created by Chen Changan Paper: arxiv.org/abs/2306.05309 Code: github.com/leggedrobotics/smug_planner Authors: Changan Chen, Jonas Frey, Philip Arm, Marco Hutter
Rethinking Robustness Assessment: Adversarial Attack on Learning-based Quadruped Locomotion Control
7K views · 1 month ago
In our RSS 2024 paper, we present a novel adversarial attack method designed to identify failure cases in any type of locomotion controller, including state-of-the-art reinforcement learning (RL)-based controllers. Traditional heuristic tests, such as standard benchmarks or human experience, often fall short in uncovering these vulnerabilities. Our approach reveals the vulnerabilities of black-...
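To make "automatically searching for failure cases" concrete, here is a toy black-box adversary. This random-search stand-in, its function names, and its parameters are all assumptions for illustration, not the paper's method: it perturbs a bounded disturbance vector and keeps candidates that increase the controller's rollout cost.

```python
import random

def find_failure(rollout_cost, dim, bound=0.5, iters=300, seed=0):
    """Illustrative random-search adversary: perturb a bounded disturbance
    vector and keep candidates that increase the closed-loop rollout cost
    (a proxy for 'finding failure cases'). `rollout_cost` is a hypothetical
    black-box evaluation of the controller under that disturbance."""
    rng = random.Random(seed)
    best = [0.0] * dim
    best_cost = rollout_cost(best)
    for _ in range(iters):
        # Gaussian perturbation, clipped to the admissible disturbance set
        cand = [max(-bound, min(bound, x + rng.gauss(0.0, 0.1))) for x in best]
        cost = rollout_cost(cand)
        if cost > best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

Gradient-based attacks explore the same idea far more efficiently when the rollout is differentiable; the sketch only shows the optimization framing.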
Learning Robust Autonomous Navigation and Locomotion for Wheeled-legged Robots
162K views · 2 months ago
In our new Science Robotics paper, we introduce an autonomous navigation system developed for our wheeled-legged quadrupeds, designed for fast and efficient navigation within large urban environments. Driven by neural network policies, our simple, unified control system enables smooth gait transitions, smart navigation planning, and highly responsive obstacle avoidance in populated urban enviro...
ViPlanner: Visual Semantic Imperative Learning for Local Navigation (ICRA 2024)
5K views · 2 months ago
This video demonstrates how our fully-learned local planner can navigate complex environments by recognizing different terrains and their traversability. Our paper introduces a novel planner design that combines depth and semantic information. It employs the imperative learning paradigm for optimizing the planner weights end-to-end based on the planning task objective. The optimization uses a d...
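The core imperative-learning idea, optimizing the planner output directly against the planning task objective, can be shown with a toy 2-D sketch. This is not the paper's method (which trains a network end-to-end on depth and semantics); here we simply gradient-descend raw waypoints on a goal-plus-obstacle cost via finite differences, and all names are hypothetical.

```python
def task_cost(path, goal, obstacle, margin=0.5):
    """Planning objective: terminal distance to goal plus a hinge penalty
    for waypoints inside an obstacle's safety margin (toy 2-D stand-in)."""
    gx, gy = goal
    ox, oy = obstacle
    lx, ly = path[-1]
    cost = ((lx - gx) ** 2 + (ly - gy) ** 2) ** 0.5
    for x, y in path:
        d = ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5
        cost += max(0.0, margin - d)  # penalize only waypoints near the obstacle
    return cost

def refine(path, goal, obstacle, lr=0.1, steps=200, eps=1e-4):
    """Descend the waypoints directly on the task cost using central
    finite differences: the 'optimize against the planning objective'
    idea, minus any learned network."""
    path = [list(p) for p in path]
    for _ in range(steps):
        for p in path:
            for j in range(2):
                p[j] += eps
                hi = task_cost(path, goal, obstacle)
                p[j] -= 2 * eps
                lo = task_cost(path, goal, obstacle)
                p[j] += eps  # restore, then take a gradient step
                p[j] -= lr * (hi - lo) / (2 * eps)
    return path
```

In the imperative-learning setting, the same objective would instead backpropagate through the planner network's weights so that a single forward pass replaces this per-query optimization.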
Lecture - Perception and Learning for Robotics - Best Practices ML-Project
4.8K views · 3 months ago
by Jonas Frey jonasfrey96.github.io/ Course Description: asl.ethz.ch/education/lectures/perception_and_learning_for_robotics.html Outline: 00:00:00 - Introduction 00:03:05 - Project Planning 00:03:55 - Literature Review 00:08:02 - Python 00:09:40 - Package Management 00:19:20 - Installing CUDA/Torch 00:20:40 - ML Frameworks 00:21:58 - IDEs 00:24:00 - Debugging 00:26:22 - Reproducibility 00:29:1...
ANYmal Parkour: Learning Agile Navigation for Quadrupedal Robots
20K views · 3 months ago
In this video, we demonstrate how our fully-learned method enables robots to conquer challenging scenarios reminiscent of parkour challenges. The paper introduces a hierarchical formulation that trains advanced locomotion skills for various obstacles, including walking, jumping, climbing, and crouching. A high-level policy is used to select and control these skills, allowing the robot to adapt ...
Learning Risk-Aware Locomotion (ICRA 2024)
2.9K views · 4 months ago
Title: Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning Authors: Lukas Schneider, Jonas Frey, Takahiro Miki, Marco Hutter Arxiv: arxiv.org/abs/2309.14246 Code: github.com/leggedrobotics/rsl_rl/tree/algorithms accepted for ICRA 2024
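For readers unfamiliar with distributional RL: instead of a single expected return, the critic predicts a set of return quantiles, and a risk-aware controller can act on a lower-tail statistic such as Conditional Value-at-Risk (CVaR). A minimal sketch of that statistic follows; it is an illustration of the concept, not the paper's implementation, and the function names are assumptions.

```python
def cvar(quantiles, alpha=0.25):
    """Conditional Value-at-Risk of a discrete set of return quantiles:
    the mean of the worst alpha-fraction of outcomes (lower tail)."""
    qs = sorted(quantiles)
    k = max(1, int(round(alpha * len(qs))))
    return sum(qs[:k]) / k

def pick_action(quantiles_per_action, alpha=0.25):
    """A risk-averse policy ranks actions by CVaR instead of the mean,
    preferring actions whose worst-case outcomes are less bad."""
    return max(range(len(quantiles_per_action)),
               key=lambda a: cvar(quantiles_per_action[a], alpha))
```

For example, an action with quantiles [0, 10] has a higher mean than one with [4, 5], but the CVaR ranking prefers the latter because its worst case is far better.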
Learning to walk in confined spaces using 3D representation
11K views · 4 months ago
Takahiro Miki, Joonho Lee, Lorenz Wellhausen and Marco Hutter This paper is accepted to ICRA2024. Arxiv: arxiv.org/abs/2403.00187 Project page: takahiromiki.com/publication-posts/learning-to-walk-in-confined-spaces-using-3d-representation/ 0:00 Introduction 0:20 Method overview 0:27 Low-level policy training 0:40 Low-level policy testing 1:16 High-level policy training 1:57 High-level policy di...
Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg
12K views · 4 months ago
We present a learning-based control approach to enable manipulation skills with a quadruped robot's leg. The paper was accepted for publication at ICRA 2024. Project website: sites.google.com/leggedrobotics.com/pedipulate Paper: arxiv.org/abs/2402.10837 Authors: Philip Arm, Mayank Mittal, Hendrik Kolvenbach, Marco Hutter
X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments (Sup.)
2K views · 5 months ago
Submitted to IEEE Transactions on Robotics. X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments Turcan Tuna, Julian Nubert, Yoshua Nava, Shehryar Khattak, Marco Hutter Paper: arxiv.org/ Project Website: sites.google.com/leggedrobotics.com/x-icp Abstract: Modern robotic systems are required to operate in challenging environments, which entails reliable ...
ANYmal Unleashed: Revolutionizing Search-and-Rescue with Legged Robots (DTC: Deep Tracking Control)
4.9K views · 5 months ago
Explore the frontier of search-and-rescue capabilities of legged robots as ANYmal boldly steps into a disaster site training facility's challenging terrain. Uncover the immense potential of these robotic machines, offering a glimpse into the future of human-robot collaborations in critical and dangerous missions. Join us on this gripping journey where cutting-edge technology meets real-world ch...
Santa vs. Lab Bullies: Festive Showdown at the RSL! 🎅🤖
2.6K views · 6 months ago
A framework for robotic excavation and dry stone construction using on-site materials
8K views · 7 months ago
Resilient Legged Local Navigation: Learning to Traverse with Compromised Perception End-to-End
4K views · 7 months ago
Curiosity-Driven Learning of Joint Locomotion and Manipulation Tasks
49K views · 8 months ago
Bayesian multi-task learning MPC for robotic mobile manipulation - IROS 2023 presentation
1.3K views · 8 months ago
Learning Contact-Based State Estimation for Assembly Tasks
2.4K views · 9 months ago
Demonstration of Telemanipulation during ARHCE 2023
2K views · 9 months ago
ANYexo 2.0: A Fully Actuated Upper-Limb Exoskeleton - Presentation for IEEE CASE 2023
6K views · 10 months ago
Versatile Multi-Contact Planning and Control for Legged Loco-Manipulation
10K views · 10 months ago
iPlanner: Imperative Path Planning (RSS 2023)
6K views · 10 months ago
LSTP: Long Short-Term Motion Planning for Legged and Legged-Wheeled Systems
5K views · 10 months ago
Towards Legged Locomotion on Steep Planetary Terrain
3K views · 11 months ago
Scientific Exploration of Challenging Planetary Analog Environments with a Team of Legged Robots
10K views · 11 months ago
A Helping Hand: Alma's Cybathlon Challenge
1.3K views · 1 year ago
Human-Robot Attachment for Exoskeletons - Self-Adapting Attachment Mechanism
52K views · 1 year ago
Human-Robot Attachment - Representative Experiment Video: Self-Attachment
34K views · 1 year ago
😳😦😍🔥 I'm scared.
How do you ensure safety?
pretty smart
You scared the NBA players
incredibly awesome, just imagine seeing these huge machines throwing and catching giant boulders at each other because they found a better way to move boulders faster
nice one - so you used Gazebo or Isaac Sim?
In the paper they say Isaac Gym is used for training and Gazebo is used for evaluation of the trained controller.
wow
👏Great Job
I listened with the sound off and I genuinely can't tell whether this is real or cgi. It moves kinda weird and unnatural, I think that's why. Not necessarily a bad thing if it does the job well - certainly the design and especially the software is very cool :)
You competed so well! Why third place?
Tactile and force feedback information amount and accuracy needed for (near) human-like operation would be interesting to understand indeed 🙂
Yeah I was wondering about adding whiskers ...
👍👍👍👍👍👍👍
I wanna see teleoperated bots on rails running up and down space stations and mars transit ships!
i want one of these at my job so i can sit at home instead Lol
Cool
Is there any way to download pdf exercises ?
I would envision in future where people will ride on this thing like a horse or bike, VERY COOL!!!
Yesss! That’s what I’m sayin
So you have roller skating dog.....what else can it do?
Would it help if it had a neck ?
Wondered what that dog was saying
I have seen the future of food delivery here, well done 👏 would love to work for you guys 😍
Thanks!!
There is another application for this in that it could be used for off-road vehicles navigating difficult terrain. This is a bit different than, say, a robotaxi driving on a city street or open country road where there is traffic and the vehicles are moving at high speed. Here in the off-road environment, traffic, changing lanes, and such are not much of an issue, but the vehicles are moving slowly and having to pick their way through rough terrain. BTW, I have a number of videos on my YouTube channel that give examples of use cases where one might need something like this to have a bot drive a tank, truck, e-bike, or such off road.
A variant of this could be to have a humanoid robot ride an e-bike (or e-trike, e-unicycle, or e-scooter), dismount and push it when needed, then remount the bike and ride on. This would extend this controller to humanoid robots where the wheels are not part of the bot but are separate. It allows legacy products that already exist and are already optimized for efficiency to serve the wheeled function, so the bot and the bike act as one, just like the wheeled robot dog but with even more versatility. The control is much the same; the only addition is the AI to mount and dismount the e-bike and to push it when walking. Even that is a plus, in that it would be easier for the bot to maintain balance while pushing a bike, since the bike gives it more stability and makes it less likely to fall. Imagine a humanoid robot riding, say, a cargo e-bike and doing last-mile deliveries: it rides along the street just as any cyclist would, gets off at its destination, leaves its package, then remounts its cargo e-bike and moves on to the next delivery. This is something a robot dog (even on wheels) would not be able to do.
Cool future
Too bad it doesn't have a neck, you could save a lot of time with a neck.
Would a firearm be able to disable it?
I wonder how this would function with mecanum wheels on it as well
Great job! Can't wait to see it in action!
Really Impressive Work!
AHaaaaa !
Looks smooth and ready for the roller rink! For promotional rather than practical purposes.
None of this is going to turn out well. All of this will be used to enslave humanity. It's the elites who have ultimately funded and pushed society to build such things, so they can be used for warring, control, and enslavement of other humans, so these elites can play pharaoh once more... and the useful idiots will build whatever they say.
Cool Work!
Amazing work, ppl!
Super nice work!🎉
this is so cool
Fantastic, thanks to you guys it won't be possible to kick robots so that they fall and smash on the ground... wait, let me rethink... we will have fewer chances to destroy a robot with a malfunction on a killing mission? Future A.I. is going to wipe us out, one after another... didn't you guys learn from the sci-fi movies? Machines against humans? Terminator, Matrix, I, Robot? At least leave a vulnerable area on the robot... We live in really interesting times!
You obviously didn't learn from them: the lesson wasn't not to make AI. It was to treat them like living things and give them place in society.
What are the benefits of the RL-based controller over say a robust MPC?
Handling model mismatch (e.g. unknown payload), nonlinearities, assumption violations (e.g. slippery), uncertainties (perception noises)
@@user-es2zs4br2k the robustness is NOT due to some magic in RL but due to the fact that via offline training the solution map of the robust MPC problem gets stored in the neural network, while traditional MPC solvers can not solve such complex problems online. To me, they’re just different methods to solve the same optimization problem.
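The "solution map stored in a neural network" idea from the reply above can be sketched in miniature: offline, fit a cheap function approximator to (state, action) pairs produced by an expert solver; online, evaluate the surrogate instead of solving. Below is a toy linear least-squares stand-in for the approximator; all names are hypothetical and a real system would use a network over a high-dimensional state.

```python
def fit_linear_policy(states, actions):
    """Fit action ~ w * state + b by least squares to (state, action)
    pairs that could come from an offline expert solver: a toy stand-in
    for 'storing the solution map' in a function approximator."""
    n = len(states)
    ms = sum(states) / n
    ma = sum(actions) / n
    w = sum((s - ms) * (a - ma) for s, a in zip(states, actions)) \
        / sum((s - ms) ** 2 for s in states)
    b = ma - w * ms
    return lambda s: w * s + b  # cheap online evaluation, no solver call
```

The trade-off mirrors the comment thread: the expensive optimization is paid once offline, and the online controller only pays a function evaluation.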
Super, super work!
Hello, how much does a set of this equipment cost per excavator?
every day you know new artificial form of life automatized
What a time to be alive!
Looks promising, add an arm manipulator and you are almost gold.
Will do
Tachikoma, version 1
Exactly
Wow, the joint articulation and the tires' tolerance and response to changes are shocking. The tires are what give the precision some margin, and the joint response shows agility that truly surpasses humans. The research team is amazing.
Thank you 🙏
It reminds me of the wheelers from the movie Return to Oz
It’s a great movie 🎥
True yes
Wow! AKIRA 炭団 has been realized
Finally!!
Nice
❤❤❤
❤️❤️❤️
What app?
Early tachikoma.
It’s just the beginning
I know what this is for. This is so that they can install a gun on that arm for shooting at targets, or, if the gun gets detached somehow, the robot will pick up a kitchen knife and stab the targets.