Design of LIGHTDOG: A high payload-to-weight, hose-less hydraulic quadrupedal robot
Shin, Seunghoon; Hong, Seungwoo; Kim, Min-Su; Oh, Jun-Ho; Park, Hae-Won
Abstract
This paper presents LIGHTDOG, a torque-controlled, hydraulically actuated quadrupedal robot designed for a high power-to-weight ratio and substantial payload capacity. Hydraulic systems pose complexity, weight, and thermal-management challenges, which are addressed by embedding all oil channels inside the robot, inspired by biological vascular structures. This embedding is enabled by a distinctive robot body design featuring integrated oil channels and by a double-vane rotary actuator design that internalizes the oil channels within the joints. These oil channels not only contribute to the robot's compactness and light weight, but also allow heat to spread throughout the body and dissipate through the robot's frame, managing heat during robot motion without the need for external radiators. The rotary actuator incorporates a structure designed to reduce internal leakage and friction torque, lowering energy losses in the hydraulic system. Optimization methods were applied to the slider-crank mechanism and to hydraulic actuator sizing to reduce lateral and axial forces, as well as the energy consumed by the hydraulic actuators. We validated the feasibility and payload capacity of LIGHTDOG through several experiments. Weighing 45 kg, LIGHTDOG demonstrates a payload capacity of 130 kg during a squatting motion, significantly exceeding its own weight.
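The abstract does not give the actuator's dimensions, but the output torque of a vane-type rotary actuator like the double-vane design mentioned above follows the standard formula T = n·ΔP·b·(R_o² − R_i²)/2. The sketch below illustrates that relation; every dimension and pressure in it is hypothetical, not taken from LIGHTDOG.

```python
# Illustrative torque estimate for an n-vane hydraulic rotary actuator.
# T = n * dP * b * (Ro**2 - Ri**2) / 2  (ideal, loss-free)
# All parameter values below are hypothetical examples.

def vane_actuator_torque(dp_pa, vane_height_m, r_outer_m, r_inner_m, n_vanes=2):
    """Ideal output torque of a vane-type rotary actuator [N*m]."""
    return n_vanes * dp_pa * vane_height_m * (r_outer_m**2 - r_inner_m**2) / 2.0

# Example: 21 MPa pressure difference, 30 mm vane height,
# 35 mm / 15 mm outer and inner radii, double vane.
torque = vane_actuator_torque(21e6, 0.030, 0.035, 0.015)
print(round(torque, 1))  # 630.0 N*m
```

Doubling the vane count doubles the torque at a given pressure, which is one reason double-vane designs suit compact, high-torque joints.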
Bibliographic Information
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, v.45, no.2, pp. 285-307, 2026-02
Learning Quadrupedal Locomotion for a Heavy Hydraulic Robot Using an Actuator Model
Lee, Minho; Kim, Hyeonseok; Kim, Jin Tak; Park, Sangshin; Lee, Jeong Hyun; Cho, Jungsan; Hwangbo, Jemin
Abstract
The simulation-to-reality (sim-to-real) transfer of large-scale hydraulic robots presents a significant challenge in robotics because of their inherently slow control response and complex fluid dynamics. The complex dynamics result from the multiple interconnected cylinder structure and the differing flow rates of the cylinders. These characteristics make detailed simulation of all joints impractical, rendering it unsuitable for reinforcement learning (RL) applications. In this work, we propose an analytical actuator model driven by hydraulic dynamics to represent the complicated actuators. The model predicts joint torques for all 12 actuators in under 1 μs, allowing rapid processing in RL environments. We compare our model with neural-network-based actuator models and demonstrate its advantages in data-limited scenarios. The locomotion policy trained in RL with our model is deployed on BeTheX-Q, a hydraulic quadruped robot weighing over 300 kg. This work is the first demonstration of stable and robust command-tracking locomotion successfully transferred with RL to a heavy hydraulic quadruped robot, demonstrating advanced sim-to-real transferability.
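The paper's analytical actuator model is not reproduced in the abstract, but the core of any such model is mapping chamber pressures to joint torque through the cylinder geometry. The sketch below shows only that steady-state mapping as a simplified stand-in, omitting the valve and flow dynamics the authors model; all dimensions and pressures are hypothetical.

```python
import math

# Steady-state hydraulic cylinder torque: chamber pressures act on unequal
# piston areas (full bore on the head side, an annulus on the rod side), and
# the net force maps to joint torque through the linkage moment arm.
# A simplified stand-in for the paper's model; all values are hypothetical.

def cylinder_torque(p_head_pa, p_rod_pa, bore_m, rod_m, moment_arm_m):
    """Joint torque from chamber pressures acting on the piston [N*m]."""
    area_head = math.pi * (bore_m / 2) ** 2            # full piston area
    area_rod = area_head - math.pi * (rod_m / 2) ** 2  # rod-side annulus
    force = p_head_pa * area_head - p_rod_pa * area_rod
    return force * moment_arm_m

# 16 mm bore, 8 mm rod, 40 mm moment arm, 10 MPa / 2 MPa chamber pressures.
tau = cylinder_torque(10e6, 2e6, 0.016, 0.008, 0.040)
print(round(tau, 2))  # 68.36
```

Because this is a handful of arithmetic operations per joint, an analytical model of this shape evaluates in well under a microsecond, which is what makes it attractive inside an RL training loop.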
LVI-Q: Robust LiDAR-Visual-Inertial-Kinematic Odometry for Quadruped Robots Using Tightly-Coupled and Efficient Alternating Optimization
Marsim, Kevin Christiansen; Oh, Minho; Yu, Byeongho; Lee, Seungjae; Nahrendra, I. Made Aswin; Lim, Hyungtae; Myung, Hyun
Abstract
Autonomous navigation for legged robots in complex and dynamic environments relies on robust simultaneous localization and mapping (SLAM) systems to accurately map the surroundings and localize the robot, ensuring safe and efficient operation. While prior sensor-fusion-based SLAM approaches have integrated various sensor modalities to improve robustness, these algorithms remain susceptible to estimation drift in challenging environments because they rely on unsuitable fusion strategies. We therefore propose a robust LiDAR-visual-inertial-kinematic odometry system that integrates information from multiple sensors, including a camera, LiDAR, an inertial measurement unit (IMU), and joint encoders, for visual and LiDAR-based odometry estimation. Our system employs a fusion-based pose-estimation approach that runs optimization-based visual-inertial-kinematic odometry (VIKO) and filter-based LiDAR-inertial-kinematic odometry (LIKO) depending on measurement availability. In VIKO, we utilize a foot-preintegration technique and enforce robust LiDAR-visual depth consistency using superpixel clusters in a sliding-window optimization. In LIKO, we incorporate foot kinematics and employ a point-to-plane residual in an error-state iterative Kalman filter (ESIKF). Compared with other sensor-fusion-based SLAM algorithms, our approach shows robust performance across public and long-term datasets.
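The point-to-plane residual mentioned for the LIKO branch has the generic form r = nᵀ(Rp + t − q): a LiDAR point p in the body frame is transformed by the estimated pose (R, t) and its signed distance to a local plane (point q, unit normal n) is measured. The sketch below shows only this generic residual, not the authors' ESIKF implementation; all values are illustrative.

```python
import numpy as np

# Generic point-to-plane residual used by LiDAR-inertial filters:
# r = n^T (R p + t - q), the signed distance of the transformed LiDAR
# point to a locally fitted plane. Values below are illustrative only.

def point_to_plane_residual(R, t, p, q, n):
    """Signed distance of the transformed point to the plane [m]."""
    return float(n @ (R @ p + t - q))

R = np.eye(3)                  # identity rotation
t = np.array([0.0, 0.0, 0.1])  # pose estimate with a 10 cm vertical offset
p = np.array([1.0, 0.0, 0.0])  # LiDAR point in the body frame
q = np.array([0.0, 0.0, 0.0])  # point on the (horizontal) plane
n = np.array([0.0, 0.0, 1.0])  # upward unit normal
print(point_to_plane_residual(R, t, p, q, n))  # 0.1
```

In an ESIKF, residuals of this form are stacked over many point-plane pairs and drive the iterated measurement update that corrects the pose estimate.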
Inertial-Joint Learning-aided Robust Pose Estimation for Legged Robots
Kim, Yeeun; Choi, Junwan; Myung, Hyun
Abstract
Legged robots have recently received much attention due to their maneuverability in various environments. For autonomously operating legged robots in harsh conditions such as bushes, mountains, and mud, stable and accurate state estimation is necessary. However, estimating the robot's pose under such slippery and visually uninformative conditions is difficult. This paper therefore proposes a novel framework for legged-robot state estimation. Our algorithm utilizes a deep inertial-joint factor based on an inertial-joint network. Real-world experiments validate that the deep inertial-joint factor improves the performance of the state estimator.
Conference Name
International Conference on Robot Intelligence Technology and Applications (RiTA 2023)
BIG-STEP: Better Initialized State Estimator for Legged Robots with Fast and Robust Ground Segmentation
Song, Seunggyu; Yu, Byeongho; Oh, Minho; Myung, Hyun
Abstract
Legged robots are crucial in various applications, from search-and-rescue operations to exploration missions in challenging terrain. Accurate estimation of the robot's state is paramount for precise and reliable navigation. However, estimating the state of legged robots presents unique challenges due to inherent uncertainties, dynamics, and environmental interactions. We propose a novel state estimator for legged robots that leverages the ground plane to mitigate errors, especially in the z-component of the state estimate. By exploiting the information provided by the effectively estimated ground plane, which serves as a reliable reference, the proposed estimator compensates for errors and enhances the accuracy of the estimated state. To validate the effectiveness of the proposed state estimator, real-world experiments are conducted on a legged robot platform. The results demonstrate significant improvements in state-estimation accuracy, particularly in the z-component, compared with conventional state-estimation methods. The proposed state estimator can potentially enhance the performance and autonomy of legged robots in various applications, including locomotion control, terrain mapping, and environment perception. Furthermore, its robustness and accuracy make it well suited for scenarios where precise state estimation is crucial for safe and effective operation.
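The core idea of using the ground plane as a reference for the z-component can be illustrated as a scalar Kalman-style update: the drift-prone predicted base height is fused with a height derived from the segmented ground plane. This is only a sketch of that idea, not the paper's estimator; the gains and noise values are hypothetical.

```python
# Scalar Kalman-style height correction against an estimated ground plane.
# Illustrative sketch only; variances and heights below are hypothetical.

def correct_height(z_pred, var_pred, z_plane, var_meas):
    """Fuse the predicted base height with the ground-plane-derived height."""
    k = var_pred / (var_pred + var_meas)     # Kalman gain
    z_new = z_pred + k * (z_plane - z_pred)  # corrected height
    var_new = (1.0 - k) * var_pred           # reduced uncertainty
    return z_new, var_new

# Drifted estimate says 0.55 m; the ground plane implies 0.50 m.
z, v = correct_height(0.55, 0.04, 0.50, 0.01)
print(round(z, 3), round(v, 4))  # 0.51 0.008
```

Because the plane measurement is far less noisy than integrated leg odometry in z, the update pulls the estimate strongly toward the plane-derived height while leaving the better-observed x and y components untouched.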
Conference Name
International Conference on Control, Automation and Systems (ICCAS 2023)