Diffusion models demonstrate superior performance in capturing complex distributions from large-scale datasets, providing a promising solution for quadrupedal locomotion control. However, offline policies are sensitive to out-of-distribution (OOD) states due to the limited state coverage of the datasets. In this work, we propose a two-stage learning framework that combines offline learning and online preference alignment for legged locomotion control. In the offline stage, the diffusion planner learns the joint distribution of state-action sequences from expert datasets without using reward labels. Subsequently, we perform online interaction in the simulation environment based on the trained offline planner, which significantly mitigates the OOD issue and improves robustness. Specifically, we propose a novel weak preference labeling method that requires neither ground-truth rewards nor human preferences. The proposed method exhibits superior stability and velocity-tracking accuracy in pacing, trotting, and bounding gaits at both low and high speeds, and transfers zero-shot to a real Unitree Go1 robot.
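As a rough illustration of the two stages, the sketch below shows (i) an offline behavior-cloning loss for a diffusion planner over state-action sequences without reward labels, and (ii) a weak preference labeling rule for the online alignment stage. The toy denoiser, the linear noising schedule, and the use of velocity-tracking error as the preference criterion are assumptions for illustration only, not the paper's actual implementation.

```python
# Illustrative sketch only: planner architecture, noising schedule, and the
# weak-preference criterion are hypothetical stand-ins, not the paper's code.
import torch
import torch.nn as nn


class DiffusionPlanner(nn.Module):
    """Toy denoiser over flattened state-action sequences."""

    def __init__(self, seq_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(seq_dim + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, seq_dim),
        )

    def forward(self, noisy_seq, cond, t):
        # Predict the noise that was added to the state-action sequence.
        return self.net(torch.cat([noisy_seq, cond, t], dim=-1))


def offline_bc_loss(planner, seq, cond):
    """Stage 1: behavior cloning via denoising, using no reward labels."""
    t = torch.rand(seq.size(0), 1)            # diffusion time in [0, 1]
    noise = torch.randn_like(seq)
    noisy = (1 - t) * seq + t * noise         # simple linear noising (assumed)
    return ((planner(noisy, cond, t) - noise) ** 2).mean()


def weak_preference_label(track_err_a: float, track_err_b: float) -> int:
    """Stage 2 labeling rule (assumed): prefer the simulated rollout with the
    lower velocity-tracking error; no ground-truth reward or human annotator."""
    return 0 if track_err_a <= track_err_b else 1


# Usage example with random data.
planner = DiffusionPlanner(seq_dim=32, cond_dim=4)
seq, cond = torch.randn(8, 32), torch.randn(8, 4)
offline_bc_loss(planner, seq, cond).backward()
print(weak_preference_label(0.12, 0.30))      # -> 0 (rollout A preferred)
```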
The overall illustration of the proposed framework. (1) Generate Datasets: offline datasets for the pacing, trotting, and bounding gaits are collected with an expert PPO policy on the walk-these-ways task. (2) Behavior Cloning: given a conditioning input, the diffusion policy produces a sequence of states and actions. (3) Preference Alignment: the offline diffusion planner is aligned using the proposed weak preference labels. (4) Sim2Real: the refined policy is deployed on the Unitree Go1 robot.