Preference Aligned Diffusion Planner for Quadrupedal Locomotion Control

Xinyi Yuan*1, Zhiwei Shang*2, Zifan Wang2, Chenkai Wang4, Zhao Shan3,
Meixin Zhu✉️5, Chenjia Bai✉️3, Weiwei Wan1, Kensuke Harada1, Xuelong Li3
1Osaka University, 2Hong Kong University of Science and Technology (Guangzhou), 3Institute of Artificial Intelligence (TeleAI), China Telecom, 4Southern University of Science and Technology, 5Southeast University

Tasks accomplished by the proposed architecture.

(a-b): trotting gait in simulation and in the real-world test; (c-d): pacing gait in simulation and in the real-world test; (e-f): bounding gait in simulation and in the real-world test.

Abstract

Diffusion models demonstrate superior performance in capturing complex distributions from large-scale datasets, providing a promising solution for quadrupedal locomotion control. However, the robustness of a diffusion planner is inherently limited by the diversity of the pre-collected dataset. To mitigate this issue, we propose a two-stage learning framework that enhances the capability of the diffusion planner under a limited, reward-agnostic dataset. In the offline stage, the diffusion planner learns the joint distribution of state-action sequences from expert datasets without using reward labels. In the subsequent online stage, the trained planner interacts with the simulation environment, which significantly diversifies the original behaviors and thus improves robustness. Specifically, we propose a novel weak preference labeling method that requires neither ground-truth rewards nor human preferences. The proposed method exhibits superior stability and velocity-tracking accuracy for the pacing, trotting, and bounding gaits at different speeds and transfers zero-shot to a real Unitree Go1 robot.
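The abstract only outlines the offline stage. Below is a minimal sketch of what the behavior-cloning objective could look like, assuming a PyTorch-style diffusion planner with an epsilon-prediction head; the interface (`planner.q_sample`, `planner.denoise`) is an illustrative assumption, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def behavior_cloning_step(planner, batch, optimizer, num_diffusion_steps=100):
    """One offline-stage update: fit the diffusion planner to the joint
    distribution of expert state-action sequences (no reward labels)."""
    states, actions = batch                        # [B, H, s_dim], [B, H, a_dim]
    traj = torch.cat([states, actions], dim=-1)    # joint state-action sequence
    t = torch.randint(0, num_diffusion_steps, (traj.shape[0],), device=traj.device)
    noise = torch.randn_like(traj)
    noisy_traj = planner.q_sample(traj, t, noise)  # forward diffusion (assumed API)
    pred_noise = planner.denoise(noisy_traj, t)    # epsilon prediction (assumed API)
    loss = F.mse_loss(pred_noise, noise)           # standard denoising loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Conditioning inputs (e.g., gait type or commanded velocity) would be passed alongside the noisy trajectory; they are omitted here for brevity.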


Overview of the proposed framework. (1) Dataset generation: offline datasets covering the pacing, trotting, and bounding gaits are collected with an expert PPO policy on the walk-these-ways task. (2) Behavior cloning: given a conditioning input, the diffusion planner produces a sequence of states and actions. (3) Preference alignment: the offline diffusion planner is aligned using the proposed weak preference labels. (4) Sim2Real: the refined policy is deployed on the Unitree Go1 robot.
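The caption does not specify how the weak preference labels are computed or applied. One plausible reading is sketched below, assuming a velocity-tracking proxy score and a DPO-style fine-tuning loss; both are illustrative choices rather than the paper's exact formulation, and `planner.log_prob` stands in for a denoising-loss likelihood surrogate, since exact diffusion likelihoods are intractable.

```python
import torch.nn.functional as F

def weak_preference_label(traj_a, traj_b, cmd_vel):
    """Decide which rollout is preferred without ground-truth rewards or human
    input. Hypothetical proxy: velocity-tracking error plus a roll/pitch
    stability penalty."""
    def score(traj):
        vel_err = (traj["base_vel"] - cmd_vel).abs().mean()  # tracking accuracy
        wobble = traj["base_rpy"][:, :2].abs().mean()        # roll/pitch deviation
        return -(vel_err + 0.5 * wobble)
    return score(traj_a) > score(traj_b)

def alignment_step(planner, ref_planner, traj_a, traj_b, cmd_vel, optimizer, beta=0.1):
    """One online-stage update: push the planner toward the weakly preferred
    rollout with a DPO-style loss against the frozen offline (reference) planner."""
    if weak_preference_label(traj_a, traj_b, cmd_vel):
        win, lose = traj_a, traj_b
    else:
        win, lose = traj_b, traj_a
    # "seq" holds the state-action sequence consumed by the planner; log_prob is
    # a placeholder for a denoising-loss likelihood surrogate.
    log_ratio_win = planner.log_prob(win["seq"]) - ref_planner.log_prob(win["seq"])
    log_ratio_lose = planner.log_prob(lose["seq"]) - ref_planner.log_prob(lose["seq"])
    loss = -F.logsigmoid(beta * (log_ratio_win - log_ratio_lose)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```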

Video