DNAct: Diffusion Guided Multi-Task 3D Policy Learning

Ge Yan*, Yueh-Hua Wu*, Xiaolong Wang
UC San Diego
*Equal contribution

DNAct is a generalizable language-conditioned multi-task policy, capable of learning a 3D and multi-modal representation for multi-task manipulation with limited demonstrations.

Abstract

This paper presents DNAct, a language-conditioned multi-task policy framework that integrates neural rendering pre-training and diffusion training to enforce multi-modality learning in action sequence spaces. To learn a generalizable multi-task policy with few demonstrations, the pre-training phase of DNAct leverages neural rendering to distill 2D semantic features from foundation models such as Stable Diffusion into a 3D space, providing a comprehensive semantic understanding of the scene. As a result, it enables a wide range of applications to challenging robotic tasks that require rich 3D semantics and accurate geometry.

Furthermore, we introduce a novel approach that utilizes diffusion training to learn a vision and language feature encapsulating the inherent multi-modality in multi-task demonstrations. By reconstructing the action sequences from different tasks via the diffusion process, the model learns to distinguish different modalities, improving the robustness and generalizability of the learned representation. DNAct significantly surpasses SOTA NeRF-based multi-task manipulation approaches with over a 30% improvement in success rate.


3D Semantic Scene Understanding

DNAct learns a unified 3D and semantic representation via a generalizable NeRF. We not only use neural rendering to synthesize novel views in RGB but also predict the corresponding semantic features from 2D foundation models. By distilling pre-trained 2D features, we learn a generalizable 3D representation with commonsense priors from internet-scale datasets. This pre-trained representation equips the policy with out-of-distribution generalization ability. Below are the results of novel view synthesis for both RGB and feature embeddings using only 3 camera views.

Task examples shown: Hit the ball, Stack the block, Put in the bowl.
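To make the distillation objective concrete, here is a minimal PyTorch-style sketch of a joint rendering loss: the NeRF branch volume-renders both RGB and a per-ray semantic feature, and the feature head is supervised by 2D features from a frozen foundation model such as Stable Diffusion. The `nerf` interface, the loss weight, and the variable names are illustrative assumptions rather than the exact DNAct implementation.

```python
import torch.nn.functional as F

def distillation_loss(nerf, rays, gt_rgb, gt_feat, lambda_feat=0.01):
    """Joint RGB + feature-distillation loss for one batch of rays.

    nerf    : generalizable NeRF with two heads returning per-ray RGB and a
              per-ray semantic feature (hypothetical interface).
    rays    : (N, 6) ray origins and directions sampled from the 3 camera views.
    gt_rgb  : (N, 3) ground-truth pixel colors.
    gt_feat : (N, C) 2D features from a frozen foundation model, sampled at
              the same pixels.
    """
    pred_rgb, pred_feat = nerf(rays)            # volume-rendered color / feature
    loss_rgb = F.mse_loss(pred_rgb, gt_rgb)     # photometric reconstruction
    loss_feat = F.mse_loss(pred_feat, gt_feat)  # distill 2D semantics into 3D
    return loss_rgb + lambda_feat * loss_feat
```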

Visualization of 3D Point Features

We visualize the 3D point clouds and the corresponding point features below. Our model effectively learns an accurate 3D semantic representation and parses different objects in the scene.

Task examples shown: Hit the ball, Put in the bowl, Sweep to dustpan, Put in the bin.
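The point-feature visualizations above can be reproduced with a standard recipe: project the high-dimensional per-point features onto their top three principal components and map those to RGB. The NumPy sketch below is a generic visualization technique under that assumption, not necessarily DNAct's exact procedure.

```python
import numpy as np

def features_to_rgb(point_feats):
    """Map (N, C) per-point semantic features to (N, 3) display colors via PCA."""
    feats = point_feats - point_feats.mean(axis=0, keepdims=True)
    # Project onto the top-3 principal components.
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    proj = feats @ vt[:3].T                    # (N, 3)
    # Normalize each channel to [0, 1] for rendering alongside the point cloud.
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return (proj - lo) / (hi - lo + 1e-8)
```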

Method

DNAct first pre-trains a 3D encoder by distilling 2D semantic features from foundation models into a 3D space via neural rendering with NeRF. The pre-trained 3D representation provides a comprehensive semantic understanding of the scene. This pre-training stage improves sample efficiency, which substantially speeds up training and boosts model performance across various task domains. Additionally, we adopt a learning-from-scratch PointNext encoder to capture the accurate 3D geometry of the scene, which offers better generalization and in-domain adaptation capabilities.
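Since the paragraph above describes two branches, a frozen NeRF-pre-trained semantic encoder and a learning-from-scratch PointNext geometric encoder, a minimal sketch of how their features could be combined is given below. The concatenation-based fusion, the feature dimensions, and the class and argument names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FusedPointEncoder(nn.Module):
    """Sketch: fuse a frozen, NeRF-pre-trained 3D encoder with a PointNext
    encoder trained from scratch (the fusion scheme is an assumption)."""

    def __init__(self, pretrained_encoder, pointnext, d_sem=128, d_geo=128, d_out=256):
        super().__init__()
        self.pretrained_encoder = pretrained_encoder.eval()  # frozen semantic branch
        for p in self.pretrained_encoder.parameters():
            p.requires_grad_(False)
        self.pointnext = pointnext  # geometric branch, trained end-to-end
        self.fuse = nn.Linear(d_sem + d_geo, d_out)

    def forward(self, points):
        with torch.no_grad():
            sem = self.pretrained_encoder(points)  # semantic 3D features
        geo = self.pointnext(points)               # accurate 3D geometry
        return self.fuse(torch.cat([sem, geo], dim=-1))
```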

Subsequently, DNAct is jointly optimized via a diffusion process that trains a 3D-feature-conditioned noise predictor to reconstruct action sequences across different tasks. The diffusion training enables the model not only to distinguish the representations of the various modalities arising from multi-task demonstrations but also to become more robust and generalizable to novel objects and arrangements.
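A DDPM-style training step illustrates the idea: corrupt a demonstrated action sequence with noise and train a conditioned predictor to recover that noise. The `noise_predictor` signature and the conditioning on scene and language features below are hypothetical; this is a generic diffusion-training sketch under those assumptions.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(noise_predictor, actions, scene_feat, lang_emb, alphas_cumprod):
    """One denoising-diffusion training step on an action sequence.

    noise_predictor : eps_theta(noisy_actions, t, scene_feat, lang_emb) -> noise
                      (hypothetical conditioning interface).
    actions         : (B, T, A) demonstrated action sequence.
    scene_feat      : (B, D) fused 3D scene feature.
    lang_emb        : (B, L) language-instruction embedding.
    alphas_cumprod  : (K,) cumulative products of the noise schedule.
    """
    B = actions.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (B,), device=actions.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noise = torch.randn_like(actions)
    # Forward process: corrupt the clean action sequence at noise level t.
    noisy_actions = a_bar.sqrt() * actions + (1.0 - a_bar).sqrt() * noise
    # The predictor learns to recover the injected noise, conditioned on the scene.
    pred_noise = noise_predictor(noisy_actions, t, scene_feat, lang_emb)
    return F.mse_loss(pred_noise, noise)
```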