SemGeoMo: Dynamic Contextual Human Motion Generation with Semantic and Geometric Guidance

1ShanghaiTech University, 2The University of Hong Kong
Given sequential point clouds of interactive targets, SemGeoMo generates realistic and high-quality human interactive motions along with corresponding textual descriptions. By leveraging both semantic and geometric guidance, our method ensures the semantic coherence and geometric accuracy of the generated results.

Abstract

Generating reasonable and high-quality human interactive motions in a given dynamic environment is crucial for understanding, modeling, transferring, and applying human behaviors to both virtual and physical robots. In this paper, we introduce an effective method, SemGeoMo, for dynamic contextual human motion generation, which fully leverages text-affordance-joint multi-level semantic and geometric guidance in the generation process, improving the semantic rationality and geometric correctness of the generated motions. Our method achieves state-of-the-art performance on three datasets and demonstrates superior generalization capability across diverse interaction scenarios.

Method

The pipeline of our two-stage framework. An LLM annotator provides the semantic guidance. SemGeo Hierarchical Guidance Generation takes textual information and the sequential point cloud as conditions and generates affordance-level and joint-level guidance. SemGeo-guided Motion Generation then utilizes this semantic and geometric information to generate responsive human motion.
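To make the two-stage flow concrete, the following is a minimal sketch of the pipeline's data flow only. The function names, tensor shapes, and the stand-in computations (centroid-based affordance, broadcast motion) are illustrative assumptions, not the paper's actual models, which would be learned networks (e.g. a diffusion-based motion generator) conditioned on the predicted guidance.

```python
import numpy as np


def annotate_with_llm(object_name: str) -> str:
    # Stand-in for the LLM annotator: in the real system an LLM produces a
    # fine-grained interaction description; here we return a fixed template.
    return f"A person lifts the {object_name} and moves it."


def hierarchical_guidance(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stage 1 stand-in: predict affordance-level and joint-level guidance.

    points: (T, N, 3) sequential point cloud of the interactive target.
    Returns a per-point contact affordance map (T, N) and a coarse two-hand
    joint trajectory (T, 2, 3) -- both placeholders for learned predictors.
    """
    centroid = points.mean(axis=1, keepdims=True)        # (T, 1, 3)
    dist = np.linalg.norm(points - centroid, axis=-1)    # (T, N)
    affordance = np.exp(-dist)                           # closer -> higher score
    # Hypothetical hand targets offset from the object centroid.
    hands = np.stack([centroid[:, 0] + 0.1, centroid[:, 0] - 0.1], axis=1)
    return affordance, hands


def generate_motion(text: str, affordance: np.ndarray,
                    hand_traj: np.ndarray, n_joints: int = 22) -> np.ndarray:
    """Stage 2 stand-in: in SemGeoMo this would be a guidance-conditioned
    generative model; here we simply broadcast the hand trajectory to a
    full-body joint layout to show the expected output shape (T, J, 3)."""
    anchor = hand_traj.mean(axis=1, keepdims=True)       # (T, 1, 3)
    return np.repeat(anchor, n_joints, axis=1)           # (T, n_joints, 3)
```

A usage pass: given an 8-frame, 256-point cloud, stage 1 yields an `(8, 256)` affordance map and an `(8, 2, 3)` hand trajectory, and stage 2 returns an `(8, 22, 3)` motion tensor.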

Visualization of comparison results for three prompts, comparing Ours against CHOIS, MDM-PC, and OMOMO:
"A person pulls the clothes stand and sets it back down."
"A person lifts the plastic box, moves it, and puts it down."
"A person lifts the trashcan and moves it."

BibTeX


xxx