LaserHuman: Language-guided Scene-aware Human Motion Generation in Free Environment

Peishan Cong1, Ziyi Wang1, Yiming Ren1, Zhiyang Dou3, Wei Yin2, Kai Cheng3, Xiaoxiao Long4, Xinge Zhu5, Jingyi Yu1, Yuexin Ma1

1ShanghaiTech University, 2University of Adelaide, 3University of Science and Technology of China, 4The University of Hong Kong, 5The Chinese University of Hong Kong

Abstract

Language-guided scene-aware human motion generation is of great significance for entertainment and robotics. To address the limitations of existing datasets, we introduce LaserHuman, which is distinguished by its inclusion of genuine human motions captured in 3D environments, unbounded free-form natural language descriptions, a mix of indoor and outdoor scenarios, and dynamic, ever-changing scenes. The diverse modalities of captured data and rich annotations present great opportunities for research on Scene-Text-to-Motion generation and can also facilitate the development of real-life applications. Moreover, to generate semantically consistent and physically plausible human motions, we propose a multi-conditional diffusion model that is simple yet effective, achieving state-of-the-art performance on existing datasets.
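As a rough illustration of the multi-conditional idea, the sketch below shows a toy diffusion denoiser that attends jointly over a noisy motion sequence, a text embedding, and a scene embedding, trained with a standard DDPM noise-prediction loss. All module names, dimensions, the token-fusion strategy, and the noise schedule are assumptions made for illustration; this is not the paper's actual architecture.

    # Minimal sketch of a multi-conditional diffusion denoiser for
    # Scene-Text-to-Motion generation (all design choices are assumptions).
    import torch
    import torch.nn as nn

    class MultiConditionDenoiser(nn.Module):
        """Predicts the noise added to a motion sequence, conditioned on
        pooled text and scene embeddings (hypothetical design)."""

        def __init__(self, motion_dim=263, cond_dim=512, n_layers=4, n_heads=8):
            super().__init__()
            self.motion_proj = nn.Linear(motion_dim, cond_dim)
            self.time_embed = nn.Sequential(nn.Linear(1, cond_dim), nn.SiLU(),
                                            nn.Linear(cond_dim, cond_dim))
            layer = nn.TransformerEncoderLayer(d_model=cond_dim, nhead=n_heads,
                                               batch_first=True)
            self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.out = nn.Linear(cond_dim, motion_dim)

        def forward(self, x_t, t, text_emb, scene_emb):
            # x_t: (B, T, motion_dim) noisy motion; t: (B,) diffusion step
            # text_emb, scene_emb: (B, cond_dim) pooled condition embeddings
            h = self.motion_proj(x_t)
            t_tok = self.time_embed(t.float().unsqueeze(-1)).unsqueeze(1)
            cond = torch.stack([text_emb, scene_emb], dim=1)  # (B, 2, cond_dim)
            # Prepend timestep and condition tokens, denoise with self-attention.
            h = self.backbone(torch.cat([t_tok, cond, h], dim=1))
            return self.out(h[:, 3:])  # drop the 3 conditioning tokens

    # One hypothetical training step with an illustrative cosine schedule.
    model = MultiConditionDenoiser()
    x0 = torch.randn(2, 60, 263)                 # clean motion batch
    t = torch.randint(0, 1000, (2,))             # random diffusion steps
    alpha_bar = torch.cos(t.float() / 1000 * 1.5).view(-1, 1, 1) ** 2
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    pred = model(x_t, t, torch.randn(2, 512), torch.randn(2, 512))
    loss = nn.functional.mse_loss(pred, noise)

In a full system the text and scene embeddings would come from pretrained encoders (e.g., a language model and a point-cloud encoder); here they are random tensors so the snippet runs standalone.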

LaserHuman consists of large-scale sequences of rich human motions and abundant human interactions captured in diverse real scenarios, paired with free-form language descriptions, providing valuable data for conditioned human motion generation. We show two scenarios, where the colored humans are the annotated targets and the white humans are the humans they interact with in the dynamic scene. Human mesh color from light to dark indicates temporal progression.
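For concreteness, below is a hypothetical sketch of what a single training sample could contain, inferred only from the description above. All field names, shapes, and types are assumptions, not the dataset's actual format.

    # Hypothetical per-sample schema for LaserHuman (illustrative only).
    from dataclasses import dataclass, field
    from typing import List
    import numpy as np

    @dataclass
    class LaserHumanSample:
        description: str                 # free-form natural language description
        scene_points: np.ndarray         # (N, 3) point cloud of the dynamic scene
        target_motion: np.ndarray        # (T, J, 3) annotated target-human joints
        interacting_motions: List[np.ndarray] = field(default_factory=list)
        # motions of the other ("white") humans the target interacts with
        is_outdoor: bool = False         # scenarios mix indoor and outdoor

    sample = LaserHumanSample(
        description="Walk around the bench and sit down next to the person.",
        scene_points=np.zeros((100000, 3), dtype=np.float32),
        target_motion=np.zeros((120, 24, 3), dtype=np.float32),
    )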

Visualization.

Generation results with comparison.

Generation results.

BibTeX


        @article{cong2024laserhuman,
          title={LaserHuman: Language-guided Scene-aware Human Motion Generation in Free Environment},
          author={Cong, Peishan and Wang, Ziyi and Dou, Zhiyang and Ren, Yiming and Yin, Wei and Cheng, Kai and Sun, Yujing and Long, Xiaoxiao and Zhu, Xinge and Ma, Yuexin},
          journal={arXiv preprint arXiv:2403.13307},
          year={2024}
        }