Gait Recognition in Large-scale Free Environment via Single LiDAR

Xiao Han1, Yiming Ren1, Peishan Cong1, Yujing Sun2, Jingya Wang1, Lan Xu1, Yuexin Ma1,

1ShanghaiTech University, 2The University of Hong Kong

We propose FreeGait, a new LiDAR-based in-the-wild gait dataset covering various crowd densities and occlusion levels across different real-life scenes. FreeGait is captured in diverse, large-scale, real-life scenarios with free trajectories, which introduces challenges such as (1) occlusions, (2) noise from surrounding crowds, and (3) noise from carried objects, as shown on the right.

Abstract

Human gait recognition is crucial in multimedia, enabling identification through walking patterns without direct interaction and enhancing integration across various media forms in real-world applications such as smart homes, healthcare, and non-intrusive security. LiDAR's ability to capture depth makes it pivotal for robotic perception and promising for real-world gait recognition. In this paper, we present the Hierarchical Multi-representation Feature Interaction Network (HMRNet), which performs robust gait recognition from a single LiDAR. Prevailing LiDAR-based gait datasets primarily derive from controlled settings with predefined trajectories, leaving a gap with real-world scenarios. To facilitate LiDAR-based gait recognition research, we introduce FreeGait, a comprehensive gait dataset collected in large-scale, unconstrained settings and enriched with multi-modal and varied 2D/3D data. Notably, our approach achieves state-of-the-art performance on the prior SUSTech1K dataset and on FreeGait.

Method

The pipeline of our method. We extract dense body-structure information from range views, and undistorted geometric and motion features from point clouds via motion-aware feature embedding (MAFE). Then, the adaptive cross-representation mapping (ACM) module fuses the features of the two representations hierarchically at different levels. Lastly, the gait-saliency feature enhancement (GSFE) module highlights the most gait-informative features for final identification.
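The page does not give equations for the ACM fusion step, so the following is only a minimal NumPy sketch of one *plausible* gated cross-representation fusion at a single pyramid level. The function name `acm_fuse`, the per-channel sigmoid gate, and the projection matrix `w_gate` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    """Numerically plain logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def acm_fuse(range_feat, point_feat, w_gate):
    """Hypothetical gated fusion of two same-shape feature vectors.

    range_feat, point_feat: (C,) features from the range-view and
        point-cloud branches at one level of the hierarchy.
    w_gate: (2C, C) stand-in for a learned projection that produces
        per-channel mixing weights in (0, 1).
    """
    joint = np.concatenate([range_feat, point_feat])       # (2C,)
    gate = sigmoid(joint @ w_gate)                         # (C,) gates
    # Convex per-channel combination of the two representations.
    return gate * range_feat + (1.0 - gate) * point_feat   # (C,)
```

Because the gate is a convex weight per channel, each fused value lies between the corresponding range-view and point-cloud feature values; a real implementation would learn `w_gate` end-to-end and apply this at several feature levels.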

Quantitative comparisons

Dataset Structure

Dataset file structure

FreeGait
├── subject id                # person ID
│   ├── device                # view
│   │   ├── sequence          # sequence
│   │   │   ├── image         # silhouettes
│   │   │   │   └── image.pkl     # silhouette sequence
│   │   │   ├── lidar         # raw LiDAR point clouds
│   │   │   │   └── lidar.pkl     # raw LiDAR point cloud sequence
│   │   │   ├── range_pkl     # range images projected from LiDAR point clouds
│   │   │   │   └── range_pkl.pkl # projected range image sequence
│   │   │   ├── kp3d          # 3D human keypoints
│   │   │   │   └── kp3d.pkl      # 3D human keypoint sequence
│   │   │   └── smpl          # 3D human SMPL pose, shape, translation
│   │   │       └── smpl.pkl      # SMPL pose, shape, translation sequence
Please use Python's pickle.load() to read each data .pkl file.

BibTeX

@inproceedings{han2024gait,
        title={Gait Recognition in Large-scale Free Environment via Single LiDAR},
        author={Han, Xiao and Ren, Yiming and Cong, Peishan and Sun, Yujing and Wang, Jingya and Xu, Lan and Ma, Yuexin},
        booktitle={ACM Multimedia 2024},
        year={2024}
      }