A Multimodal Data Processing System for LiDAR-Based Human Activity Recognition


Increasingly, the task of detecting and recognizing human actions has been delegated to neural networks that process camera or wearable-sensor data. Because cameras are sensitive to lighting conditions and wearable sensors provide only sparse measurements, neither modality alone captures enough data to perform the task confidently. Range sensors, such as light detection and ranging (LiDAR), can therefore complement these modalities to perceive the environment more robustly. Recently, researchers have explored ways to apply convolutional neural networks to 3-D data; however, these methods typically rely on a single modality and cannot draw on information from complementary sensor streams to improve accuracy. This article proposes a framework that tackles human activity recognition by leveraging the benefits of sensor fusion and multimodal machine learning. Given both RGB and point cloud data, our method classifies the activities being performed by subjects using a region-based convolutional neural network (R-CNN) and a 3-D modified Fisher vector network. Evaluation on a custom-captured multimodal dataset demonstrates that the model achieves remarkably accurate human activity classification (90%). Furthermore, this framework can be used for sports analytics, understanding social behavior, and surveillance, and, perhaps most notably, by autonomous vehicles (AVs) to support data-driven decision-making policies in urban areas and indoor environments.
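The abstract pairs an RGB branch (R-CNN) with a point cloud branch (3-D modified Fisher vector network). A minimal sketch of one common way to combine such branches is score-level (late) fusion, shown below; the paper's actual fusion scheme is not specified here, and the activity labels, weights, and function names are illustrative assumptions.

```python
# Hypothetical late-fusion sketch: each branch (RGB R-CNN, point cloud 3DmFV)
# is assumed to emit per-class probabilities; a weighted average combines them.
# The label set and weighting are assumptions, not the paper's specification.

ACTIVITIES = ["walking", "sitting", "standing"]  # assumed label set

def fuse_scores(rgb_probs, pc_probs, w_rgb=0.5):
    """Weighted average of per-class scores from the two modality branches."""
    fused = [w_rgb * r + (1.0 - w_rgb) * p for r, p in zip(rgb_probs, pc_probs)]
    best = max(range(len(fused)), key=fused.__getitem__)  # argmax over classes
    return ACTIVITIES[best], fused

# Example: RGB branch is confident in "sitting"; point cloud branch agrees.
label, fused = fuse_scores([0.2, 0.7, 0.1], [0.1, 0.6, 0.3])
print(label)  # sitting
```

One appeal of score-level fusion is that a failing modality (e.g. a camera in poor lighting) can be down-weighted without retraining either branch.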


BibTeX Reference

@article{roche2021multimodal,
  author={Roche, Jamie and De-Silva, Varuna and Hook, Joosep and Möncks, Mirco and Kondoz, Ahmet},
  journal={IEEE Transactions on Cybernetics},
  title={A Multimodal Data Processing System for LiDAR-Based Human Activity Recognition},
  year={2021},
  pages={1-14}
}
APA Reference

Roche, J., De-Silva, V., Hook, J., Möncks, M., & Kondoz, A. (2021). A Multimodal Data Processing System for LiDAR-Based Human Activity Recognition. IEEE Transactions on Cybernetics, 1-14.
