Robot-Based Real-Time Point Cloud Digital Twin Modeling in Augmented Reality

Authors

  • Hengxu You, Informatics, Cobots and Intelligent Construction (ICIC) Lab, Department of Civil and Coastal Engineering, University of Florida
  • Fang Xu, Informatics, Cobots and Intelligent Construction (ICIC) Lab, Department of Civil and Coastal Engineering, University of Florida
  • Eric Du, Informatics, Cobots and Intelligent Construction (ICIC) Lab, Department of Civil and Coastal Engineering, University of Florida

DOI:

https://doi.org/10.57922/tcrc.602

Keywords:

Augmented Reality, 3D indoor reconstruction, LiDAR, Digital Twins, SLAM

Abstract

Augmented Reality (AR) can serve as an intuitive user interface for reality capture models in structural inspection, emergency response, and teleoperation. For a seamless AR user experience, reality capture models usually need to be collected, processed, and rendered in real time. However, most scanning technologies require prolonged in-situ data collection, and the large volume of data to be transferred and processed makes real-time scene reproduction difficult. It is also challenging to render high-fidelity captured scenes on most commercial AR devices, which lack powerful graphics processing units for rapid rendering, especially when the scene contains dynamic objects. This paper proposes an AR-based real-time 3D scene rendering method based solely on low-resolution mobile LiDAR. A Velodyne VLP-16 is used to collect dynamic point cloud data, given its stability in mobile scanning and its low price compared with higher-resolution LiDAR devices. To keep data transmission to the AR headset economical, each point message includes only the intensity value and the three-dimensional coordinates, so the transmitted cloud is relatively sparse. To compensate for this sparsity, a SLAM-based scene texture enrichment algorithm is implemented. Using IMU data and the LOAM algorithm, the scan from each angular position is transformed into a coordinate system whose origin is the initial position, yielding a supplementary reconstruction with denser texture information. To eliminate duplicate or near-duplicate scan points, voxel down-sampling is applied after all scans are merged, and the down-sampled point cloud serves as a fixed background. Finally, VFX and shader pipelines render the textural and physical information through the ROS-Unity interface. The real-time scanning results, including dynamic objects, are displayed at satisfactory quality on HoloLens 2. The method is expected to contribute to rapid 3D scene reconstruction based on economical sensing and processing, as well as to the integration of LiDAR with extended reality technologies.
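To make the sparse-transmission step concrete, the sketch below subscribes to a LiDAR driver and keeps only the x, y, z, and intensity fields of each point, mirroring the minimal point message described in the abstract. This is an illustrative assumption, not the paper's implementation: the topic name /velodyne_points and the node name are placeholders, assuming a standard ROS 1 setup where the Velodyne driver publishes sensor_msgs/PointCloud2.

```python
# Minimal sketch: relay only (x, y, z, intensity) per point for economical
# transmission to the AR headset. Topic and node names are assumptions.
import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def on_scan(msg):
    # read_points yields (x, y, z, intensity) tuples; skip_nans drops
    # invalid returns so only usable points are kept
    pts = np.array(
        list(point_cloud2.read_points(
            msg, field_names=("x", "y", "z", "intensity"), skip_nans=True)),
        dtype=np.float32)
    # pts is an (N, 4) array -- the minimal payload forwarded to the headset
    rospy.loginfo("scan with %d points, %.1f KB", len(pts), pts.nbytes / 1024.0)

rospy.init_node("sparse_scan_relay")
rospy.Subscriber("/velodyne_points", PointCloud2, on_scan, queue_size=1)
rospy.spin()
```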
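The scan-accumulation step can be sketched as a plain coordinate transform: each scan, paired with a LOAM-estimated 4x4 pose, is mapped into the frame anchored at the initial scanner position and stacked into one denser cloud. The pose and scan values below are synthetic placeholders, not data or code from the paper.

```python
# Sketch of transforming each angular scan into the initial-position frame.
import numpy as np

def to_initial_frame(points_xyz, pose):
    """Map an (N, 3) scan into the coordinate system whose origin is the
    initial scanner position, given the scan's 4x4 homogeneous pose."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # (N, 4)
    return (pose @ homo.T).T[:, :3]

# toy example: a scan taken 1 m ahead of the origin, rotated 90 deg about z
theta = np.pi / 2
pose = np.array([[np.cos(theta), -np.sin(theta), 0.0, 1.0],
                 [np.sin(theta),  np.cos(theta), 0.0, 0.0],
                 [0.0,            0.0,           1.0, 0.0],
                 [0.0,            0.0,           0.0, 1.0]])
scan = np.random.rand(1000, 3)          # placeholder for a real LiDAR scan
merged = to_initial_frame(scan, pose)   # stack such results over all scans
```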
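The de-duplication step names voxel down-sampling but no specific library; the sketch below uses Open3D as one plausible choice, with an assumed 5 cm voxel size. One representative point is kept per occupied voxel, so duplicate or near-duplicate returns from overlapping scans collapse into the fixed background cloud. A smaller voxel preserves more texture detail at the cost of a heavier cloud for the headset to render.

```python
# Sketch of voxel down-sampling the merged multi-scan cloud with Open3D.
# The library choice and the 0.05 m voxel size are assumptions.
import numpy as np
import open3d as o3d

# stand-in for the merged cloud accumulated in the previous sketch
merged = np.random.rand(100_000, 3) * 10.0

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(merged)
# keep one representative point per occupied voxel
background = pcd.voxel_down_sample(voxel_size=0.05)
print(len(pcd.points), "->", len(background.points))
o3d.io.write_point_cloud("background.pcd", background)
```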

Published

2022-08-19

Section

Academic Papers