Robot-Based Real-Time Point Cloud Digital Twin Modeling in Augmented Reality
DOI: https://doi.org/10.57922/tcrc.602

Keywords: Augmented Reality, 3D indoor reconstruction, LiDAR, Digital Twins, SLAM

Abstract
Augmented Reality (AR) can serve as an intuitive user interface to reality capture models for structural inspection, emergency response, and teleoperation. For a seamless AR user experience, reality capture models usually need to be collected, processed, and rendered in real time. However, most scanning technologies require prolonged in-situ data collection, and the large volume of data to be transferred and processed makes real-time scene reproduction difficult. It is also challenging to render high-fidelity captured scenes on most commercial AR devices, which lack powerful graphics processing units for rapid rendering, especially when dynamic objects are present. This paper proposes an AR-based real-time 3D scene rendering method based solely on low-resolution mobile LiDAR. A Velodyne VLP-16 is used to collect dynamic point cloud data, given its high stability in mobile scanning and its low price compared with high-resolution LiDAR devices. To keep data transmission to the AR headset economical, each point message includes only the intensity and three-dimensional coordinates, and is therefore relatively sparse. To compensate for this sparsity, a scene texture information enrichment algorithm is implemented based on SLAM. Using IMU data and the LOAM algorithm, the scan from each angular position is transformed into a coordinate system with the initial position as the origin, yielding a supplementary scene reconstruction with denser texture information. To eliminate duplicate or near-duplicate scan points, voxel down-sampling is applied after all scans are merged, and the down-sampled point cloud is used as a fixed background. Finally, VFX and Shader processes are applied to render textural and physical information through ROS-Unity. The real-time scene scanning results, including dynamic objects, can be displayed with satisfactory quality on a HoloLens 2.

The method is expected to contribute to rapid 3D scene reconstruction based on economical sensing and processing methods, as well as to the integration of LiDAR with extended reality technologies.
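The core of the enrichment step described in the abstract is registering each scan into the initial-position frame and then voxel down-sampling the merged cloud. The sketch below illustrates that pipeline in plain NumPy; it is a minimal assumption-laden illustration, not the authors' implementation: the function names are invented, the rotation/translation pose is assumed to come from LOAM/IMU estimation, and one centroid per occupied voxel is one common down-sampling choice.

```python
import numpy as np

def transform_to_initial_frame(points, R, t):
    """Map one scan's points (N x 3) into the frame whose origin is the
    initial scanner position, given an estimated pose (rotation R, 3x3;
    translation t, length 3) such as LOAM/IMU would provide."""
    return points @ R.T + t

def voxel_downsample(points, voxel_size):
    """Keep one centroid per occupied voxel, removing duplicate or
    near-duplicate points that appear after merging many scans."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Toy usage: two overlapping "scans" merged and thinned.
scan_a = np.array([[0.00, 0.0, 0.0], [0.01, 0.0, 0.0], [1.0, 0.0, 0.0]])
scan_b = transform_to_initial_frame(scan_a, np.eye(3), np.zeros(3))
merged = np.vstack([scan_a, scan_b])
background = voxel_downsample(merged, voxel_size=0.1)
```

In the paper's pipeline, the down-sampled cloud plays the role of the fixed background, while newly arriving sparse scans are rendered on top of it as the dynamic foreground.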
License
Copyright (c) 2022 University of New Brunswick
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors/employers retain all proprietary rights in any process, procedure or article of manufacture described in the Work. Authors/employers may reproduce or authorize others to reproduce the Work, material extracted verbatim from the Work, or derivative works for the author’s personal use or for company use, provided that the source is indicated.