Partner: Michał Pelka

Institute of Mathematical Machines (PL)

Conference papers
1. Będkowski J., Pelka M., Majek K., Fitri T., Naruniec J., Open source robotic 3D mapping framework with ROS - Robot Operating System, PCL - Point Cloud Library and Cloud Compare, 5TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING AND INFORMATICS, 2015-08-10/08-11, Legian-Bali (ID), DOI: 10.1109/ICEEI.2015.7352578, pp.644-649, 2015

Abstract:

We propose an open source robotic 3D mapping framework based on the Robot Operating System, the Point Cloud Library, and Cloud Compare software extended with dataset import and export functionality. The added value is an integrated solution for robotic 3D mapping and new publicly available datasets (accurate 3D maps with geodetic precision) for evaluation purposes. The datasets were gathered by a mobile robot in a stop-scan fashion. The presented results comprise a variety of tools for working with such datasets, for tasks such as: preprocessing (filtering, downsampling), data registration (ICP, NDT), graph optimization (ELCH, LUM), validation (comparison of 3D maps and trajectories), and performance evaluation (plots of various algorithm outputs). The tools form a complete pipeline for 3D data processing. We use this framework as a reference methodology in recent work on SLAM algorithms.
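The ICP registration step mentioned in the abstract can be illustrated with a minimal sketch (not the framework's actual implementation): a naive point-to-point ICP in NumPy, using the Kabsch/SVD solution for the rigid transform and brute-force nearest neighbours. The function names are illustrative only.

```python
import numpy as np

def best_fit_transform(A, B):
    # Kabsch: rigid transform (R, t) that best maps A onto B (both N x 3)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=30):
    # Naive point-to-point ICP; src, dst are N x 3 point clouds.
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small toy clouds).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, nn)
        cur = cur @ R.T + t
    return cur
```

A real pipeline (as in the framework) would use a k-d tree for correspondences, outlier rejection, and a convergence criterion instead of a fixed iteration count.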

Keywords:

Three-dimensional displays, Robot kinematics, Cameras, Mobile communication, Robot sensing systems, XML

Authors' affiliations:

Będkowski J.-other affiliation
Pelka M.-Institute of Mathematical Machines (PL)
Majek K.-Institute of Mathematical Machines (PL)
Fitri T.-Institute of Mathematical Machines (PL)
Naruniec J.-Politechnika Warszawska (PL)
15p.
2. Musialik P., Majek K., Majek P., Pelka M., Będkowski J., Masłowski A., Typiak A., Accurate 3D mapping and immersive visualization for Search and Rescue, RoMoCo 2015, 10th International Workshop on Robot Motion and Control, 2015-07-06/07-08, Poznań (PL), DOI: 10.1109/RoMoCo.2015.7219728, pp.153-158, 2015

Abstract:

This paper concentrates on the topic of gathering, processing, and presenting 3D data for use in Search and Rescue operations. The data are gathered by unmanned ground platforms in the form of 3D point clouds. The clouds are matched and transformed into a consistent, highly accurate 3D model. The paper describes a pipeline for such matching based on the Iterative Closest Point algorithm supported by loop closing done with the LUM method. The pipeline was implemented for parallel computation with NVIDIA CUDA, which leads to higher matching accuracy and lower computation time. An analysis of performance for multiple GPUs is presented. The second problem discussed in the paper is immersive visualization of 3D data for search and rescue personnel. Five strategies are discussed: a plain 3D point cloud, hypsometry, normal vectors, space descriptors, and an approach based on light simulation using the NVIDIA OptiX Ray Tracing Engine. The results of each strategy were shown to end users for validation, and the paper discusses the feedback received. The results of the research are used in the development of a support module for the ICARUS project.
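Of the visualization strategies listed, hypsometry (coloring points by height) is the simplest to sketch. A minimal version, assuming z is the up axis and a blue-green-red ramp; the exact color scheme here is illustrative, not the paper's:

```python
import numpy as np

def hypsometric_colors(points):
    # Map each point's height (z) to an RGB ramp: blue (low) -> green -> red (high).
    z = points[:, 2]
    t = (z - z.min()) / max(np.ptp(z), 1e-9)   # normalize heights to [0, 1]
    r = np.clip(2 * t - 1, 0.0, 1.0)           # ramps up in the top half
    g = 1.0 - np.abs(2 * t - 1)                # peaks at mid height
    b = np.clip(1 - 2 * t, 0.0, 1.0)           # ramps down in the bottom half
    return np.stack([r, g, b], axis=1)         # N x 3 colors in [0, 1]
```

The other strategies (normal vectors, space descriptors, ray-traced lighting) each require more context, e.g. normal estimation or the OptiX API, and are omitted here.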

Keywords:

Three-dimensional displays, Data visualization, Graphics processing units, Image color analysis, Computational modeling, Solid modeling, Pipelines

Authors' affiliations:

Musialik P.-Institute of Mathematical Machines (PL)
Majek K.-Institute of Mathematical Machines (PL)
Majek P.-Institute of Mathematical Machines (PL)
Pelka M.-Institute of Mathematical Machines (PL)
Będkowski J.-other affiliation
Masłowski A.-Politechnika Warszawska (PL)
Typiak A.-other affiliation
15p.

Conference abstracts
1. Pelka M., Majek K., Będkowski J., Testing the affordable system for digitizing USAR scenes, SSRR 2019, IEEE INTERNATIONAL SYMPOSIUM ON SAFETY, SECURITY AND RESCUE ROBOTICS, 2019-09-02/09-04, Würzburg (DE), DOI: 10.1109/SSRR.2019.8848929, pp.104-105, 2019

Abstract:

Affordable technological solutions are always welcome, so we decided to test a backpack-based 3D mapping system for digitizing USAR scenes. The system is composed of an Intel RealSense Tracking Camera T265, three Velodyne VLP-16 lidars, custom electronics for multi-lidar synchronization, and a ZOTAC VR GO backpack computer equipped with a GeForce GTX 1070. This configuration allows the operator to collect and process 3D point clouds to obtain a consistent 3D map. To reach satisfactory accuracy, we use the RealSense Visual Odometry (VO) as an initial guess of the trajectory. Lidar odometry then corrects the trajectory and reduces the scale error from VO. The academic 6DSLAM is used for loop closure, and finally the classical ICP algorithm refines the final 3D point cloud. All steps can be done in the field in reasonable time. The VR backpack can be used for virtual travel through the digital content afterwards. Additionally, a deep neural network performs online object detection using the RealSense camera input.
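The scale correction of the VO trajectory by lidar odometry can be illustrated with a toy sketch: a uniform rescaling about the start point, assuming the lidar odometry supplies the true travelled distance. The actual system is more involved; this function and its name are hypothetical.

```python
import numpy as np

def correct_scale(vo_traj, lidar_dist):
    # Rescale a VO trajectory (N x 3 positions) about its start point so
    # that its total path length matches the lidar-odometry distance.
    seg = np.diff(vo_traj, axis=0)                 # per-step displacements
    vo_len = np.linalg.norm(seg, axis=1).sum()     # total VO path length
    scale = lidar_dist / max(vo_len, 1e-9)         # global scale factor
    return vo_traj[0] + scale * (vo_traj - vo_traj[0])
```

In practice the scale error of monocular-style VO drifts over time, so a real pipeline would estimate scale per segment rather than globally.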

Authors' affiliations:

Pelka M.-Institute of Mathematical Machines (PL)
Majek K.-Institute of Mathematical Machines (PL)
Będkowski J.-IPPT PAN