A Convolutional Network for Semantic Facade Segmentation and Interpretation
Collection editors:
International Society for Photogrammetry and Remote Sensing (ISPRS)
Title of conference publication:
XXIII ISPRS Congress, Commission III, 12-19 July 2016, Prague, Czech Republic
Journal:
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Issue:
XLI-B-3
Organizer (entity):
International Society for Photogrammetry and Remote Sensing (ISPRS)
Conference title:
International Society for Photogrammetry and Remote Sensing Congress (23rd, 2016, Prague)
Venue:
Prague
Year of conference:
2016
Date of conference beginning:
12.07.2016
Date of conference ending:
19.07.2016
Publishing institution:
International Society for Photogrammetry and Remote Sensing (ISPRS)
Year:
2016
Pages from - to:
709-715
Language:
English
Abstract:
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera produces depth data using the Time of Flight (ToF) method and intensity data from the strength of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging, which generates a depth map and an intensity map of 176 x 144 pixels. The generated depth map stores physical depth data with millimetre precision; the generated intensity map contains noisy texture data. We used the intensity maps for extracting tiepoints, and the depth maps for assigning z coordinates to the tiepoints and for point cloud mosaicking. The proposed mosaicking method consists of four steps. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data at each rotation. In the second step, we estimated 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the two corresponding intensity maps and converted into 3D tiepoints using the depth maps. We used a 3D similarity transformation model for estimating the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into global transformations of all point clouds with respect to a reference one. In the last step, the extent of the single depth map mosaic was calculated and the depth value per mosaic pixel was determined by a ray tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. It is expected that the proposed method will be useful for developing an effective 3D indoor mapping method in the future.
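The second and third steps of the abstract (estimating a pairwise 3D similarity transformation from 3D tiepoints, then composing the local transformations into a global one with respect to a reference cloud) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are invented, and the closed-form least-squares (Umeyama-style) solution is one common way to fit a 3D similarity model to tiepoint pairs.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate (s, R, t) such that dst ≈ s * R @ src_point + t, in the
    least-squares sense, from corresponding 3D tiepoints.
    src, dst: (N, 3) arrays of matched 3D tiepoints (N >= 3, non-collinear)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centred point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0          # guard against a reflection solution
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def compose(outer, inner):
    """Chain two similarity transforms: apply `inner` first, then `outer`.
    Used to convert local cloud-to-cloud transforms into a global transform
    relative to the reference cloud."""
    s1, R1, t1 = outer
    s2, R2, t2 = inner
    return s1 * s2, R1 @ R2, s1 * R1 @ t2 + t1
```

Chaining with `compose` gives each cloud's pose in the reference frame: the global transform of cloud k is the composition of the pairwise transforms from cloud k back to cloud 0.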