Pyramid scene parsing network in 3D: Improving semantic segmentation of point clouds with multi-scale contextual information
- Fang, Hao, Lafarge, Florent
- ISPRS Journal of Photogrammetry and Remote Sensing 2019 v.154 pp. 246-258
- algorithms, data collection, geometry, models
- Analyzing and extracting geometric features from 3D data is a fundamental step in 3D scene understanding. Recent work has demonstrated that deep learning architectures can operate directly on raw point clouds, i.e. without the use of intermediate grid-like structures. These architectures are, however, not designed to efficiently encode contextual information between objects. Inspired by a global feature aggregation algorithm designed for images (Zhao et al., 2017), we propose a 3D pyramid module to enrich pointwise features with multi-scale contextual information. Our module can be easily coupled with 3D semantic segmentation methods operating on 3D point clouds. We evaluated our method on three large-scale datasets with four baseline models. Experimental results show that the use of enriched features brings significant improvements to the semantic segmentation of indoor and outdoor scenes.
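To illustrate the general idea of enriching pointwise features with multi-scale context, the sketch below average-pools features over coarse voxel grids at several resolutions and concatenates the pooled context back onto each point. This is a minimal NumPy illustration of pyramid-style aggregation, not the paper's exact module; the function name, grid scales, and pooling choice are assumptions for demonstration.

```python
import numpy as np

def pyramid_context_features(points, feats, scales=(1, 2, 4)):
    """Enrich pointwise features with multi-scale context.

    For each scale s, the scene bounding box is divided into an s x s x s
    grid, features are average-pooled within each cell, and the pooled
    context is broadcast back to every point in that cell. (Illustrative
    sketch only; not the paper's exact 3D pyramid module.)
    """
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins + 1e-9  # avoid division by zero
    enriched = [feats]
    for s in scales:
        # Assign each point to a cell of the s x s x s grid.
        cells = np.clip(np.floor((points - mins) / extent * s).astype(int), 0, s - 1)
        ids = cells[:, 0] * s * s + cells[:, 1] * s + cells[:, 2]
        # Average features within each occupied cell.
        pooled = np.zeros((s ** 3, feats.shape[1]))
        counts = np.zeros(s ** 3)
        np.add.at(pooled, ids, feats)
        np.add.at(counts, ids, 1)
        pooled /= np.maximum(counts, 1)[:, None]
        # Broadcast the cell-level context back to each point.
        enriched.append(pooled[ids])
    return np.concatenate(enriched, axis=1)

# Toy usage: 100 random points with 8-dim pointwise features.
pts = np.random.rand(100, 3)
f = np.random.rand(100, 8)
out = pyramid_context_features(pts, f)
print(out.shape)  # (100, 32): original 8 dims + 8 pooled dims per scale
```

The enriched features keep the original pointwise descriptors in the first columns, so such a module can be dropped between an existing per-point encoder and its segmentation head without changing either.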