3DMPE: 3D Multi-Perspective Embedding

📅 2019-12-18 · ✍️ Md. Rahat-uz-Zaman, Vahan Huroyan, Stephen Kobourov · 📚 International Conference on 3D Vision 2024 · 🎯 article
We describe a 3-Dimensional Multi-Perspective Embedding (3DMPE) approach for 3D point cloud reconstruction. The algorithm takes as input two or more 2D snapshots, i.e., 2D subspace projections of an unknown 3D point cloud, along with a correspondence between the points, although not every point needs to be present in every projection. Unlike current state-of-the-art algorithms, which require training on many examples and perform well mostly on the types of objects seen during training, ours is an optimization-based (unsupervised) algorithm that solves a simultaneous multi-perspective optimization problem and works well on any type of object. We demonstrate the algorithm's performance on multiple datasets using three quality measures: Earth Mover's distance, Chamfer distance, and ROA. We quantitatively evaluate the scalability and robustness of 3DMPE when varying the number of input projections and the size of the input data. Finally, we demonstrate the robustness of 3DMPE under various noise regimes, including incorrect correspondences between points and incorrect distance measurements.
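To make the problem setup concrete, here is a minimal sketch of reconstructing a 3D point cloud from multiple 2D subspace projections. This is not the 3DMPE algorithm: it assumes the linear projection matrices are known, that every point appears in every snapshot, and that the snapshots are noise-free, none of which the paper requires. The function and variable names are illustrative only.

```python
import numpy as np

def reconstruct_point_cloud(projections, snapshots):
    """
    Recover 3D points from two or more 2D subspace projections.

    A simple least-squares baseline, assuming each 2x3 projection matrix
    is known and point correspondences across snapshots are complete.

    projections : list of (2, 3) arrays -- known linear projection matrices
    snapshots   : list of (n, 2) arrays -- 2D coordinates, rows aligned by
                                           the given point correspondence
    returns     : (n, 3) array of reconstructed 3D coordinates
    """
    # Stack all projections into one tall linear system A x_i = b_i per point
    A = np.vstack(projections)                        # shape (2K, 3)
    n = snapshots[0].shape[0]
    points = np.zeros((n, 3))
    for i in range(n):
        b = np.concatenate([s[i] for s in snapshots])  # shape (2K,)
        # Solve each point independently in the least-squares sense
        points[i], *_ = np.linalg.lstsq(A, b, rcond=None)
    return points


if __name__ == "__main__":
    # Toy example: recover a random cloud from its xy- and xz-plane snapshots
    rng = np.random.default_rng(0)
    truth = rng.normal(size=(100, 3))
    P_xy = np.array([[1., 0., 0.], [0., 1., 0.]])
    P_xz = np.array([[1., 0., 0.], [0., 0., 1.]])
    snaps = [truth @ P_xy.T, truth @ P_xz.T]
    recon = reconstruct_point_cloud([P_xy, P_xz], snaps)
    print("max reconstruction error:", np.abs(recon - truth).max())
```

With two orthogonal projections the stacked system is full rank, so this baseline recovers the cloud exactly; the setting addressed by 3DMPE is harder, with unknown viewing directions, missing points, and noisy or incorrect correspondences handled through a simultaneous multi-perspective optimization.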