Depth Reconstruction of Low-Texture Surfaces from a Single View
Abstract
We propose a deep learning-based method for recovering depth maps and surface normals of low-texture surfaces from a single RGB image. Our approach relies on an autoencoding network with multiple decoders that are trained jointly. It builds on SegNet, a semantic segmentation network, with design modifications intended to speed up training. We demonstrate that, despite significantly reducing the number of network parameters and the training time, our method achieves performance comparable to that of the original network. We also present a new dataset of depth maps and surface normals for low-texture surfaces.
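To make the multi-decoder design concrete, the following is a minimal PyTorch sketch of a shared encoder feeding two jointly trained decoders, one for depth and one for surface normals. All layer sizes, the class name MultiDecoderNet, and the L1 loss weighting are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch (assumed layer sizes, not the paper's exact network):
# a shared convolutional encoder with two jointly trained decoders,
# one predicting a depth map and one predicting surface normals.
import torch
import torch.nn as nn

class MultiDecoderNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: downsamples the RGB input to a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth decoder: upsamples back to one channel (depth per pixel).
        self.depth_decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        # Normal decoder: upsamples to three channels (nx, ny, nz per pixel).
        self.normal_decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        # Both decoders read the same shared features, so their gradients
        # jointly update the encoder during training.
        features = self.encoder(rgb)
        return self.depth_decoder(features), self.normal_decoder(features)

# Joint training step: the two decoder losses share one backward pass.
model = MultiDecoderNet()
rgb = torch.randn(1, 3, 64, 64)
depth_gt, normals_gt = torch.randn(1, 1, 64, 64), torch.randn(1, 3, 64, 64)
depth_pred, normals_pred = model(rgb)
loss = nn.functional.l1_loss(depth_pred, depth_gt) \
     + nn.functional.l1_loss(normals_pred, normals_gt)
loss.backward()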