Remarkable progress has been made in 3D reconstruction from single-view RGB-D inputs. MCC, the current state-of-the-art
method in this field, achieves unprecedented success by combining vision Transformers with large-scale training. However,
we identify two key limitations of MCC: 1) The Transformer decoder is inefficient in handling a large number of query points;
2) The 3D representation struggles to recover high-fidelity details. In this paper, we propose a new approach called NU-MCC that
addresses these limitations. NU-MCC includes two key innovations: a Neighborhood decoder and a Repulsive Unsigned Distance
Function (Repulsive UDF). First, our Neighborhood decoder introduces center points as an efficient proxy of input visual features,
allowing each query point to attend only to a small neighborhood. This design not only results in much faster inference speed
but also enables the exploitation of finer-scale visual features for improved recovery of 3D textures. Second, our Repulsive UDF
is a novel alternative to the occupancy field used in MCC, significantly improving the quality of 3D object reconstruction.
Compared to standard UDFs, whose results suffer from holes, our proposed Repulsive UDF achieves more complete surface
reconstruction. Experimental results demonstrate that NU-MCC is able to learn a strong 3D representation, significantly
advancing the state of the art in single-view 3D reconstruction. In particular, it outperforms MCC by 9.7% in
F1-score on the CO3D-v2 dataset while running more than 5x faster.
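The core idea of the Neighborhood decoder, restricting each query point's attention to the features of a few nearby center points, can be sketched as follows. This is a minimal illustration under assumed shapes, not the paper's implementation: the neighborhood size k, the number of centers, the feature dimension, and the distance-based attention weights are all arbitrary choices made for the example.

```python
import numpy as np

def neighborhood_attention(queries, centers, center_feats, k=4):
    """For each 3D query point, attend only to the features of its
    k nearest center points (a sketch of neighborhood cross-attention).

    queries:       (Q, 3) query point coordinates
    centers:       (C, 3) center point coordinates (proxies of visual features)
    center_feats:  (C, D) feature vector attached to each center
    Returns:       (Q, D) aggregated feature per query point
    """
    # Pairwise squared distances between queries and centers: (Q, C)
    d2 = ((queries[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest centers for each query: (Q, k)
    nn = np.argsort(d2, axis=1)[:, :k]
    # Gather the features of those neighbors: (Q, k, D)
    neigh_feats = center_feats[nn]
    # Distance-based attention weights, softmax over the k neighbors only
    logits = -np.take_along_axis(d2, nn, axis=1)           # (Q, k)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Weighted sum of neighbor features: (Q, D)
    return (w[:, :, None] * neigh_feats).sum(axis=1)

# Toy usage: 5 query points, 16 centers, 8-dim features
rng = np.random.default_rng(0)
out = neighborhood_attention(rng.normal(size=(5, 3)),
                             rng.normal(size=(16, 3)),
                             rng.normal(size=(16, 8)))
print(out.shape)  # (5, 8)
```

Because each query attends to k centers instead of all input tokens, the attention cost scales with the neighborhood size rather than the full feature set, which is the source of the speedup the abstract describes.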