The image shows a detailed academic poster presented at the CVPR (Computer Vision and Pattern Recognition) conference held in Seattle, WA, June 17-21, 2024. The poster is titled "2S-UDF: A Novel Two-stage UDF Learning Method for Robust Non-watertight Model Reconstruction from Multi-view Images" by authors Junkai Deng, Fei Hou, Xuhui Chen, Wencheng Wang, and Ying He, affiliated with Nanyang Technological University and the Chinese Academy of Sciences. The poster introduces a method for reconstructing non-watertight 3D models from multi-view images, stating its objective and detailing the methodology in two key stages: Stage 1 learns an approximate UDF through volume rendering, and Stage 2 refines the UDF through a direct weight-mapping approach. Comparison sections benchmark the method on the DTU benchmark and other datasets against existing methods such as NeuralUDF, NeUDF, and NeAT. A pipeline section outlines the steps from volume rendering to the loss function, figures provide visual comparisons of reconstructed models, and a reference section cites the key background literature. The poster stands as a substantial contribution to 3D model reconstruction.

Text transcribed from the image:

ISCAS | Nanyang Technological University | University of Chinese Academy of Sciences (中国科学院大学)

2S-UDF: A Novel Two-stage UDF Learning Method for Robust Non-watertight Model Reconstruction from Multi-view Images
Junkai Deng, Fei Hou, Xuhui Chen, Wencheng Wang, Ying He

OBJECTIVE
Our objective is to reconstruct non-watertight models from multi-view images.

RELATED
NeuralUDF [1]: introduces an indicator function, increasing rendering complexity and instability.
NeUDF [2]: requires numerical approximation of an unbounded integral.
NeAT [3]: requires object masks for training.

KEY STAGES
Stage 1: Approximate UDF Learning via Volume Rendering
We use a NeuS [4]-like volume rendering pipeline. We map the distance value f(r(t)) to the volume density σ(r(t)) by

    σ(r(t)) = s·e^(−f(r(t))) / (1 + e^(−f(r(t))))

Stable: ✓ | No unbounded integral: ✓ | Opaque: ✗ (almost) | Unbiased: ✗ (almost)

Stage 2: UDF Refinement via Direct Weight Mapping
We directly map the UDF value to the color weight.
A: UDF maximum with accumulated weight below threshold; not a truncation point.
B: On the surface.
C: Ray truncation point.
D: On the surface, but the ray is already truncated, so no color contribution.
Opaque: ✓ | Unbiased: ✓

PIPELINE
(Diagram) A Dist. Pred. MLP (with Softplus) and a Color Pred. MLP feed the "Key Stages" 1 & 2 rendering; a Param. BG Pred. MLP provides the background; composition yields the rendered image, which is compared against the GT image by the loss.

DEEPFASHION3D BENCHMARK
(Qualitative comparisons: GT, Ours, NeuralUDF [1], NeUDF [2].)

DTU BENCHMARK
(Qualitative comparisons: GT, Ours, NeUDF [2].)

OTHER BENCHMARKS
(Qualitative comparisons: Ours, NeUDF [2], NeuralUDF [1], NeAT [3].)

CVPR, Seattle, WA, June 17-21, 2024

REFERENCES
[1] Long et al. NeuralUDF: Learning Unsigned Distance Fields for Multi-View Reconstruction of Surfaces with Arbitrary Topologies. CVPR 2023.
[2] Liu et al. NeUDF: Learning Neural Unsigned Distance Fields with Volume Rendering. CVPR 2023.
[3] Meng et al. NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-View Images. CVPR 2023.
[4] Wang et al. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. NeurIPS 2021.
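The Stage-1 density mapping transcribed from the poster can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the scale factor `s` and the piecewise sampling intervals `deltas` are illustrative assumptions (the poster does not specify them), and the weight computation is the standard volume-rendering quadrature used in NeuS-like pipelines:

```python
import numpy as np

def udf_to_density(f, s=64.0):
    """Stage-1 mapping from the poster: sigma = s * exp(-f) / (1 + exp(-f)).

    f is the unsigned distance f(r(t)) at a sample; s is an assumed
    sharpness-like scale (its exact role/schedule is not on the poster).
    """
    return s * np.exp(-f) / (1.0 + np.exp(-f))

def render_weights(densities, deltas):
    """Standard volume-rendering quadrature: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the accumulated transmittance before sample i."""
    alpha = 1.0 - np.exp(-densities * deltas)          # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T_i
    return trans * alpha
```

Note that the mapping is monotonically decreasing in the distance, so samples near the zero-level set of the UDF receive the highest density. Stage 2 instead maps UDF values directly to the color weights (bypassing a density), but the poster gives no closed-form expression for that mapping, so it is not sketched here.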