This image shows an academic poster presentation at a conference, specifically from the KAIST SGVR Lab. The poster is titled "UFORecon: Generalizable Sparse-View Surface Reconstruction from Arbitrary and Unfavorable Sets." The research was conducted by Youngju Na, Woo Jae Kim, Kyu Beom Han, Suhyeon Ha, and Sung-Eui Yoon. The poster is divided into several sections:

1. **Goal**: Describes the aim of reconstructing surfaces from sparse-view images that include arbitrary and unfavorable view combinations. Illustrative figures and comparative results between different sets and methods are shown.
2. **Challenges**: Lists the main challenges in generalizable surface reconstruction, including the assumption of favorably-selected views and overfitting to training view-combinations. It introduces the View-Combination Score (VC-score) as a metric for evaluating the informativeness of view combinations.
3. **Our Approach**: Details the cross-view correlation-aware volume rendering framework employed by the researchers. Diagrams illustrate the framework components and how it handles unfavorable sets of images.
4. **Core Components**: Breaks down the major elements of the approach, such as the Cascaded Cross-View Correlation Frustum and random set training.
5. **Quantitative Results**: Provides data and comparisons of the framework against baseline methods across various quantitative metrics.
6. **Qualitative Results**: Includes visual comparisons of reconstructed surfaces from different methods to demonstrate the effectiveness of the approach.
7. **Conclusion**: Summarizes the finding that UFORecon generalizes to different view-combinations, providing improved 3D surface reconstruction quality even with unfavorable sets.
8. **References**: Lists the literature cited by the authors.

The poster was presented at CVPR 2024 (Seattle, WA, June 17-21, 2024), indicating its contribution to the field of computer vision. A QR code, likely linking to additional references or material, is also included. Adjacent posters and a cylindrical stand, possibly carrying further information or promotional materials, are visible nearby.

Text transcribed from the image:

KAIST SGVR Lab · CVPR 2024, Seattle, WA, June 17-21

**UFORecon: Generalizable Sparse-View Surface Reconstruction from Arbitrary and Unfavorable Sets**
Youngju Na, Woo Jae Kim, Kyu Beom Han, Suhyeon Ha, Sung-Eui Yoon

**1. Goal**
Generalizable surface reconstruction from sparse-view images with arbitrary and unfavorable view combinations.
(Teaser figure: sparse-view images grouped into a favorable set and an unfavorable set; depth maps and reconstructions from VolRecon and Ours on an unfavorable set of N images, with Chamfer Distance values that are not fully legible.)

**2. Challenges**
Previous generalizable surface reconstruction methods:
a) Assume favorably-selected view combinations.
b) Easily overfit to training view-combinations.

View-Combination Score (VC-score): represents the informativeness of view combinations in reconstruction.

VC = Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} s(i, j),   s(i, j) = Σ_P G(θ_ij(P))

where P is a common 3D track, θ_ij is the angle between views i and j at P, and G is a piecewise-Gaussian function.

Metric: Chamfer Distance (CD). Low VC-score leads to severe degradation!
(Bar chart: Chamfer Distance versus View-Combination Score for VolRecon (best set training), Ours (best set training), and Ours (random set training).)
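As a rough illustration of how a view-combination score of this form could be computed, here is a minimal Python sketch. It is not the authors' implementation: the function names, the piecewise-Gaussian parameters (`theta0`, `sigma1`, `sigma2`), and the assumption that common 3D tracks are supplied per view pair are all placeholders.

```python
import numpy as np

def piecewise_gaussian(theta_deg, theta0=5.0, sigma1=1.0, sigma2=10.0):
    """Piecewise-Gaussian weighting of a pairwise viewing angle (degrees).

    Angles near theta0 score highest; the exact parameters used on the
    poster are not stated, so these values are placeholders.
    """
    d = theta_deg - theta0
    sigma = np.where(d <= 0.0, sigma1, sigma2)   # narrower falloff below theta0
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def pairwise_score(cam_i, cam_j, tracks):
    """s(i, j) = sum over common 3D tracks P of G(theta_ij(P)).

    cam_i, cam_j: camera centers, shape (3,); tracks: (M, 3) common 3D points.
    """
    v_i = cam_i[None, :] - tracks                # rays from each track to view i
    v_j = cam_j[None, :] - tracks                # rays from each track to view j
    cos_t = np.sum(v_i * v_j, axis=1) / (
        np.linalg.norm(v_i, axis=1) * np.linalg.norm(v_j, axis=1) + 1e-8)
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return piecewise_gaussian(theta).sum()

def vc_score(cam_centers, common_tracks):
    """VC = sum over all view pairs (i < j) of s(i, j).

    cam_centers: list of (3,) camera centers; common_tracks: dict keyed by
    the pair (i, j) holding the (M, 3) tracks visible in both views.
    """
    n = len(cam_centers)
    return sum(pairwise_score(cam_centers[i], cam_centers[j],
                              common_tracks[(i, j)])
               for i in range(n - 1) for j in range(i + 1, n))
```

In this sketch, view pairs whose common tracks are seen under moderate baseline angles contribute the most, so nearly-degenerate or extremely wide-baseline camera sets (the poster's "unfavorable" combinations) receive low VC-scores.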
**3. Our Approach**
Cross-view correlation-aware volume rendering framework.
(Pipeline diagram labels: FPN; Cross-view Matching Transformer; Correlation Frustum (Sec. 4.3); Differentiable Warping; Geometry-aware Similarity Encoding (Sec. 4.4); Reconstruction Transformer (Sec. 4.5); SDF; Volume Rendering; cross-view features ŝ = cos(f_i, f_j); rendered outputs {C, D}.)

**3.1. Core components**
- Cascaded Cross-View Correlation Frustum: constructs correlation frustums to capture global correlations among views.
- Geometry-aware Similarity Encoding: explicitly encodes pairwise feature similarities as a view-consistent prior (see the sketch after the references).
- Reconstruction Transformers: aggregate global and local correlations to estimate geometry accurately.

**3.2. Random set training**
Randomly selects camera configurations during training to enhance view-combination robustness. Effectively integrates with our correlation-aware framework.

**4.1. Quantitative results**
Comparison with baselines on various VC-scores. Metric: Chamfer Distance (CD).
(Table, entries largely illegible: Chamfer Distance for VolRecon [1], ReTR [2], and Ours under favorable, normal, and unfavorable view combinations, grouped by VC-score.)
w/ and w/o random set training. First row: best set training; second row: random set training.
(Table, entries largely illegible: Chamfer Distance for VolRecon [1], ReTR [2], and Ours with and without random set training.)

**4.2. Qualitative results**
Visual comparisons on DTU [3] and Blended-MVS [4] for favorable, normal, and unfavorable sets (depth maps and reconstructions: VolRecon vs. Ours).

**5. Conclusion**
- Introduce a novel concept in generalizable surface reconstruction: view-combination generalizability.
- Effectively combine and validate the efficacy of the correlation module and the random set training strategy.
- Achieve SoTA performance on all VC-score levels.

**6. References**
[1] Ren et al., CVPR 2023.
[2] Liang et al., NeurIPS 2023.
[3] Aanæs et al., IJCV 2016.
[4] Yao et al., CVPR 2020.
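For illustration, the Geometry-aware Similarity Encoding listed under 3.1 encodes pairwise feature similarities ŝ = cos(f_i, f_j) as a view-consistency prior. The following is a minimal sketch of that idea, assuming PyTorch; the function name, tensor shapes, and the way warped features are provided are assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_encoding(warped_feats):
    """Encode pairwise feature similarities s_hat = cos(f_i, f_j).

    warped_feats: (V, P, C) features from V source views, sampled at the
    same P 3D points (e.g. via differentiable warping).
    Returns a (P, V*(V-1)//2) tensor with one cosine-similarity column per
    view pair, usable as a view-consistency prior for each sample point.
    """
    v, p, c = warped_feats.shape
    f = F.normalize(warped_feats, dim=-1)            # unit-norm features per view
    sims = []
    for i in range(v - 1):
        for j in range(i + 1, v):
            sims.append((f[i] * f[j]).sum(dim=-1))   # cos(f_i, f_j), shape (P,)
    return torch.stack(sims, dim=-1)
```

Because the similarities are computed per sample point from whichever source views are provided, the encoding does not presuppose a favorably-selected view combination, which is consistent with the poster's random set training strategy.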