Detailed image caption: This educational poster documents a research project affiliated with the University of Amsterdam. A visible section heading reads "4 Experiment", and the poster credits the authors Zuoyue Li, Zhenqiang Li, and Zhaopeng (the name appears truncated in the image).

**The main sections of the poster include:**

1. **Why 3D Generation?**
   - **Consistency naturally holds:** 3D generation keeps geometry and rendering consistent across viewpoints.
   - **No preset trajectory needed:** Unlike alternative methods, the camera trajectory does not have to be fixed in advance.
2. **Why Diffusion Models Instead of GANs?**
   - **Better performance:** The poster argues that diffusion models produce higher-quality results.
   - **Stability during training:** Diffusion models also train more stably than GANs (Generative Adversarial Networks).
3. **Comparison of Different Generative Models:**
   - **3D GAN-based method**
   - **2D GAN-based method**
   - **2D diffusion-model-based method** (the approach the poster appears to favor)
4. **Baseline Comparison:**
   - **Dataset:** The HoliCity dataset is used for the baseline comparison.
   - **GT Geometry:** Ground-truth geometry is used.
   - **Metrics:** Several metrics evaluate performance, though their values are not legible in this excerpt.
   - **Compared methods:** Sat2Vid, InfiniCity, and MVDiffusion are compared against the authors' approach.
5. **Model Generalization:**
   - **OmniCity Dataset:** Used to demonstrate that the model generalizes across satellite, ground-view, bird-view, and other perspectives.
   - **Visualization:** Numerous grids of images showcase visual results from the different generative models and datasets.
6. **Model Architecture Illustration:**
   - A schematic shows the composition of the generative model, with components such as the generator, the renderer, and a point cloud with per-point features, along with the flow of data that produces the rendered images (a hedged code sketch of such a pipeline appears after the transcription below).

Overall, the poster provides a comprehensive overview of the project, covering rationale, methodology, datasets, and comparative results, with visual evidence supporting the claims.

Text transcribed from the image:

UNIVERSITY OF AMSTERDAM
Zuoyue Li, Zhenqiang Li, Zhaopen…

• Why 3D generation?
  - Consistency naturally holds
  - Do not need preset trajectory
• Why diffusion models instead of GANs?
  - Better performance
  - Stability during training

4 Experiment
• Baseline comparison: HoliCity dataset, GT geometry, various metrics
  Method/Metric: Sat2Vid, InfiniCity, MVDiffusion, Ours
• Comparison w/ different generative models:
  - 3D GAN-based method
  - 2D GAN-based method
  - 2D diffusion-model-based method
• Model generalization: OmniCity data
  Views: Sat., Ground-view, Bird-view (MVDiff. vs. Ours)

Architecture labels: Point cloud w/ feature, Renderer, Rendering, Rendered Images, Generated background
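The architecture panel suggests a pipeline in which a point cloud carrying per-point latent features is rendered into images from a chosen viewpoint. The poster itself does not show any code, so the following PyTorch snippet is a purely illustrative sketch of that idea, not the authors' implementation: every name (`FeatureRenderer`, `feat_dim`, `focal`), the pinhole-camera projection, and the painter's-algorithm splatting are assumptions made for the example.

```python
# Hypothetical sketch of the poster's pipeline: a point cloud with
# per-point features is splatted into a camera view, and a tiny decoder
# maps the resulting feature image to RGB. Assumed, not the authors' code.
import torch
import torch.nn as nn

class FeatureRenderer(nn.Module):
    def __init__(self, feat_dim=16, image_size=64):
        super().__init__()
        self.image_size = image_size
        # Small decoder from the splatted feature map to an RGB image.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, points, feats, focal=64.0):
        # points: (N, 3) in camera coordinates with z > 0; feats: (N, F).
        H = W = self.image_size
        z = points[:, 2].clamp(min=1e-3)
        # Pinhole projection to pixel coordinates, clamped to the image.
        u = (points[:, 0] / z * focal + W / 2).long().clamp(0, W - 1)
        v = (points[:, 1] / z * focal + H / 2).long().clamp(0, H - 1)
        # Painter's algorithm: splat far-to-near so nearer points
        # overwrite farther ones at the same pixel.
        order = torch.argsort(z, descending=True)
        canvas = torch.zeros(feats.shape[1], H, W)
        canvas[:, v[order], u[order]] = feats[order].t()
        return self.decoder(canvas.unsqueeze(0))  # (1, 3, H, W)

# Usage: render 1,000 random feature-carrying points from one viewpoint.
pts = torch.rand(1000, 3)
pts[:, :2] = pts[:, :2] * 2 - 1   # x, y in [-1, 1]
pts[:, 2] = pts[:, 2] * 4 + 1     # depth in [1, 5]
img = FeatureRenderer()(pts, torch.randn(1000, 16))
print(img.shape)  # torch.Size([1, 3, 64, 64])
```

The splat-then-decode structure mirrors the schematic's "point cloud w/ feature → renderer → rendered images" flow; a real system of this kind would presumably replace the hard z-buffer splat with a differentiable renderer and train the decoder jointly with the generative model.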