This image shows an open book lying on a flat surface, likely a table with a grey marble pattern. The visible page is printed with a dense, formatted list of text organized into distinct, single-spaced lines. Each line represents an individual entry, evidently from a conference program guide, as indicated by the paper titles and author names listed. The entries are numbered sequentially in small type on the left margin of the page. Many lines are emphasized with bold and underlined text, highlighting particular sections or topics. The text is predominantly in English. The header at the top of the visible page indicates the day of the week and the date: "Wednesday, June 19."

Text transcribed from the image (the facing page is only partially visible at its edge, leaving stray fragments such as "SDAY, JUNE 19", scattered word and name endings, and the running footer "PROGRAM GUIDE"; only the legible entries are reproduced, with truncations marked by ellipses):

…tic Propagation, Haofeng Liu, Chenshu Xu, Yifei Yang, … Zeng, Shengfeng He
190 Continuous 3D Words for Text-to-Image Generation, Ta-…, Matthew Fisher, Radomir Mech, Andrew Markham, Niki Trigoni
191 CHAIN: Enhancing Generalization in Data-Efficient GANs via lipsCHitz continuity constrAIned Normalization, Yao Ni, Piotr Koniusz
192 ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models, Jeong-… Kwak, Erqun Dong, Yuhe Jin, Hanseok Ko, Shweta Mahajan, Kwang Moo Yi
193 JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation, Yu Zeng, Vishal M.
Patel, Haochen Wang, Xun Huang, Ting-Chun Wang, Ming-Yu Liu, Yogesh Balaji
194 GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models, Taoran Yi, Jiemin Fang, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, Xinggang Wang
195 Prompting Hard or Hardly Prompting: Prompt Inversion for Text-to-Image Diffusion Models, Shweta Mahajan, Tanzila Rahman, Kwang Moo Yi, Leonid Sigal
WEDNESDAY, JUNE 19
196 MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis, Dewei Zhou, You Li, Fan Ma, Xiaoting Zhang, Yi Yang
197 Towards Text-guided 3D Scene Composition, Qihang Zhang, Chaoyang Wang, Aliaksandr Siarohin, Peiye Zhuang, Yinghao Xu, Ceyuan Yang, Dahua Lin, Bolei Zhou, Sergey Tulyakov, Hsin-Ying Lee
198 BerfScene: Bev-conditioned Equivariant Radiance Fields for Infinite 3D Scene Generation, Qihang Zhang, Yinghao Xu, Yujun Shen, Bo Dai, Bolei Zhou, Ceyuan Yang
199 Face2Diffusion for Fast and Editable Face Personalization, Kaede Shiohara, Toshihiko Yamasaki
214 Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis, Zanlin Ni, Yulin Wang, Renping Zhou, Jiayi Guo, Jinyi Hu, Zhiyuan Liu, Shiji Song, Yuan Yao, Gao Huang
215 Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On, Xu Yang, Changxing Ding, Zhibin Hong, Junhao Huang, Jin Tao, Xiangmin Xu
216 PromptCoT: Align Prompt Distribution via Adapted Chain-of-Thought, Junyi Yao, Yijiang Liu, Zhen Dong, Mingfei Guo, Helan Hu, Kurt Keutzer, Li Du, Daquan Zhou, Shanghang Zhang
217 Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis, Willi Menapace, Aliaksandr Siarohin, Ivan Skorokhodov, Ekaterina Deyneka, Tsai-Shien Chen, Anil Kag, Yuwei Fang, Aleksei Stoliar, Elisa Ricci, Jian Ren, Sergey Tulyakov
218 L-MAGIC: Language Model Assisted Generation of Images with Coherence, Zhipeng Cai, Matthias Mueller, Reiner Birkl, Diana Wofk, Shao-Yen Tseng, Junda Cheng, Gabriela Ben-Melech Stan, Vasudev Lal, Michael Paulitsch
200 FreeDrag: Feature Dragging for
Reliable Point-based Image Editing, Pengyang Ling, Lin Chen, Pan Zhang, Huaian Chen, Yi Jin, Jinjin Zheng
201 OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos, Dongyoung Choi, Hyeonjoong Jang, Min H. Kim
202 DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data, Qihao Liu, Yi Zhang, Song Bai, Adam Kortylewski, Alan Yuille
203 Generate Like Experts: Multi-Stage Font Generation by Incorporating Font Transfer Process into Diffusion Models, Bin Fu, Fanghua Yu, Anran Liu, Zixuan Wang, Jie Wen, Junjun He, Yu Qiao
204 Panacea: Panoramic and Controllable Video Generation for Autonomous Driving, Yuqing Wen, Yucheng Zhao, Yingfei Liu, Fan Jia, Yanhui Wang, Chong Luo, Chi Zhang, Tiancai Wang, Xiaoyan Sun, Xiangyu Zhang
205 360DVD: Controllable Panorama Video Generation with 360-Degree Video Diffusion Model, Qian Wang, Weiqi Li, Chong Mou, Xinhua Cheng, Jian Zhang
219 Text-Driven Image Editing via Learnable Regions, Yuanze Lin, Yi-Wen Chen, Yi-Hsuan Tsai, Lu Jiang, Ming-Hsuan Yang
220 On Exact Inversion of DPM-Solvers, Seongmin Hong, Kyeonghyun Lee, Suh Yoon Jeon, Hyewon Bae, Se Young Chun
221 Instruct-Imagen: Image Generation with Multi-modal Instruction, Hexiang Hu, Kelvin C.K.
Chan, Yu-Chuan Su, Wenhu Chen, Yandong Li, Kihyuk Sohn, Yang Zhao, Xue Ben, Boqing Gong, William Cohen, Ming-Wei Chang, Xuhui Jia
206 CLIC: Concept Learning in Context, Mehdi Safaee, Aryan Mikaeili, Or Patashnik, Daniel Cohen-Or, Ali Mahdavi-Amiri
207 Z*: Zero-shot Style Transfer via Attention Reweighting, Yingying Deng, Xiangyu He, Fan Tang, Weiming Dong
208 Tackling the Singularities at the Endpoints of Time Intervals in Diffusion Models, Pengze Zhang, Hubery Yin, Chen Li, Xianhua Xie
209 CosmicMan: A Text-to-Image Foundation Model for Humans, Shikai Li, Jianglin Fu, Kaiyuan Liu, Wentao Wang, Kwan-Yee Lin, Wayne Wu
210 Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training, Runze He, Shaofei Huang, Xuecheng Nie, Tianrui Hui, Luoqi Liu, Jiao Dai, Jizhong Han, Guanbin Li, Si Liu
211 PICTURE: PhotorealistIC virtual Try-on from Unconstrained designs, Shuliang Ning, Duomin Wang, Yipeng Qin, Zirong Jin, Baoyuan Wang, Xiaoguang Han
222 ConsistNet: Enforcing 3D Consistency for Multi-view Images Diffusion, Jiayu Yang, Ziang Cheng, Yunfei Duan, Pan Ji, Hongdong Li
223 LAMP: Learn A Motion Pattern for Few-Shot Video Generation, Ruiqi Wu, Liangyu Chen, Tong Yang, Chunle Guo, Chongyi Li, Xiangyu Zhang
224 Task-Customized Mixture of Adapters for General Image Fusion, Pengfei Zhu, Yang Sun, Bing Cao, Qinghua Hu
225 Beyond Textual Constraints: Learning Novel Diffusion Conditions with Fewer Examples, Yuyang Yu, Bangzhen Liu, Chenxi Zheng, Xuemiao Xu, Huaidong Zhang, Shengfeng He
226 Portrait4D: Learning One-Shot 4D Head Avatar Synthesis using Synthetic Data, Yu Deng, Duomin Wang, Xiaohang Ren, Xingyu Chen, Baoyuan Wang
227 Animating General Image with Large Visual Motion Model, Dengsheng Chen, Xiaoming Wei, Xiaolin Wei
228 Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion, Zuoyue Li, Zhenqiang Li, Zhaopeng Cui, Marc Pollefeys, Martin R.
Oswald
212 Focus on Your Instruction: Fine-grained and Multi-instruction Image Editing by Attention Modulation, Qin Guo, Tianwei Lin
213 Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework, Ziyao Huang, Fan Tang, Yong Zhang, Xiaodong Cun, Juan Cao, Jintao Li, Tong-Yee Lee
229 Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners, Yazhou Xing, Yingqing He, Zeyue Tian, Xintao Wang, Qifeng Chen
230 AVID: Any-Length Video Inpainting with Diffusion Model, Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, Licheng Yu
231 Generative Powers of Ten, Xiaojuan Wang, Janne Kontkanen, Brian Curless, Steven M. Seitz, Ira Kemelmacher-Shlizerman, Ben Mildenhall, Pratul Srinivasan, Dor Verbin, Aleksander Holynski
232 DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models, Muyang Li, Tianle Cai, Jiaxin Cao, Qinsheng Zhang, Han Cai, Junjie Bai, Yangqing Jia, Kai Li, Song Han
233 Condition-Aware Neural Network for Controlled Image Generation, Han Cai, Muyang Li, Qinsheng Zhang, Ming-Yu Liu, Song Han
234 It's All About Your Sketch: Democratising Sketch Control in Diffusion Models, Subhadeep Koley, Ayan Kumar Bhunia, Deeptanshu Sekhri, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
235 FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation, Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen
236 In-N-Out: Faithful 3D GAN Inversion with Volumetric Decomposition for Face Editing, Yiran Xu, Zhixin Shu, Cameron Smith, Seoung Wug Oh, Jia-Bin Huang
237 Video Prediction by Modeling Videos as Continuous Multi-Dimensional Processes, Gaurav Shrivastava, Abhinav Shrivastava