This image shows an open program guide for the "MAIN CONFERENCE" of CVPR (Conference on Computer Vision and Pattern Recognition) 2024. The page is densely packed with paper titles, authors, and affiliations, listing research contributions in areas such as diffusion models, image synthesis, video generation, and neural rendering, among others. The background suggests the guide is being viewed in a well-lit space, possibly near a window, with the floor and surroundings partially visible. The content reflects the conference's role as a hub for cutting-edge research in computer vision and related fields. Text transcribed from the image:

PROGRAM GUIDE

236 DetDiffusion: Synergizing Cross-Modal Generative and Perceptive Models for Enhanced Data Generation and Perception, Yibo Wang, Ruiyuan Gao, Kai Chen, Kaiqiang Zhou, Yingjie Cai, Lanqing Hong, Zhenguo Li, Lihui Jiang, Dit-Yan Yeung, Qiang Xu, Kai Zhang
239 Structure-Guided Adversarial Training of Diffusion Models, Ling Yang, Haotian Qian, Zhilong Zhang, Jingwei Liu, Bin Cui
240 Learning Adaptive Spatial Coherent Correlations for Speech-Preserving Facial Expression Manipulation, Tianshui Chen, Jianman Lin, Zhijing Yang, Chunmei Qing, Liang Lin
241 On the Content Bias in Fréchet Video Distance, Songwei Ge, Aniruddha Mahapatra, Gaurav Parmar, Jun-Yan Zhu, Jia-Bin Huang
242 Residual Learning in Diffusion Models, Junyu Zhang, Daochang Liu, Eunbyung Park, Shichao Zhang, Chang Xu
243 A Unified Approach for Text- and Image-guided 4D Scene Generation, Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Otmar Hilliges, Shalini De Mello
244 VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models, Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, Ying Shan
245 Neural Implicit Morphing of Face Images, Guilherme Schardong, Tiago Novello, Hallison Paz, Iurii Medvedev, Vinícius da Silva, Luiz Velho, Nuno Gonçalves
246 One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls, Minghui Hu, Jianbin Zheng, Chuanxia Zheng, Chaoyue Wang, Dacheng Tao, Tat-Jen Cham
247 Video Interpolation with Diffusion Models, Siddhant Jain, Daniel Watson, Eric Tabellion, Aleksander Holynski, Ben Poole, Janne Kontkanen
248 DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation, Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, Qifeng Chen
249 TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models, Yushi Huang, Ruihao Gong, Jing Liu, Tianlong Chen, Xianglong Liu
250 Improving Training Efficiency of Diffusion Models via Multi-Stage Framework and Tailored Multi-Decoder Architecture, Huijie Zhang, Yifu Lu, Ismail Alkhouri, Saiprasad Ravishankar, Dogyoon Song, Qing Qu
251 Scaling Laws of Synthetic Images for Model Training... for Now, Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, Yonglong Tian
252 BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models, Fengyuan Shi, Jiaxi Gu, Hang Xu, Songcen Xu, Wei Zhang, Limin Wang
253 MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers, Haoyu Ma, Shahin Mahdizadehaghdam, Bichen Wu, Zhipeng Fan, Yuchao Gu, Wenliang Zhao, Lior Shapira, Xiaohui Xie
254 Pose Adapted Shape Learning for Large-Pose Face Reenactment, Gee-Sern Jison Hsu, Jie-Ying Zhang, Huang Yu Hsiang, Wei-Jie Hong
255 PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models, Fei Deng, Qifei Wang, Wei Wei, Tingbo Hou, Matthias Grundmann
256 Discriminative Probing and Tuning for Text-to-Image Generation, Leigang Qu, Wenjie Wang, Yongqi Li, Hanwang Zhang, Liqiang Nie, Tat-Seng Chua
257 Towards Automated Movie Trailer Generation, Dawit Mureja Argaw, Mattia Soldan, Alejandro Pardo, Chen Zhao, Fabian Caba Heilbron, Joon Son Chung, Bernard Ghanem
258 CDFormer: When Degradation Prediction Embraces Diffusion Model for Blind Image Super-Resolution, Qingguo Liu, Chenyi Zhuang, Pan Gao, Jie Qin
259 FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition, Sicheng Mo, Fangzhou Mu, Kuan Heng Lin, Yanli Liu, Bochen Guan, Yin Li, Bolei Zhou
260 RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization, Mengqi Huang, Zhendong Mao, Mingcong Liu, Qian He, Yongdong Zhang
261 VidToMe: Video Token Merging for Zero-Shot Video Editing, Xirui Li, Chao Ma, Xiaokang Yang, Ming-Hsuan Yang
262 Layout-Agnostic Scene Text Image Synthesis with Diffusion Models, Qilong Zhangli, Jindong Jiang, Di Liu, Licheng Yu, Xiaoliang Dai, Ankit Ramchandani, Guan Pang, Dimitris Metaxas, Praveen Krishnan
263 3D Multi-frame Fusion for Video Stabilization, Zhan Peng, Xinyi Ye, Weiyue Zhao, Tianqi Liu, Huiqiang Sun, Zhiguo Cao
264 DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video, Huiqiang Sun, Xingyi Li, Liao Shen, Xinyi Ye, Ke Xian, Zhiguo Cao
265 A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing, Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, Dong Xu
266 StrokeFaceNeRF: Stroke-based Facial Appearance Editing in Neural Radiance Field, Xiao-Juan Li, Dingxi Zhang, Shu-Yu Chen, ...
267 Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models, Jiayi Guo, Xingqian Xu, Yifan Pu, Zanlin Ni, Chaofei Wang, Manushree Vasu, Shiji Song, Gao Huang, Humphrey Shi
268 One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications, Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, Guiguang Ding
269 Hierarchical Patch Diffusion Models for High-Resolution Video Generation, Ivan Skorokhodov, Willi Menapace, Aliaksandr Siarohin, Sergey Tulyakov
270 Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions, Saeed Khorram, Mingqi Jiang, Mohamad Shahbazi, Mohamad H. Danesh, Li Fuxin
271 Don't Look into the Dark: Latent Codes for Pluralistic Image Inpainting, Haiwei Chen, Yajie Zhao
272 Content-Style Decoupling for Unsupervised Makeup Transfer without Generating Pseudo Ground Truth, Zhaoyang Sun, Shengwu Xiong, Yaxiong Chen, Yi Rong
273 Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models, Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Paul Huang, Tuanfeng Yang Wang, Gordon Wetzstein
274 VideoSwap: Customized Video Subject Swapping with Interactive Semantic Point Correspondence, Yuchao Gu, Yipin Zhou, Bichen Wu, Licheng Yu, Jia-Wei Liu, Rui Zhao, Jay Zhangjie Wu, David Junhao Zhang, Mike Zheng Shou, Kevin Tang
275 Rethinking the Objectives of Vector-Quantized Tokenizers for Image Synthesis, Yuchao Gu, Xintao Wang, Yixiao Ge, Ying Shan, Mike Zheng Shou
276 Dysen-VDM: Empowering Dynamics-aware Text-to-Video Diffusion with LLMs, Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Tat-Seng Chua
277 Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields, Tianqi Liu, Xinyi Ye, Min Shi, Zihao Huang, Zhiyu Pan, Zhan Peng, Zhiguo Cao
278 DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing, Jia-Wei Liu, Yan-Pei Cao, Jay Zhangjie Wu, Weijia Mao, Yuchao Gu, Rui Zhao, Jussi Keppo, Ying Shan, Mike Zheng Shou
279 High-fidelity Person-centric Subject-to-Image Synthesis, Yibin Wang, Weizhong Zhang, Jianwei Zheng, Cheng Jin
280 Relation Rectification in Diffusion Model, Yinwei Wu, Xingyi Yang, Xinchao Wang
281 Diffusion Handles: Enabling 3D Edits for Diffusion Models by Lifting Activations to 3D, Karran Pandey, Paul Guerrero, Matheus Gadelha, Yannick Hold-Geoffroy, Karan Singh, Niloy J. Mitra
282 LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model, Chenjie Cao, Yunuo Cai, Qiaole Dong, Yikai Wang, Yanwei Fu
283 FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose and Facial Expression Features, Andre Rochow, Max Schwarz, Sven Behnke
284 Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting, Zijie Chen, Lichao Zhang, Fangsheng Weng, Lili Pan, Zhenzhong Lan
285 MMA-Diffusion: MultiModal Attack on Diffusion Models, Yijun Yang, Ruiyuan Gao, ...

[The adjacent column, entries 286–311, is only partially visible at the page edge; legible title fragments include "PIA: Your Persona...", "Codebook Transf...", "Generating Non-...", "Fast ODE-based...", "Deformable On...", "Learning Disen...", "SwiftBrush: O...", "Towards Und...", "SimDA: Simp...", "Unlocking P...", "Shadow-En...", "StyleCine...", "MotionEd...", "DanceCa...", "DiffMor...", and "VecF...", with the remaining titles and author names cut off.]