This photograph shows a research poster presented at CVPR (Conference on Computer Vision and Pattern Recognition) 2024 in Seattle, Washington. The poster, titled "Zero-Reference Low-Light Enhancement," is authored by Wenjing Wang, Huan Yang, and Jianlong Yang. It presents a method for enhancing low-light images with a zero-reference approach that learns solely from normal-light images, reducing the need for supervision.

**Introduction Section:**
- **Objective:** Improve robustness with respect to data usage during training, illumination-specific hyper-parameters, and unseen scenarios.
- **Methodology:** Design an illumination-invariant prior that serves as a bridge between normal-light and low-light images.

**Method Section:**
- The training framework predicts a physical quadruple prior and reconstructs the prior back into an image using an encoder-decoder architecture.
- Detail degradation is addressed by bypassing the SD encoder.

**Diagrams and Examples:**
- Illustrative diagrams show the pipeline for training on normal-light images and for inference on low-light images.
- Additional visual comparisons show how effectively the method recovers detail in low-light conditions compared with ablated variants that omit specific prior components.

The poster is mounted in a black frame, with some attendees partially visible in the background, conveying an engaging conference environment.
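The core idea described above, an illumination-invariant prior that bridges normal-light and low-light images, can be sketched with a toy example. The real "physical quadruple prior" combines four physics-derived components; here, as a hypothetical stand-in, `quadruple_prior` computes per-pixel chromaticity ratios, which are unchanged when the whole image is scaled by a global illumination factor:

```python
import numpy as np

def quadruple_prior(img, eps=1e-6):
    # Hypothetical stand-in for the poster's illumination-invariant prior:
    # per-pixel chromaticity ratios. Scaling the image globally
    # (img -> k * img) leaves these ratios essentially unchanged.
    s = img.sum(axis=-1, keepdims=True) + eps
    return img / s

rng = np.random.default_rng(0)
normal = rng.uniform(0.2, 1.0, size=(4, 4, 3))   # normal-light image
low = 0.1 * normal                               # globally dimmed version

# The prior acts as a bridge: both domains map to the same features.
print(np.allclose(quadruple_prior(normal), quadruple_prior(low), atol=1e-3))
```

Because the prior is (approximately) identical for both inputs, a prior-to-image model trained only on normal-light images can be applied unchanged to low-light inputs at inference time.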
Text transcribed from the image:

CVPR, June 17-21, 2024, Seattle, WA

Zero-Reference Low-Light Enhancement
Wenjing Wang¹, Huan Yang², Jianlong Yang²

Introduction
Zero-Reference Low-Light Enhancement: learn solely with normal light images, reducing the need for supervision.
Our aim: improve the robustness to
- Data usage during training
- Illumination-specific hyper-parameters
- Unseen scenarios
Our methodology: design an illumination-invariant prior that serves as a bridge between normal light and low-light images.

[Figure: Training on normal light images. Normal Light Input → Physical Quadruple Prior (Illumination Invariant Features) → Prior-to-Img Framework → Output, trained with a Reconstruction Loss]

[Figure: Inference on low-light images. Low-light Input → Physical Quadruple Prior → Prior-to-Img Framework → Output]

Method
Training framework:
- Predict a physical quadruple prior
- Reconstruct the prior back to image

Solution of detail degradation: bypass the SD Encoder

[Figure: visual comparisons including "w/o H" and "w/o C"; caption truncated: "Low-light enhancement effects for different …"]
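The two pipelines transcribed above (train on normal-light images with a reconstruction loss, then infer on low-light images) can be illustrated end to end with a deliberately simple stand-in. Here the prior-to-image framework is replaced by a single linear map fitted by least squares; `prior`, the training data, and the dimming factor are all illustrative assumptions, not the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(1)

def prior(img, eps=1e-6):
    # Illumination-invariant stand-in (chromaticity), as in the Introduction.
    return img / (img.sum(axis=-1, keepdims=True) + eps)

# --- "Training" on normal-light images only ------------------------------
# Hypothetical prior-to-image framework: a linear decoder fitted by least
# squares, minimizing the reconstruction loss on normal-light pixels.
normal = rng.uniform(0.3, 1.0, size=(64, 3))    # flattened normal-light pixels
P = prior(normal)
W, *_ = np.linalg.lstsq(P, normal, rcond=None)  # decoder weights

# --- Inference on a low-light image --------------------------------------
low = 0.05 * normal                             # severely dimmed input
restored = prior(low) @ W                       # decode from the invariant prior

# Because the prior ignores global illumination, the decoder reproduces
# normal-light brightness even from the dimmed input.
print(restored.mean() > low.mean())
```

The point of the sketch is the zero-reference property: no low-light image is ever seen during fitting, yet the dimmed input is brightened at inference because the prior erases the illumination difference between the two domains.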