This image showcases a research poster presented at CVPR (the Conference on Computer Vision and Pattern Recognition), held in Seattle, WA, from June 17 to 21, 2024. The poster, titled "Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning," by authors Haoyu Chen, Wenbo Li, Jinjin Gu, Jingjing Ren, Haoze Sun, Xueyi Zou, Zhensong Zhang, Youliang Yan, and Lei Zhu, details advanced techniques in the field of image super-resolution. The research is a collaboration among several institutions, including The Hong Kong University of Science and Technology, Huawei Noah's Ark Lab, Tsinghua University, and The University of Sydney. The poster is divided into several sections, including background, abstract, methodology, and qualitative comparisons. It presents the LWay method, which improves the generalization of super-resolution models by combining supervised pre-training with self-supervised learning driven by low-resolution reconstruction. The methodology section illustrates the two-step LWay pipeline, covering the LR reconstruction pretext task and the zero-shot self-supervised fine-tuning process. Additionally, numerous graphical comparisons demonstrate the effectiveness of the proposed method on various datasets, showcasing qualitative improvements in image resolution. QR codes are provided for accessing the full paper and source code. This detailed and visually rich poster captures the significant strides taken in super-resolution technology, highlighting both the theoretical framework and the practical efficacy of the proposed solutions.
Text transcribed from the image:

CVPR, JUNE 17-21, 2024, SEATTLE, WA | Poster 165 | HUAWEI

Low-Res Leads the Way: Improving Generalization for Super-Resolution by Self-Supervised Learning

Haoyu Chen¹, Wenbo Li², Jinjin Gu³, Jingjing Ren¹, Haoze Sun, Xueyi Zou², Zhensong Zhang, Youliang Yan², Lei Zhu¹,⁵*
¹The Hong Kong University of Science and Technology (Guangzhou)  ²Huawei Noah's Ark Lab  ³The University of Sydney  ⁴Tsinghua University  ⁵The Hong Kong University of Science and Technology

Source Code / Paper Link: [QR codes]

[Results table reporting PSNR/SSIM gains of LWay over baseline methods; the individual numbers are not legible in the image.]

Background
For image super-resolution (SR), bridging the gap between the performance on synthetic datasets and real-world degradation scenarios remains a challenge.

Abstract
This work introduces a novel "Low-Res Leads the Way" (LWay) training framework, merging supervised pre-training with self-supervised learning to enhance the adaptability of SR models to real-world images.

[Plot of test PSNR versus training iterations, comparing the baseline with LWay fine-tuning; the axis values are not fully legible.]

LWay combines the benefits of supervised learning (SL) on synthetic data and self-supervised learning (SSL) on the unseen test images to achieve high-quality and high-fidelity SR results. (A diagram contrasts the SL space on synthetic data: high quality, low fidelity; the SSL space on real test data: low quality, high fidelity; and the ground truth: high quality, high fidelity.)

Step 1: LR Reconstruction Pre-training
The target LR image is fed to a Degradation Encoder E, producing a Degradation Embedding e. A Reconstructor R applies e to the HR image to regenerate the LR content. The encoder and reconstructor are trainable, while the off-the-shelf SR network S is frozen. The reconstructed target LR is supervised with an LPIPS loss and a DWT-based high-frequency weight.

Step 2: Zero-shot Self-supervised Learning
The proposed training pipeline, LWay, consists of two steps. In Step 1, we pre-train an LR reconstruction network to capture a degradation embedding from LR images. This embedding is then applied to HR images, regenerating the LR content. In Step 2, for test images, a pre-trained SR model generates SR outputs, which are then degraded by the fixed LR reconstruction network. We iteratively update the SR model using a self-supervised loss applied to the LR images, with a focus on high-frequency details through a weighted loss. This refinement process enhances the SR model's generalization to unseen images.

The SR model advances through the proposed fine-tuning iterations, moving from the supervised learning (SL) space of synthetic degradation to the self-supervised learning (SSL) space learned from test images.

Qualitative comparisons on real-world datasets (the content within the blue box represents a zoomed-in crop), with each baseline shown alongside its LWay-fine-tuned variant: Real-ESRGAN vs. Real-ESRGAN+LWay, BSRGAN vs. BSRGAN+LWay, FeMaSR vs. FeMaSR+LWay, StableSR vs. StableSR+LWay, SwinIR-GAN vs. SwinIR-GAN+LWay.

Qualitative comparisons on two old films, against LR, ZSSR, DASR, LDM, DiffBIR, StableSR, DARSR, CAL-GAN, LWay (Ours), and HR.
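The Step 2 loop described on the poster (degrade the SR output with a frozen reconstruction model, compare against the target LR under a high-frequency weight, and iterate) can be sketched numerically. Everything below is a hypothetical stand-in, not the authors' implementation: a fixed blur-plus-average-pooling replaces the learned LR reconstruction network, a single unsharp-masking parameter replaces the SR network's trainable weights, and a finite-difference update replaces backpropagation. Only the loop structure and the Haar-DWT-based high-frequency weighting follow the poster's description.

```python
import numpy as np

def box_blur(x):
    """3x3 box blur with edge padding."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def degrade(hr, scale=2):
    """Stand-in for the frozen LR reconstruction network (an assumption):
    blur followed by average pooling."""
    x = box_blur(hr)
    h, w = x.shape
    return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def haar_hf_weight(lr, boost=4.0):
    """High-frequency weight map from one level of a Haar DWT: the detail
    bands (LH, HL, HH) are combined, normalized, and upsampled back to the
    LR grid, so pixels with strong high-frequency content get more weight."""
    a = lr[0::2, 0::2]; b = lr[0::2, 1::2]
    c = lr[1::2, 0::2]; d = lr[1::2, 1::2]
    detail = (np.abs(a - b + c - d)      # LH
              + np.abs(a + b - c - d)    # HL
              + np.abs(a - b - c + d))   # HH
    detail = detail / (detail.max() + 1e-8)
    return np.kron(1.0 + boost * detail, np.ones((2, 2)))

def sr_model(lr, sharpen, scale=2):
    """Toy 'SR model': nearest upsampling plus a tunable unsharp mask."""
    up = np.kron(lr, np.ones((scale, scale)))
    return up + sharpen * (up - box_blur(up))

def ssl_loss(sharpen, lr, w):
    """Weighted L1 between degrade(SR(lr)) and the original target LR."""
    rec = degrade(sr_model(lr, sharpen))
    return float(np.mean(w * np.abs(rec - lr)))

# Zero-shot fine-tuning on a single (random, illustrative) test LR image.
rng = np.random.default_rng(0)
target_lr = rng.random((8, 8))
w = haar_hf_weight(target_lr)

loss0 = ssl_loss(0.0, target_lr, w)
sharpen, step, eps = 0.0, 0.3, 1e-4
for _ in range(60):
    # Central finite difference stands in for backprop through the SR model.
    g = (ssl_loss(sharpen + eps, target_lr, w)
         - ssl_loss(sharpen - eps, target_lr, w)) / (2 * eps)
    sharpen -= step * g
loss1 = ssl_loss(sharpen, target_lr, w)
```

Because the frozen degradation includes a blur, the self-supervised LR-consistency loss rewards the toy model for sharpening its output, so the loss drops over the iterations; in the actual framework the same signal updates the full SR network's weights on each unseen test image.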