This image showcases a scientific poster presented at a conference. The poster, titled "Exact Fusion via Feature Distribution Matching for Few-shot Image Generation," is authored by Yingbo Zhou, Yutong Ye, Pengyu Zhang, Xian Wei, and Mingsong Chen from the MoE Engineering Research Center at East China Normal University. The poster provides a comprehensive overview of the research purpose, challenges, and contributions. It discusses the problem of generating images from only a few samples and introduces a novel fusion-based framework, F2DGAN, which performs exact fusion via feature distribution matching for few-shot image generation. The methodology is illustrated with detailed diagrams and flowcharts covering the training and inference phases, and an empirical study is depicted with graphs of data distributions and feature values. The "Experiments" section presents experimental results, including visual examples and performance metrics, highlighting the effectiveness of the proposed method; the analysis includes quantitative charts and comparative tables. The poster is displayed on a black stand against the backdrop of an exhibition hall with bright overhead lighting and a geometric ceiling design. As the logo in the top right corner indicates, the poster was presented at CVPR (the Conference on Computer Vision and Pattern Recognition), and it is marked with the number 348 at the upper right corner, indicating its placement among the other presentations.

Text transcribed from the image:

[East China Normal University logo] Software Engineering Institute

Exact Fusion via Feature Distribution Matching for Few-shot Image Generation
Yingbo Zhou, Yutong Ye, Pengyu Zhang, Xian Wei, Mingsong Chen*
MoE Eng. Research Center of SW/HW Co-Design Tech. and App., East China Normal University

Problem Definition

Background: Given K (e.g., 3 or 5) images sampled from an unseen category, use a model trained on seen data to generate realistic and diverse images for this category.

Main Challenges
• How to train one model with limited data for good performance;
• How to transfer the knowledge of seen data to unseen categories with only a few available images;
• How to generate more new, realistic, and diverse images for downstream classification tasks.

Related Work
• Optimization-based methods: employing meta-learning optimization algorithms to update model parameters;
• Transformation-based methods: retraining the transformed features obtained by a pre-trained model to generate new images for unseen categories;
• Fusion-based methods: using an episodic training style to fuse different features with the same label, so that the model generalizes its transfer ability to unseen categories in an end-to-end learning process.

Contributions
➤ We propose a novel fusion-based framework called F2DGAN, which is the first attempt to fuse different features with variational feature learning and feature distribution matching for few-shot image generation.
➤ We design a feature distribution matching module with an image-matching reconstruction loss, which can fuse consistent semantics exactly with histogram matching.
➤ We devise a variational feature learning module with a feature reconstruction loss, which further guarantees the diversity of fused deep semantics in feature space.

Methodology
[Framework diagram: in the training phase, an encoder extracts features from seen-category images, which pass through the feature distribution matching module (FDMM) and the variational feature learning module (VFLM) before a generator reconstructs images; in the inference phase, the generator produces new images for an unseen category from the fused features.]

■ Feature Distribution Matching Module (FDMM) (a code sketch follows the Methodology transcription)
a) Empirical cumulative distribution function: F̂(x) = (1/n) Σ_{i=1}^{n} 1[x_i ≤ x]
b) Histogram matching: M = f + sg(HM(f) − f), where sg(·) denotes the stop-gradient operation [equation partially illegible]
c) Fusion with feature distribution matching: the input features are fused with the matched features M [equation illegible]

■ Variational Feature Learning Module (VFLM) (a code sketch follows below)
a) Variational evidence lower bound: ELBO = E_{q(z|x)}[log p(x|z)] − D_KL(q(z|x) ‖ p(z))
b) Fusion with variational features: [equation partially illegible]; ℓ(·) is the MSE loss.

■ Optimization Objective (a generic sketch follows below)
a) Image-matching reconstruction loss [equation illegible];
b) Feature reconstruction loss [equation illegible];
c) The generator optimization function: L_G sums an adversarial term with the reconstruction losses above [equation partially illegible];
d) The discriminator optimization function: L_D [equation partially illegible].
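Since the FDMM equations are only partly legible, here is a minimal, hypothetical sketch of sort-based histogram matching combined with a stop-gradient straight-through step, in PyTorch. The function names (histogram_match, fdm_fuse), the 1-D feature vectors, and the straight-through formulation are illustrative assumptions, not the authors' implementation.

```python
import torch

def histogram_match(source: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """Sort-based histogram matching: rearrange `reference` values so they
    occupy the rank positions of `source`, yielding a tensor with the
    reference's value distribution but the source's ordering.
    Both inputs are 1-D feature vectors of the same length (an assumption)."""
    src_ranks = source.argsort().argsort()   # rank of each source entry
    ref_sorted = reference.sort().values     # reference values, ascending
    return ref_sorted[src_ranks]             # matched values in source order

def fdm_fuse(f: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Fuse a feature with a distribution-matched version of itself.
    Sorting is non-differentiable, so a stop-gradient (straight-through)
    trick keeps gradients flowing through f:  M = f + sg(HM(f, ref) - f)."""
    matched = histogram_match(f, ref)
    return f + (matched - f).detach()        # sg(.) == .detach() in PyTorch

# Toy usage: match one image's features to a category-level reference.
f = torch.randn(128, requires_grad=True)
ref = torch.randn(128)
fused = fdm_fuse(f, ref)
fused.sum().backward()                       # gradients reach f despite the sort
```

The straight-through step mirrors the poster's stop-gradient notation: the forward pass uses the matched values, while the backward pass treats the matching as the identity.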
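The VFLM terms follow the standard variational autoencoder objective. Below is a self-contained sketch of the ELBO with a Gaussian posterior and the reparameterization trick; the linear encoder/decoder, dimensions, and class name are placeholders rather than the poster's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalFeatureLearner(nn.Module):
    """Toy VAE over feature vectors: ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))."""
    def __init__(self, feat_dim: int = 128, z_dim: int = 32):
        super().__init__()
        self.enc = nn.Linear(feat_dim, 2 * z_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        recon = self.dec(z)
        # Gaussian likelihood up to a constant -> MSE reconstruction term
        rec = F.mse_loss(recon, x, reduction="sum")
        # Closed-form KL between N(mu, sigma^2) and the prior N(0, I)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl  # negative ELBO, to be minimized

x = torch.randn(4, 128)  # batch of deep features
loss = VariationalFeatureLearner()(x)
loss.backward()
```

Minimizing this negative ELBO matches the poster's formulation: the MSE term corresponds to the expected log-likelihood and the KL term regularizes the approximate posterior toward the prior.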
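The optimization objective is largely illegible on the poster, so the following only shows one common way such a composite GAN objective is assembled: hinge adversarial terms plus the image-matching and feature reconstruction losses named above. The hinge form, the L1/MSE choices, and the lambda weights are all placeholders, not the authors' exact losses.

```python
import torch
import torch.nn.functional as F

def d_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    """Hinge adversarial loss for the discriminator (a common choice;
    the poster's exact formulation is not legible)."""
    return F.relu(1.0 - real_logits).mean() + F.relu(1.0 + fake_logits).mean()

def g_loss(fake_logits, x_rec, x_real, f_rec, f_real,
           lam_img: float = 1.0, lam_feat: float = 1.0) -> torch.Tensor:
    """Generator objective: adversarial term plus the image-matching and
    feature reconstruction losses named on the poster; lambdas are placeholders."""
    adv = -fake_logits.mean()
    img_rec = F.l1_loss(x_rec, x_real)     # image-matching reconstruction
    feat_rec = F.mse_loss(f_rec, f_real)   # feature reconstruction (MSE per the poster)
    return adv + lam_img * img_rec + lam_feat * feat_rec
```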
Empirical Study
[Overview figure of the few-shot image generation framework.]
Motivation:
1. Optimization-based methods: poor generation quality and diversity;
2. Transformation-based methods: complex image transformations with an unstable training process;
3. Fusion-based methods: existing fusion-based methods ignore the entanglement and absence of semantics, resulting in fuzzy details and poor diversity of the generated images;
4. Observation: one can clearly observe that different images of the same class have different feature distributions, and the accumulation of feature values over different images better reflects the feature distribution of their category.
[Figures: images with the same label can have feature distributions that differ from any single image; the empirical cumulative distribution over all images in a category better characterizes that category. For exact feature distribution matching, the method turns to a sorting algorithm.]

Experiments
■ Experimental Settings
Datasets: Flowers, Animal Faces, and VGGFaces. Metrics: FID and LPIPS.
■ Results
• Competitive generation quality and diversity on the datasets [text partially illegible];
• Generation as data augmentation achieves SOTA accuracy on downstream classification;
• To highlight the effectiveness of our method, we disable the corresponding components for ablation studies;
• In addition, exploring the generation performance under different K-shot image generation settings manifests that F2DGAN has good generalization ability.

Acknowledgements
This work was supported by the Natural Science Foundation of China and the "Digital Silk Road" Shanghai International Joint Lab of Trustworthy Intelligent Software [grant numbers partially illegible].

CVPR, Seattle, WA, June 2024
348