This image shows an academic poster titled "Exact Fusion via Feature Distribution Matching for Few-shot Image Generation," presented by researchers from the MoE Engineering Research Center of SW/HW Co-Design Technology and Application, East China Normal University. The poster, number 348, is displayed in a conference setting identified as CVPR 2024 (Seattle) by the top-right logo. The main sections of the poster include: 1. **Title and Authors**: The title is prominently displayed along with the names of the contributing researchers. 2. **Background**: An introductory section outlining the problem definition, main challenges, and motivation for the research. 3. **Methodology**: Detailed diagrams explaining the proposed model and algorithms, including the Feature Distribution Matching Module (FDMM) and the Variational Feature Learning Module (VFLM). 4. **Contributions**: A summary of the novel contributions made by the research. 5. **Experiments**: Results from empirical studies, showcasing data and visual comparisons to highlight the effectiveness of the proposed methods. 6. **References and Acknowledgements**: Citations to related work and acknowledgements of supporting institutions, including the Natural Science Foundation of China. The poster includes various charts, tables, and images that depict experimental results, data analysis, and the structure of the proposed models, giving a comprehensive overview of the research findings and contributions to the field of few-shot image generation.

Text transcribed from the image (cleaned up; illegible fragments are marked):

**Title**: Exact Fusion via Feature Distribution Matching for Few-shot Image Generation
**Authors**: Yingbo Zhou, Yutong Ye, Pengyu Zhang, Xian Wei, Mingsong Chen*
**Affiliation**: MoE Eng. Research Center of SW/HW Co-Design Tech. and App., East China Normal University (软件工程学院: Software Engineering Institute)

**Problem Definition**: Given k (e.g., 3 or 5) images sampled from an unseen category, use the model trained on seen data to generate realistic and diverse images for that category.

**Main Challenges**:
- How to train one model with limited data for good performance;
- How to transfer the knowledge of seen data to unseen categories with a few available images;
- How to generate more new realistic and diverse images for downstream classification tasks.

**Related Work**:
- Optimization-based methods: employ meta-learning optimization algorithms to update model parameters;
- Transformation-based methods: retrain on the transformed features obtained by a pre-trained model to generate new images for unseen categories;
- Fusion-based methods: use an episodic training style to fuse different features with the same label, so that the model generalizes its transfer ability to unseen categories in an end-to-end learning process.

**Contributions**:
➤ We propose a novel fusion-based framework called F2DGAN, which is the first attempt to fuse different features with variational feature learning and feature distribution matching for few-shot image generation.
➤ We design a feature distribution matching module with an image-matching reconstruction loss, which can fuse consistent semantics exactly with histogram matching.
➤ We devise a variational feature learning module with a feature reconstruction loss, which further guarantees the diversity of fused deep semantics in feature space.

**Methodology**: [Framework diagram: a training phase on seen categories and an inference phase on unseen categories, involving an encoder, generator, feature extractor, decoder, and a fused feature produced by the variational feature learning and feature distribution matching modules.]

**Experimental Settings**:
- Datasets: Flowers, Animal Faces and VGGFaces
- Metrics: FID and LPIPS
**Feature Distribution Matching Module (FDMM)**:
a) Empirical Cumulative Distribution Function: F̂(x) = (1/n) Σᵢ 1(xᵢ ≤ x)
b) Histogram Matching: M = F̂₂⁻¹(F̂₁(x)), where sg(·) denotes the stop-gradient operation [formula partially illegible]
c) Fusion with Feature Distribution Matching: [formula illegible]

[Conference logo: CVPR, Seattle, WA, June 2024]

**Motivation**:
1. Optimization-based methods: poor generation quality and diversity;
2. Transformation-based methods: complex image transformations with an unstable training process;
3. Fusion-based methods: existing fusion-based methods ignore the entanglement and absence of semantics, resulting in fuzzy details and poor diversity of the generation;
4. Observation: one can clearly observe that different images of the same class have different feature distributions, and the accumulation of feature values of different images can better reflect the feature distribution of their categories.

[Figure: images with the same label show different feature distributions; the empirical cumulative distribution of a single image is contrasted with that of all images in the same category. The feature distribution matching is implemented with a sorting algorithm.]

**Variational Feature Learning Module (VFLM)**:
a) Variational Evidence Lower Bound: ELBO = E_q(z|x)[log p(x|z)] − D_KL(q(z|x) ‖ p(z))
b) Fusion with Variational Features: [formula illegible]; the accompanying note states that the indicated operator is the MSE loss.

**Optimization Objective**:
a) Image-matching Reconstruction Loss: [formula illegible]
b) Feature Reconstruction Loss: [formula illegible]
c) The generator optimization function: [formula illegible]
d) The discriminator optimization function: [formula illegible]

**Experiments** (Empirical Study):
- Competitive generation quality and diversity on the three datasets;
- Generation used as data augmentation achieves SOTA accuracy on downstream classification;
- To highlight the effectiveness of our method, we disable the corresponding components for ablation studies;
- In addition, exploring the generation performance under different k-shot image generation settings shows that F2DGAN has good generalization ability.
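The FDMM portion of the poster describes histogram matching through empirical CDFs, and notes that the module is implemented with a sorting algorithm. Purely as an illustration of that general idea (not the authors' code; the function name and shapes are hypothetical), a minimal NumPy sketch of mapping one feature vector's empirical distribution onto another's:

```python
import numpy as np

def empirical_cdf_match(source, reference):
    """Map each source value to the reference value at the same
    empirical-CDF quantile (classic histogram matching via sorting)."""
    src = np.asarray(source, dtype=np.float64)
    ref = np.sort(np.asarray(reference, dtype=np.float64))
    # Double argsort gives each source entry's rank; rank/n is its quantile.
    ranks = np.argsort(np.argsort(src))
    quantiles = ranks / len(src)
    # Evaluate the inverse reference CDF at those quantiles.
    idx = np.clip((quantiles * len(ref)).astype(int), 0, len(ref) - 1)
    return ref[idx]

# Two toy "feature vectors" of the same class with different distributions.
f1 = np.array([0.1, 0.5, 0.9, 0.3])
f2 = np.array([1.0, 2.0, 3.0, 4.0])
matched = empirical_cdf_match(f1, f2)  # -> [1., 3., 4., 2.]
# The result keeps f1's ordering but takes its values from f2's distribution.
assert np.array_equal(np.argsort(matched), np.argsort(f1))
```

This sketch only shows the one-dimensional matching step; how the matched feature is fused back and where the stop-gradient is applied are specific to the paper and not recoverable from the transcription.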
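The VFLM portion optimizes a variational evidence lower bound, with a note suggesting an MSE-style reconstruction term. As a hedged toy sketch of such an objective (the poster's exact losses are illegible; all names here are illustrative, and `decode` stands in for a decoder network):

```python
import numpy as np

rng = np.random.default_rng(0)

def negative_elbo(x, mu, log_var, decode):
    """Toy negative ELBO for a diagonal-Gaussian posterior:
    MSE reconstruction plus KL(q(z|x) || N(0, I))."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps       # reparameterization trick
    recon = np.mean((decode(z) - x) ** 2)      # reconstruction term (MSE)
    # Closed-form KL between N(mu, diag(exp(log_var))) and N(0, I).
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + kl

x = np.ones(4)
# With a perfect decoder and a posterior equal to the prior, the loss is 0.
loss = negative_elbo(x, np.zeros(4), np.zeros(4), lambda z: x)
```

The design point the poster makes is that sampling z rather than using a deterministic feature is what injects diversity into the fused semantics.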
**Acknowledgements**: This work was supported by the Natural Science Foundation of China [grant number partially illegible] and the Digital Silk Road Shanghai International Joint Lab of Trustworthy Intelligent Software. (Poster number: 348)