This image displays a detailed research poster titled "Learning Continual Compatible Representation for Re-indexing Free Lifelong Person Re-identification." The poster, presented at the CVPR (Computer Vision and Pattern Recognition) conference in Seattle, WA, June 17-21, 2024, showcases work by Zhenyu Cui, Jiahuan Zhou, Xun Wang, Manyu Zhu, and Yuxin Peng from the Wangxuan Institute of Computer Technology, Peking University, and ByteDance Inc. The poster presents techniques for Lifelong Person Re-identification (L-ReID), centered on the Continual Compatible Representation (C²R) scheme. The methodology addresses the re-indexing challenge through three components: Continual Compatible Transfer (CCT), Balanced Compatible Distillation (BCD), and Balanced Anti-forgetting Distillation (BAD). The 'Method' section outlines these approaches, and diagrams illustrate the training, transferring, and testing stages of the workflow. The 'Experiments' section contains comparative tables showing the superior performance of the proposed C²R on several benchmark L-ReID datasets, and graphs illustrate stable state-of-the-art (SOTA) performance across training stages. The 'Conclusion' section proposes C²R for practical L-ReID applications and highlights the balance achieved between anti-forgetting of old knowledge and compatibility with newly accumulated features. QR codes link to a lab homepage, a GitHub page, and a WeChat page for further information and related works.

Text transcribed from the image:

ByteDance
Learning Continual Compatible Representation for Re-indexing Free Lifelong Person Re-identification
Zhenyu Cui¹, Jiahuan Zhou¹, Xun Wang², Manyu Zhu², Yuxin Peng¹*
¹Wangxuan Institute of Computer Technology, Peking University  ²ByteDance Inc

Introduction:
• Lifelong Person Re-identification (L-ReID): Aims to learn from sequentially collected data to match a person across different scenes. Once the model is updated, historical images in the gallery are re-calculated to obtain new features for testing (re-indexing).
[Figure: comparison of ReID with re-indexing (feature replacing) vs. ReID without re-indexing, illustrating data privacy constraints on gallery images, gallery features, and query features]
• Limitations: Re-indexing is infeasible when raw images in the gallery are unavailable due to data privacy concerns, resulting in incompatible retrieval between query and gallery features calculated by different models and limiting L-ReID performance.
• Goal: Achieve effective L-ReID without re-indexing raw images in the gallery, by continuously updating old features in the gallery to make them compatible with new query features.

Method:
• Overview: A Continual Compatible Representation (C²R) scheme is proposed for RFL-ReID to continuously update old features in the gallery and make them compatible with new query features.
➤ Continual Compatible Transfer (CCT): Updates old gallery features continuously and transfers old features into the new feature space, adaptively capturing knowledge from different domains.
➤ Balanced Compatible Distillation (BCD): Achieves compatibility between the transferred features and the new ones, preserving the relationship between the old and the transferred features in a unified feature space.
➤ Balanced Anti-forgetting Distillation (BAD): Eliminates the accumulated forgetting of old knowledge during the continuous transfer, balancing the old and the new discriminative information.
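To make the roles of CCT, BCD, and BAD concrete, the following is a minimal PyTorch-style sketch of a C²R-like training step. The poster does not give the loss formulas, so the module name TransferNet, the similarity-matrix forms of the two distillation losses, and the feature dimension below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a C2R-style training step (illustrative only; the exact
# loss definitions in the paper are not given on the poster).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransferNet(nn.Module):
    """Stand-in for the Continual Compatible Transfer (CCT) network:
    maps old gallery features into the new feature space."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, old_feat: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.mlp(old_feat), dim=-1)


def compat_loss(transferred: torch.Tensor, new: torch.Tensor) -> torch.Tensor:
    """Balanced Compatible Distillation (BCD), sketched here as aligning the
    pairwise similarity structure of transferred and new features."""
    return F.mse_loss(transferred @ transferred.t(), new @ new.t())


def anti_forget_loss(transferred: torch.Tensor, old: torch.Tensor) -> torch.Tensor:
    """Balanced Anti-forgetting Distillation (BAD), sketched here as preserving
    the old feature relationships after transfer."""
    return F.mse_loss(transferred @ transferred.t(), old @ old.t())


if __name__ == "__main__":
    torch.manual_seed(0)
    cct = TransferNet(dim=512)
    old_gallery = F.normalize(torch.randn(32, 512), dim=-1)  # features from the old model
    new_feats = F.normalize(torch.randn(32, 512), dim=-1)    # features from the new model
    transferred = cct(old_gallery)
    loss = compat_loss(transferred, new_feats) + anti_forget_loss(transferred, old_gallery)
    loss.backward()
    print(f"toy C2R-style loss: {loss.item():.4f}")
```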
[Figure: pipeline of the proposed C²R, showing (a) the training stage with the Continual Compatible Transfer (CCT), Balanced Compatible Distillation (BCD), and Balanced Anti-forgetting Distillation (BAD) modules, (b) the transferring stage, and (c) the testing stage, where a query is matched against the updated gallery]

Experiments:
• Experiments on the benchmark L-ReID datasets verify the superiority of our proposed C²R.
[Table: mAP and R@1 under the L-ReID and RFL-ReID settings on Market-1501, CUHK-SYSU, DukeMTMC-ReID, MSMT17-V2, CUHK03, and their average, comparing Joint-Train, SPD [28], LwF [11], CRL [40], AKA [18], MEGE [20], PatchKD [27], CVS* [30], and Ours (C²R)]
• Our C²R achieves stable SOTA performance at each training stage.

• Contribution:
➤ Focus on a practical but challenging problem in the L-ReID task, called Re-indexing Free Lifelong Person Re-identification (RFL-ReID).
➤ Propose a Continual Compatible Representation (C²R) scheme to avoid re-indexing raw images in the old gallery in L-ReID.
➤ A Continual Compatible Transfer (CCT) module and two balanced distillation modules are proposed to achieve L-ReID without re-indexing raw images in the gallery.

• Pipeline (see the code sketch at the end of this transcription):
➤ Training Stage: CCT, BCD, and BAD are jointly optimized.
➤ Transferring Stage: The CCT network is employed to update the old feature set after each training stage in L-ReID.
➤ Testing Stage: Query features calculated by the new model are used to directly match the updated gallery features, achieving re-indexing-free lifelong person re-identification.
➤ The three stages are executed sequentially when facing new data.

Conclusion:
• We propose C²R for the practical and challenging RFL-ReID task, tackling the data privacy issue in lifelong person re-identification.
• Two balanced distillation modules are designed to balance the anti-forgetting of old knowledge with the compatibility to the new model.
• Extensive experiments on several benchmark L-ReID datasets demonstrate the effectiveness of our method.

More Information: [Lab homepage] [GitHub page] [WeChat page] (MIPL)

Related Works:
[1] Zhenyu Cui, et al., "Continual Vision-Language Retrieval via Dynamic Knowledge Rectification", 38th AAAI Conference on Artificial Intelligence (AAAI), 2024.
[2] Zhenyu Cui, et al., "DMA: Dual Modality-Aware Alignment for Visible-Infrared Person Re-Identification", IEEE Transactions on Information Forensics and Security (TIFS), 2024.
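As a companion to the Pipeline section above, here is a minimal sketch of the transferring and testing stages. The helper names update_gallery and retrieve, the linear stand-in for the trained CCT network, and the cosine-similarity matching are illustrative assumptions, not the authors' released code; the intent is only to show that stored gallery features are updated in feature space and then matched directly against new-model query features, with no raw gallery images involved.

```python
# Minimal sketch of re-indexing-free transferring and testing (illustrative only).
import torch
import torch.nn.functional as F


@torch.no_grad()
def update_gallery(cct: torch.nn.Module, gallery_feats: torch.Tensor) -> torch.Tensor:
    """Transferring stage: push stored old gallery features through the CCT
    network so they live in the new model's feature space (no raw images used)."""
    return F.normalize(cct(gallery_feats), dim=-1)


@torch.no_grad()
def retrieve(query_feats: torch.Tensor, gallery_feats: torch.Tensor, topk: int = 5) -> torch.Tensor:
    """Testing stage: new-model query features directly match the updated
    gallery features by cosine similarity; returns ranked gallery indices."""
    sims = F.normalize(query_feats, dim=-1) @ gallery_feats.t()
    return sims.topk(topk, dim=1).indices


if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 512
    cct = torch.nn.Linear(dim, dim)                      # stand-in for the trained CCT network
    old_gallery = F.normalize(torch.randn(100, dim), dim=-1)  # features stored from the old model
    queries = F.normalize(torch.randn(4, dim), dim=-1)        # features from the new model
    gallery = update_gallery(cct, old_gallery)
    print(retrieve(queries, gallery))                    # top-5 gallery indices per query
```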