In the image, there is a research poster displayed on a wall, with the UCLA and Duke University logos at the top. Below the title area are two figures standing next to each other, likely example dog images used as reference images. The poster presents CorruptEncoder, a data-poisoning backdoor attack on contrastive learning: the attacker injects poisoned images, built from reference objects of an attacker-chosen target class and unlabeled background images embedded with a trigger, into the unlabeled pre-training dataset so that a downstream classifier built on the backdoored encoder predicts the target class for any trigger-embedded image. The figures illustrate the reference images and poisoned images used by the attack. Text transcribed from the image:

UCLA | Duke University
Code available at https://github.com/jzhang538/CorruptEncoder

Overview

Data poisoning based backdoor attack to contrastive learning (CL): An attacker embeds a backdoor into an encoder by injecting poisoned images into the unlabeled pre-training dataset. A downstream classifier built based on a backdoored encoder predicts an attacker-chosen class (called the target class) for any image embedded with an attacker-chosen trigger.

Attacker's knowledge: The attacker can collect some reference images that include reference objects from the target class and some unlabeled background images. The attacker cannot manipulate the pre-training.

Figure 1. Reference image vs. reference object.

Key idea: CL maximizes the feature similarity between two randomly cropped views of an image. If one view includes a reference object and the other includes the trigger, then maximizing their feature similarity would learn an encoder that produces similar feature vectors for the reference object and any trigger-embedded image. A downstream classifier would then predict the target class for the reference object and for any trigger-embedded image.

Figure 2. Comparing existing attacks with CorruptEncoder: (a) existing attack, (b) CorruptEncoder. Both panels depict maximizing the feature similarity between two views of a poisoned image.

Limitation of existing attacks: Two randomly cropped views of a poisoned image are both from the same reference image, which fails to build strong correlations between the trigger and images in the target class.
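
To make the key idea above concrete, here is a minimal, hypothetical Python sketch of how a poisoned image might be assembled from a background image, a reference object, and a trigger patch so that two random crops can cover the object and the trigger separately. This is not the authors' released code (see the linked repository for the actual implementation); the function name, file paths, and placement parameters are assumptions for illustration only.

```python
# Illustrative sketch, not the authors' CorruptEncoder implementation.
from PIL import Image


def make_poisoned_image(background_path, reference_object_path, trigger_path,
                        object_anchor=(0.0, 0.25), trigger_anchor=(0.75, 0.25)):
    """Paste a reference object and a trigger patch onto an unlabeled
    background image (hypothetical helper).

    The object is placed toward the left and the trigger toward the right so
    that random crops used by contrastive pre-training are likely to contain
    one of them but not both, in the spirit of the poster's key idea.
    """
    background = Image.open(background_path).convert("RGB")
    obj = Image.open(reference_object_path).convert("RGB")
    trigger = Image.open(trigger_path).convert("RGB")

    bg_w, bg_h = background.size
    # Anchor values are fractions of the free space along each axis.
    obj_x = int(object_anchor[0] * (bg_w - obj.width))
    obj_y = int(object_anchor[1] * (bg_h - obj.height))
    trg_x = int(trigger_anchor[0] * (bg_w - trigger.width))
    trg_y = int(trigger_anchor[1] * (bg_h - trigger.height))

    poisoned = background.copy()
    poisoned.paste(obj, (obj_x, obj_y))
    poisoned.paste(trigger, (trg_x, trg_y))
    return poisoned


# Example usage (paths are placeholders); the resulting image would be
# injected into the unlabeled pre-training dataset.
# poisoned = make_poisoned_image("background.jpg", "reference_object.png", "trigger.png")
# poisoned.save("poisoned.jpg")
```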