This photograph captures a research poster titled "Tactile-Augmented Radiance Fields," authored by Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, and Andrew Owens of the University of Michigan, Yale University, and UC Berkeley. The poster appears to be from an academic conference presentation and focuses on the intersection of visual and tactile sensory data. Its main sections are:

1. **Visual-Tactile Scenes**: Introduces Tactile-Augmented Radiance Fields (TaRF), which bring vision and touch into a shared 3D space, illustrated with visual and tactile capture setups and corresponding images.
2. **A 3D Visual-Tactile Dataset**: Presents representative examples from a dataset combining visual and tactile data, with a comparison to previous datasets that underscores its advances.
3. **Capturing Vision and Touch Signals**: Details the capture setup and process, including a visual camera and a tactile sensor, and shows how visual and tactile frames are recorded and registered.
4. **Imputing Missing Touch**: Describes how tactile signals are predicted from visual cues, with touch samples and qualitative results from the method.
5. **Downstream Tasks**:
   - **Tactile Localization**: Identifying which parts of an image or scene correspond to a given touch signal.
   - **Material Classification**: A bar graph comparing material classification results across methods.
6. **Related Works**: A list of relevant prior work, situating the poster within existing research.

Logos of the University of Michigan, Yale University, and UC Berkeley appear at the top of the poster alongside a QR code, which likely links to additional resources. The poster combines visuals, textual descriptions, and data graphs into a comprehensive overview of the research and its findings.
Text transcribed from the image:

UNIVERSITY OF MICHIGAN · Yale University · Berkeley, UNIVERSITY OF CALIFORNIA

**Tactile-Augmented Radiance Fields**
Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens
University of Michigan · Yale University · UC Berkeley

**Visual-Tactile Scenes**
Tactile-Augmented Radiance Fields (TaRF) bring vision and touch into a shared 3D space. [Image panels: Vision, Touch Sample; Vision, Touch Pred.]

**A 3D Visual-Tactile Dataset**
Representative examples [image panels labeled Touch & Go, OF 2.0, SSVTP, OF Real, VisGel, TaRF (ours)]

Comparison to previous datasets:

| Dataset | Samples | Scenario | Source |
| --- | --- | --- | --- |
| ObjectFolder 2.0 | – | Object | Synthetic |
| VisGel | 12k | Tabletop | Robot |
| ObjectFolder Real | 3.7k | Object | Robot |
| SSVTP | 4.6k | Tabletop | Robot |
| Touch and Go | 13.9k | Sub-scene | Human |
| TaRF (Ours) | 19.3k | Full scene | Human |

**Capturing Vision and Touch Signals**
Capturing setup: a visual camera (with pose $[R \mid t]$) records visual frames; a vision-based tactile sensor records tactile frames.

Capturing process: label correspondences between a visual image with known camera pose and the image recorded by the vision-based touch sensor, then solve for the relative pose:

$$\min_{R,\,t}\ \frac{1}{M}\sum_{i=1}^{M}\big\|\pi\big(K\,[R \mid t]\,X_i\big) - u_i\big\|$$

where $\pi$ is the projection, $K$ the intrinsics of the tactile sensor, $R$ the relative rotation, $t$ the relative translation, $X_i$ a 3D point in the visual image, $u_i$ a pixel in the touch image, and $M$ the number of correspondences.

**Method Overview**
Pose $(R, T)$ → NeRF → RGB, Depth; Gaussian Noise → Latent Diffusion (conditioned on RGB and Depth) → Est. Touch

**Imputing Missing Touch**
Qualitative results [image panels: Condition, Measured, Ours]

**Downstream Tasks**
Tactile Localization: Which parts of the image/scene feel like the touch signal? [Query / Heatmap image panels]

Material Classification: [Bar chart comparing classification accuracy on Material and Hard/Soft tasks for models trained on Touch and Go, Touch and Go + ObjectFolder, Touch and Go + TaRF, and VisGel.]

**Related Works**
1. Zhong, Shaohong, Alessandro Albini, Oiwi Parker Jones, Perla Maiolino, and Ingmar Posner. "Touching a NeRF: Leveraging neural radiance fields for tactile sensory data generation."
2. Gao, Ruohan, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, and Jiajun Wu. "ObjectFolder 2.0: A multisensory object dataset for sim2real transfer."
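The relative-pose objective transcribed from the poster can be made concrete with a short sketch. This is not the authors' implementation; the solver choice (SciPy's `least_squares`), the axis-angle parameterization, and all function names are assumptions. It minimizes the mean reprojection error between 3D points `X_i` and their corresponding touch-image pixels `u_i`, given the tactile sensor intrinsics `K`:

```python
# Minimal sketch of the relative-pose solve: find R, t minimizing
# the reprojection error || pi(K [R|t] X_i) - u_i || over M correspondences.
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K_ = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K_ + (1 - np.cos(theta)) * (K_ @ K_)

def reprojection_residuals(params, K, X, u):
    """Residuals pi(K [R|t] X_i) - u_i for all M correspondences."""
    R = rodrigues(params[:3])
    t = params[3:]
    cam = (R @ X.T).T + t             # points in tactile-sensor frame, (M, 3)
    proj = (K @ cam.T).T              # homogeneous pixel coordinates, (M, 3)
    pix = proj[:, :2] / proj[:, 2:3]  # perspective divide: pi(.)
    return (pix - u).ravel()

def solve_relative_pose(K, X, u, x0=None):
    """Least-squares estimate of (R, t) from M point-pixel correspondences."""
    if x0 is None:
        x0 = np.zeros(6)  # start from identity rotation, zero translation
    res = least_squares(reprojection_residuals, x0, args=(K, X, u))
    return rodrigues(res.x[:3]), res.x[3:]
```

In practice the correspondences would come from the labeled visual/touch image pairs described on the poster; here any set of 3D points and matching pixels works.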