This image shows a detailed academic poster titled "Tactile-Augmented Radiance Fields," presented by Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, and Andrew Owens of the University of Michigan, Yale University, and UC Berkeley. The research focuses on incorporating tactile sensory data into 3D visual scenes, enabling a more immersive and comprehensive understanding of the environment.

Key sections of the poster include:

1. **Visual-Tactile Scenes**: Explanation of how Tactile-Augmented Radiance Fields (TaRFs) integrate vision and touch into a shared 3D space, illustrated by visual and tactile images.

2. **Capturing Vision and Touch Signals**: Detailed breakdown of the capturing setup involving a visual camera and a tactile sensor, including the process for capturing visual and tactile frames and solving for relative pose.

3. **A 3D Visual-Tactile Dataset**: Presentation of representative examples, comparisons to previous datasets, and methodologies for capturing quality visual-tactile data.

4. **Imputing Missing Touch**: Methods for interpolating missing tactile data, qualitative results demonstrating the effectiveness of the approach, and an overview of the technical methods used.

5. **Downstream Tasks**: Examples of tasks benefitting from this integration, such as tactile localization and material classification, with illustrative heatmaps and query results.

6. **Related Works**: A list of referenced works for further reading on the subject matter.

The poster also includes logos of the involved institutions (University of Michigan, Yale University, UC Berkeley) and relevant graphical data representations, such as charts, heatmaps, and visual examples, to support the research findings.
Text transcribed from the image:
[Institution logos: University of Michigan, Yale University, UC Berkeley]
Tactile-Augmented Radiance Fields
Yiming Dou, Fengyu Yang, Yi Liu, Antonio Loquercio, Andrew Owens
University of Michigan · Yale University · UC Berkeley

**Visual-Tactile Scenes**
Tactile-Augmented Radiance Fields (TaRFs) bring vision and touch into a shared 3D space.
[Figure: vision frames with corresponding touch samples and touch predictions]
**A 3D Visual-Tactile Dataset**

Representative examples
[Figure: representative visual-tactile pairs from the captured scenes]
Comparison to previous datasets

| Dataset | Samples | Aligned | Scenario | Source |
| --- | --- | --- | --- | --- |
| ObjectFolder 2.0 |  |  | Object | Synthetic |
| VisGel | 12k |  | Tabletop | Robot |
| ObjectFolder Real | 3.7k |  | Object | Robot |
| SSVTP | 4.6k |  | Tabletop | Robot |
| Touch and Go | 13.9k |  | Sub-scene | Human |
| TaRF (Ours) | 19.3k |  | Full scene | Human |

[Chart: label correspondences compared across Touch & Go, OF 2.0, SSVTP, OF Real, VisGel, and TaRF (ours)]
**Downstream Tasks**

Tactile Localization
Which parts of the image/scene feel like the touch signal?
[Figure: query touch signals and the corresponding localization heatmaps]
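The localization heatmaps above score how well each region of the scene image matches a query touch signal. Below is a minimal sketch of that idea, assuming a shared visual-tactile embedding space; the `PatchEncoder` and `TouchEncoder` are toy, randomly initialized stand-ins (the poster does not specify the model), and the heatmap is simply the cosine similarity between the touch embedding and every visual patch embedding.

```python
# Toy tactile localization: score each image patch by its similarity to a touch query.
import torch
import torch.nn.functional as F

class PatchEncoder(torch.nn.Module):
    """Toy visual encoder: maps an image to a grid of patch embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, dim, kernel_size=16, stride=16)  # 16x16 patches

    def forward(self, image):            # image: (B, 3, H, W)
        return self.conv(image)          # (B, dim, H/16, W/16)

class TouchEncoder(torch.nn.Module):
    """Toy tactile encoder: maps a touch image to a single embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, dim, kernel_size=8, stride=8)

    def forward(self, touch):            # touch: (B, 3, h, w)
        return self.conv(touch).mean(dim=(2, 3))   # (B, dim)

def localization_heatmap(image, touch, visual_enc, touch_enc):
    """Cosine similarity between the touch embedding and every visual patch."""
    patches = F.normalize(visual_enc(image), dim=1)          # (B, D, Hp, Wp)
    query = F.normalize(touch_enc(touch), dim=1)              # (B, D)
    heat = torch.einsum("bdhw,bd->bhw", patches, query)       # (B, Hp, Wp)
    # Upsample back to image resolution for visualization.
    return F.interpolate(heat.unsqueeze(1), size=image.shape[-2:], mode="bilinear")

if __name__ == "__main__":
    img = torch.rand(1, 3, 256, 256)      # placeholder scene image
    tac = torch.rand(1, 3, 64, 64)        # placeholder touch (e.g., GelSight) image
    hm = localization_heatmap(img, tac, PatchEncoder(), TouchEncoder())
    print(hm.shape)                        # torch.Size([1, 1, 256, 256])
```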
Material Classification
[Bar chart: classification accuracy for Material and Hard/Soft categories, comparing models trained on Touch and Go, Touch and Go + ObjectFolder, Touch and Go + TaRF, and VisGel; y-axis 50–100; labeled values include 59.0, 54.7, 54.6, 77.3, 88.7, 87.3, and 79]
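The chart above reports material classification accuracy for models trained on different touch datasets. As a hypothetical illustration of how such a downstream evaluation is commonly run, the sketch below fits a linear probe on frozen touch embeddings; the random features and labels are placeholders, not the TaRF data or the authors' evaluation protocol.

```python
# Hypothetical linear-probe evaluation on frozen touch embeddings.
import torch

def train_linear_probe(feats, labels, num_classes, epochs=100, lr=1e-2):
    """Train a single linear layer on fixed (frozen) features."""
    probe = torch.nn.Linear(feats.shape[1], num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(probe(feats), labels)
        loss.backward()
        opt.step()
    return probe

if __name__ == "__main__":
    torch.manual_seed(0)
    train_x, train_y = torch.randn(512, 128), torch.randint(0, 10, (512,))
    test_x, test_y = torch.randn(128, 128), torch.randint(0, 10, (128,))
    probe = train_linear_probe(train_x, train_y, num_classes=10)
    acc = (probe(test_x).argmax(dim=1) == test_y).float().mean()
    print(f"test accuracy: {acc:.1%}")   # ~10% here, since the data is random
```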
**Capturing Vision and Touch Signals**

Capturing setup
A visual camera and a vision-based tactile sensor are mounted together; the rig records visual frames, tactile frames, and touch samples, and the relative pose [R | t] relates the two sensors.

Capturing process
Given a visual image with known camera pose and an image recorded by the vision-based touch sensor, solve for the relative pose:

$$\min_{R,\,t}\;\frac{1}{M}\sum_{i=1}^{M}\left\| \pi\!\left(K[R \mid t],\, X_i\right) - u_i \right\|$$

Here $\pi(\cdot)$ denotes projection by the matrix $K[R \mid t]$, $K$ is the intrinsics of the tactile sensor, $R$ the relative rotation, $t$ the relative translation, $X_i$ a 3D point in the visual image, $u_i$ the corresponding pixel in the touch image, and $M$ the number of correspondences.
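The objective above can be solved with an off-the-shelf nonlinear least-squares optimizer. The sketch below shows one way to do that, with assumptions not stated on the poster: an axis-angle parametrization of R, SciPy's `least_squares`, and synthetic correspondences in place of real 3D points and touch-image pixels.

```python
# Estimate the camera-to-tactile-sensor relative pose by minimizing reprojection error.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, t, X):
    """pi(K[R|t], X): project 3D points X (N, 3) to pixels (N, 2)."""
    Xc = X @ R.T + t                     # transform into the tactile sensor frame
    uv = Xc @ K.T                        # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

def residuals(params, K, X, u):
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return (project(K, R, t, X) - u).ravel()

def solve_relative_pose(K, X, u):
    """Least-squares estimate of (R, t) from M correspondences (X_i, u_i)."""
    fit = least_squares(residuals, x0=np.zeros(6), args=(K, X, u))
    return Rotation.from_rotvec(fit.x[:3]).as_matrix(), fit.x[3:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[400.0, 0, 120], [0, 400.0, 120], [0, 0, 1]])   # toy intrinsics
    R_true = Rotation.from_rotvec([0.05, -0.02, 0.1]).as_matrix()
    t_true = np.array([0.02, -0.01, 0.05])
    X = rng.uniform([-0.1, -0.1, 0.3], [0.1, 0.1, 0.6], size=(50, 3))  # M = 50 points
    u = project(K, R_true, t_true, X)
    R_est, t_est = solve_relative_pose(K, X, u)
    print(np.allclose(R_est, R_true, atol=1e-4), np.round(t_est, 4))  # should recover the pose
```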
**Imputing Missing Touch**

Method Overview
At a target pose (R, T), a NeRF renders RGB and depth; a latent diffusion model, conditioned on these renderings and starting from Gaussian noise, produces the estimated touch signal.
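The overview above describes rendering RGB and depth from the radiance field and using them to condition a diffusion model that turns Gaussian noise into an estimated touch image. The sketch below mirrors only that data flow; the stub renderer, the tiny denoiser, and the update rule are illustrative placeholders rather than the authors' latent diffusion architecture.

```python
# Toy, untrained pipeline: render RGB-D at a pose, then denoise Gaussian noise into touch.
import torch

def render_rgbd(pose):
    """Stand-in for a trained NeRF: returns RGB (3,H,W) and depth (1,H,W) at `pose`."""
    torch.manual_seed(int(pose.sum().abs() * 1e3) % 2**31)
    return torch.rand(3, 64, 64), torch.rand(1, 64, 64)

class TouchDenoiser(torch.nn.Module):
    """Toy conditional denoiser: predicts the noise in a touch image given RGB-D."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3 + 3 + 1, 32, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy_touch, rgb, depth):
        return self.net(torch.cat([noisy_touch, rgb, depth], dim=0).unsqueeze(0))[0]

@torch.no_grad()
def impute_touch(pose, model, steps=50):
    rgb, depth = render_rgbd(pose)                 # condition from the radiance field
    x = torch.randn(3, 64, 64)                     # start from Gaussian noise
    for _ in reversed(range(steps)):               # simple deterministic denoising loop
        eps = model(x, rgb, depth)                 # predicted noise at this step
        x = x - eps / steps                        # crude update; real samplers differ
    return x

if __name__ == "__main__":
    est = impute_touch(torch.tensor([0.1, 0.2, 0.3, 0.0, 0.0, 0.0]), TouchDenoiser())
    print(est.shape)   # torch.Size([3, 64, 64])
```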
Qualitative Results
[Figure: conditioning image, measured touch, and our estimated touch]
**Related Works**
1. Zhong, Shaohong, Alessandro Albini, Oiwi Parker Jones, Perla Maiolino, and Ingmar Posner. "Touching a NeRF: Leveraging neural radiance fields for tactile sensory data generation."
2. Gao, Ruohan, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, and Jiajun Wu. "ObjectFolder 2.0: A multisensory object dataset for sim2real transfer."