The image shows a research poster on display at a conference, presented in the "Highlight" section. The poster's title is only partially visible, ending in "…nstruction via Diffusion"; given its focus on diffusion-based motion reconstruction, it most likely ends in "…Reconstruction via Diffusion". Logos indicate affiliations with ETH Zürich, the Computer Vision and Learning Group (VLG), and Meta, and a CVPR Seattle, WA banner identifies the venue. The poster hangs at board number 182. It presents research on reconstructing global motion and local body pose with a diffusion-based approach, credited to authors whose names appear at the top of the poster. Diagrams and equations outline the methodology, and tables compare the proposed method against prior work, showing improvements in accuracy and motion plausibility, including reduced foot skating. Qualitative motion reconstructions illustrate the method's effectiveness, and the authors claim it is 30× faster than the previous state-of-the-art method HuMoR during inference. Results on synthetic and real datasets are shown with accompanying figures and metrics. The scene is well lit and appears to be a professional academic event, with several other research posters visible nearby.

Text transcribed from the image (cleaned up; bracketed notes mark illegible or partially legible passages):

Highlight
…nstruction via Diffusion (title partially visible)
Authors (partially visible): …adlecek, Siyu Tang, Federica Bogo
ETH Zürich · Meta · VLG (Computer Vision and Learning Group) · CVPR, Seattle, WA
Board 182

…fusing Global and Local Motion (heading partially legible)
- Global trajectory: R_0 = D_R(R_t, t, c_R)
- Local body pose: (R_0, P_0) = D_P((R_0, P_t), t, c_P)
- Pipeline diagram, inference iteration = 1: TrajNet denoises the noisy global trajectory R_t into R_0, then PoseNet denoises the noisy local body pose P_t into P_0, conditioned on R_0
- Training on: AMASS

Global Motion Reconstruction
- "…ajNet with local body pose … global motion at inference time" (partially legible)
- Pipeline diagram, inference iteration > 1: TrajControl additionally feeds the local body pose into TrajNet before PoseNet runs
- Ablation on the PROX dataset (RGB): Skating ↓ 0.165 / 0.116 and Accel ↓ 2.7 / 2.2 (row labels only partially legible) → TrajControl improves motion plausibility

Experiments
- Test on: AMASS (synthetic noise + occlusions), PROX (RGB-D/RGB), EgoBody (RGB)
- Evaluation metrics — accuracy: MPJPE; physical plausibility: acceleration, foot skating, foot–floor penetration
- Results on AMASS (column headers partially legible):
  Method | GMPJPE-vis ↓ | GMPJPE-occ ↓ | GMPJPE-all ↓ | Cont ↑ | Skat ↓
  VPoser-t | 33.0 | 242.6 | 109.2 | — | 0.219
  HuMoR [67] | 42.4 | 167.9 | 88.0 | 0.68 | 0.230
  MDM++ | 36.2 | 71.9 | 49.2 | 0.94 | 0.102
  Ours | 21.8 | 57.4 | 34.8 | 0.95 | 0.078
  → >30% improvement in accuracy
- Results on PROX (RGB-D and RGB), comparing LEMO [100], HuMoR [67], PhaseMP [72], and Ours on Skating ↓, Accel ↓, and Dist ↓; most entries are only partially legible (e.g., RGB-D skating: LEMO 0.176, HuMoR 0.117, Ours 0.038)
  → >67% (RGB-D) / >17% (RGB) improvement in foot skating
- Qualitative results on PROX (frames labeled Input, HuMoR, Ours, GT): no trajectory–pose correlation → foot skating
- 30× faster than HuMoR during inference!

[The transcription also captured partially legible text from an adjacent poster (University of Zurich / 上海科技大学) about event cameras, with sections "Motivation", "Contributions", "What is an event camera?", "Advantages", and "Multiple Solutions"; its body text is not legible.]
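To make the control flow implied by the poster's equations and pipeline diagrams concrete, here is a minimal Python sketch of the two-stage iterative inference described above (TrajNet denoises the global trajectory, PoseNet denoises the local body pose, and from the second iteration onward the recovered pose is fed back into the trajectory stage, the role attributed to TrajControl). All function names, tensor shapes, and the stub "denoisers" below are illustrative assumptions, not the poster authors' code or trained models.

```python
import numpy as np

def traj_net(R_t, t, pose_hint=None):
    """Stand-in for the trajectory denoiser D_R: maps a noisy global
    trajectory R_t at diffusion step t to a clean estimate R_0.
    `pose_hint` mimics the TrajControl conditioning used when the
    inference iteration is > 1 (placeholder logic only)."""
    R_0 = np.asarray(R_t, dtype=float)
    if pose_hint is not None:
        # Toy coupling: nudge the trajectory using a summary of the pose.
        R_0 = R_0 + 0.01 * pose_hint.mean(axis=-1, keepdims=True)
    return R_0

def pose_net(R_0, P_t, t):
    """Stand-in for the pose denoiser D_P: maps (R_0, P_t) at step t to
    clean estimates (R_0, P_0)."""
    return R_0, np.asarray(P_t, dtype=float)

def reconstruct(R_noisy, P_noisy, num_steps=10, num_iterations=2):
    """Iterative inference as sketched on the poster: iteration 1 denoises
    trajectory and pose without pose feedback; iterations > 1 condition the
    trajectory denoiser on the recovered local body pose."""
    R, P = R_noisy, P_noisy
    pose_hint = None
    for _ in range(num_iterations):
        for t in reversed(range(num_steps)):
            R = traj_net(R, t, pose_hint=pose_hint)  # R_0 = D_R(R_t, t, c_R)
            R, P = pose_net(R, P, t)                 # (R_0, P_0) = D_P((R_0, P_t), t, c_P)
        pose_hint = P  # feed the denoised pose back into the next iteration
    return R, P

if __name__ == "__main__":
    frames, traj_dim, pose_dim = 120, 3, 63  # arbitrary illustrative sizes
    R0, P0 = reconstruct(np.random.randn(frames, traj_dim),
                         np.random.randn(frames, pose_dim))
    print(R0.shape, P0.shape)  # (120, 3) (120, 63)
```

The structural point this sketch illustrates is that the trajectory denoiser only receives a pose signal from the second inference iteration onward, which is what the poster credits for restoring trajectory–pose correlation and reducing foot skating.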