This image features a research poster presentation from Show Lab at the National University of Singapore (NUS). The poster, titled "DynVideo-E: Harnessing Dynamic NeRF for Large Motion Human-Centric Video Editing," presents a video-3D representation approach to challenging human-centric video editing: editing videos in which both the subject and the background undergo large motion and viewpoint changes, while maintaining long-term consistency. The research team, consisting of Jia-Wei Liu, Yan-Pei Cao, Jay Zhangjie Wu, Weijia Mao, and others, showcases experimental results through a series of images demonstrating the method's ability to edit the human subject and restyle the background. Comparisons on the poster indicate that the model outperforms state-of-the-art techniques by 50% to 95% in terms of human preference on two challenging datasets. The poster is divided into clear sections covering the problem definition, the highlights of the approach, and the experimental results, with visual examples emphasizing the high temporal and motion consistency of DynVideo-E's edits.

Text transcribed from the image:

DynVideo-E: Harnessing Dynamic NeRF for Large Motion Human-Centric Video Editing
Jia-Wei Liu¹, Yan-Pei Cao¹,³, Jay Zhangjie Wu¹, Weijia Mao
¹ Show Lab, ² National University of Singapore

Highlight: Video-3D Representation for Challenging Human-Centric Video Editing

Problem Definition: Given a human-centric video and a subject and background image, how to edit it?
Challenges: Long-term consistency for large motion and viewpoint changes!

[Reference images: Ref Subject, Ref Style Background, Ref Style]

- Video-3D representation with a set of image-based 3D dynamic human editing designs.
- Significantly outperforms SOTAs on two challenging datasets by 50% ~ 95% for human preference.

Experimental Results
DynVideo-E edits foreground and background with high temporal and motion consistency!
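To make the "video-3D representation" highlight more concrete: the general dynamic-NeRF idea is to warp each frame's sample points through a time-conditioned deformation field into a shared canonical space and query a single radiance field there, so that an edit applied to the canonical model propagates consistently to every frame. The sketch below illustrates only that generic pattern; the class names, network sizes, and overall structure are assumptions for illustration and are not DynVideo-E's actual implementation.

```python
# Minimal sketch of a dynamic-NeRF-style "video-3D representation" query:
# observation-space points at time t are deformed into a canonical space,
# where one time-independent (and therefore consistently editable) field lives.
import torch
import torch.nn as nn


class DeformationField(nn.Module):
    """Predicts a per-point offset mapping observation space at time t
    into the shared canonical space (hypothetical module)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3) sample points, t: (N, 1) normalized frame time
        return pts + self.mlp(torch.cat([pts, t], dim=-1))


class CanonicalNeRF(nn.Module):
    """Time-independent radiance field; editing this single canonical model
    is what keeps the edit consistent across all rendered frames."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        out = self.mlp(pts)
        rgb = torch.sigmoid(out[:, :3])
        sigma = torch.relu(out[:, 3:])
        return torch.cat([rgb, sigma], dim=-1)


def query_dynamic_nerf(deform, canonical, pts, t):
    """Per-frame query: warp points to canonical space, then read
    colour/density from the shared canonical field."""
    return canonical(deform(pts, t))


if __name__ == "__main__":
    deform, canonical = DeformationField(), CanonicalNeRF()
    pts = torch.rand(1024, 3)        # ray samples for one frame
    t = torch.full((1024, 1), 0.25)  # normalized time of that frame
    print(query_dynamic_nerf(deform, canonical, pts, t).shape)  # (1024, 4)
```

Under this framing, the "image-based 3D dynamic human editing designs" mentioned on the poster operate on the canonical human model, which is why the resulting edits stay consistent under large motion and viewpoint changes.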