The image shows a detailed academic poster titled "DynVideo-E: Harnessing Dynamic NeRF for Large Human-Centric Video Editing", presented by members of the NUS Applied Research Center. The poster is authored by Jia-Wei Liu, Yan-Pei Cao, Jay Zhangjie Wu, Weijia Mao, and the Show Lab group at the National University of Singapore. It highlights the problem of maintaining long-term consistency in human-centric video editing and introduces a method that uses a Video-3D Representation for challenging human-centric video editing. The poster displays results demonstrating the method's capacity to edit both the foreground subject and the background, and explains the challenge posed by large motion and viewpoint changes. It includes visual examples of edited video frames with different backgrounds and artistic styles, and reports experimental results in which the method outperforms state-of-the-art approaches by 50%-95% in human preference evaluations. The caption indicates that the tool, DynVideo-E, achieves high temporal and motion consistency in its edits.

Text transcribed from the image:

NUS ARC
National University of Singapore Applied Research Center

DynVideo-E: Harnessing Dynamic NeRF for La
Jia-Wei Liu¹, Yan-Pei Cao¹, Jay Zhangjie Wu¹, Weijia Mao¹
¹ Show Lab, ² National Uni

Highlight: Video-3D Representation for Challenging Human-Centric Video Editing
Problem Definition: Given a human-centric video and a subject and background image, how to edit it?
Challenges: Long-term consistency for large motion and viewpoint changes!
Video-3D Representation with a set of image-based 3D dynamic human editing designs.
Significantly outperforms SOTAs on two challenging datasets by 50%~95% for human preference.

Experimental Results
DynVideo-E edits foreground and background with high temporal and motion consistency!