A research presentation poster titled "DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing" is displayed, presenting work from Show Lab at the National University of Singapore (NUS) and ARC Lab. Among the author names legible in the transcription are Yan-Pei Cao, Jay Zhangjie Wu, and Weijia Mao. The research targets human-centric video editing under large-scale motion and viewpoint change: the video is decomposed into a foreground (a dynamic human representation) and a background image, and image-based 3D dynamic human editing designs are introduced so that edits maintain high temporal and motion consistency.

The central section of the poster showcases a series of visual results representing stages of the editing process, highlighting the transition from original footage to edited versions, including background and style changes. The "Experimental Results" section at the bottom demonstrates various editing applications and scenarios, reporting that DynVideo-E's high-quality edited outputs are preferred over prior approaches by 50%-95% in terms of human preference. The design appears to be aimed at engaging conference attendees or academic peers under the bright conference hall lights.

Text transcribed from the image: "ARC ... Show Lab, National Uni... Yan-Pei Cao, Jay Zhangjie Wu, Weijia ... DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing ... and background image, how to edit it? ... and viewpoint change ... Background ... Style ... image-based 3D dynamic human editing designs ... datasets by 50%-95% for human preference ... Experimental Results ... DynVideo-E edits foreground and background with high temporal and motion consistency"
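The poster's central technical claim, that foreground and background can be edited separately while every frame stays temporally and motion consistent, follows from rendering each frame out of shared 3D representations and alpha-compositing the two renders per pixel. Below is a minimal sketch of that generic compositing step only; the function and variable names are illustrative assumptions, not taken from the DynVideo-E paper or its codebase.

```python
import numpy as np

def composite_over(fg_rgb, fg_alpha, bg_rgb):
    """Standard 'over' compositing: C = alpha * F + (1 - alpha) * B.

    fg_rgb:   (H, W, 3) colors rendered from the dynamic (human) foreground model
    fg_alpha: (H, W)    foreground opacity accumulated along each camera ray
    bg_rgb:   (H, W, 3) colors rendered from the background model
    """
    a = fg_alpha[..., None]  # broadcast alpha across the RGB channels
    return a * fg_rgb + (1.0 - a) * bg_rgb

# Toy usage with random stand-ins for the per-frame renders. Because every
# frame is re-rendered from the same (edited) foreground and background
# representations, an edit made once propagates to all frames consistently.
H, W = 4, 4
rng = np.random.default_rng(0)
for frame in range(3):
    fg_rgb = rng.random((H, W, 3))    # stand-in for the foreground render
    fg_alpha = rng.random((H, W))     # stand-in for accumulated ray opacity
    bg_rgb = np.full((H, W, 3), 0.5)  # stand-in for the background render
    frame_rgb = composite_over(fg_rgb, fg_alpha, bg_rgb)
    assert frame_rgb.shape == (H, W, 3)
```

Per-pixel compositing of separately rendered layers is the standard way such a foreground/background decomposition is turned back into video frames; the paper's actual rendering pipeline may differ in its details.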