ModalNeRF: Neural Modal Analysis and Synthesis for Free‐Viewpoint Navigation in Dynamically Vibrating Scenes

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 42, No. 4
Main Authors: Petitjean, Automne; Poirier-Ginter, Yohan; Tewari, Ayush; Cordonnier, Guillaume; Drettakis, George
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd / Wiley, 01-07-2023
Description
Summary: Recent advances in Neural Radiance Fields enable the capture of scenes with motion. However, editing the motion is hard; no existing method allows editing beyond the space of motion present in the original video, nor editing based on physics. We present the first approach that allows physically-based editing of motion in a scene captured with a single hand-held video camera, containing vibrating or periodic motion. We first introduce a Lagrangian representation, representing motion as the displacement of particles, which is learned while training a radiance field. We use these particles to create a continuous representation of motion over the sequence, which is then used to perform a modal analysis of the motion via a Fourier transform on the particle displacement over time. The resulting extracted modes allow motion synthesis and easy editing of the motion, while inheriting the radiance field's ability for free-viewpoint synthesis in the captured 3D scene. We demonstrate our new method on synthetic and real captured scenes.
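The core modal-analysis step described in the abstract — a Fourier transform over each particle's displacement through time, from which dominant vibration modes are extracted and edited before resynthesis — can be illustrated in isolation. The sketch below is an assumption-laden toy, not the paper's implementation: the displacement array shape, the synthetic two-mode signal, and the editing step (scaling one frequency bin) are all hypothetical stand-ins for the learned Lagrangian particle trajectories.

```python
import numpy as np

# Hypothetical setup: T timesteps of 1D displacement for N tracked particles.
# In the paper these would come from the learned Lagrangian representation.
T, N = 128, 50
t = np.arange(T)

# Synthetic displacements containing two vibration modes at known
# integer frequency bins f1 and f2 (cycles over the capture window).
f1, f2 = 5, 12
phase = np.random.default_rng(0).uniform(0, 2 * np.pi, N)   # per-particle phase
disp = (0.8 * np.sin(2 * np.pi * f1 * t[:, None] / T + phase)
        + 0.3 * np.sin(2 * np.pi * f2 * t[:, None] / T + phase))   # (T, N)

# Modal analysis: FFT over time for each particle, then average the
# magnitude spectrum across particles to find dominant modes.
spec = np.fft.rfft(disp, axis=0)        # (T//2 + 1, N), complex coefficients
power = np.abs(spec).mean(axis=1)       # mean magnitude per frequency bin
power[0] = 0.0                          # ignore the DC (static) component
dominant = int(np.argmax(power))        # strongest mode's frequency bin
assert dominant == f1                   # the 0.8-amplitude mode wins

# Motion editing: amplify the second mode, then resynthesize the
# displacement field with an inverse FFT.
edited = spec.copy()
edited[f2] *= 3.0                       # boost mode f2 threefold
disp_edit = np.fft.irfft(edited, n=T, axis=0)   # edited (T, N) displacements
```

The editing operates purely in the frequency domain: because each extracted mode is a single complex coefficient per particle, scaling or zeroing a bin changes that mode's amplitude everywhere at once, which is what makes mode-space editing convenient compared to editing raw trajectories.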
ISSN: 0167-7055, 1467-8659
DOI: 10.1111/cgf.14888