DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields

Accepted at IEEE RA-L and ICRA 2025

Nicolas Schischka* (1), Hannah Schieber* (2,4), Mert Asim Karaoglu* (1,3), Melih Görgülü (1),
Florian Grötzner (1), Alexander Ladikos (3), Daniel Roth (4), Nassir Navab (1,5) and Benjamin Busam (1)

* indicates equal contribution

(1) Technical University of Munich, Munich, Germany
(2) Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
(3) ImFusion GmbH, Munich, Germany
(4) Technical University of Munich, TUM University Hospital, Munich, Germany
(5) Johns Hopkins University, Baltimore, MD, USA



Figure: Interactive before/after comparison of COLMAP + HexPlane versus DynaMoN (ours).



We present DynaMoN, a motion-aware, fast, and robust camera localization approach for novel view synthesis. DynaMoN handles not only the motion of known object classes via semantic segmentation masks but also that of unknown objects via motion segmentation masks. Furthermore, it retrieves camera poses faster and more robustly than classical SfM approaches, enabling a more accurate 4D scene representation. DynaMoN outperforms state-of-the-art dynamic camera localization approaches and achieves better novel view synthesis results.
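The masking idea above can be sketched as a union of the two binary masks, followed by sampling rays only from the remaining static pixels. This is a minimal illustration with hypothetical helper names, not the paper's actual implementation:

```python
import numpy as np


def combine_dynamic_masks(semantic_mask, motion_mask):
    """Union of the known-class (semantic) and generic motion masks.

    Both inputs are boolean H x W arrays where True marks a pixel
    believed to belong to dynamic content. The union is excluded from
    pose estimation and ray sampling. (Illustrative helper only.)
    """
    return np.logical_or(semantic_mask, motion_mask)


def sample_static_rays(dynamic_mask, n_rays, rng=None):
    """Sample ray pixel coordinates from static regions only."""
    rng = np.random.default_rng(rng)
    static_idx = np.flatnonzero(~dynamic_mask.ravel())
    chosen = rng.choice(static_idx, size=n_rays, replace=False)
    # Convert flat indices back to (row, col) pixel coordinates.
    return np.unravel_index(chosen, dynamic_mask.shape)
```

Sampling only static pixels means every training ray is consistent across frames, which is what makes both the pose optimization and the radiance-field fit more stable in dynamic scenes.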


Abstract


The accurate reconstruction of dynamic scenes with neural radiance fields is significantly dependent on the estimation of camera poses. Widely used structure-from-motion pipelines encounter difficulties in accurately tracking the camera trajectory when faced with separate dynamics of the scene content and the camera movement. To address this challenge, we propose Dynamic Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN). DynaMoN utilizes semantic segmentation and generic motion masks to handle dynamic content for initial camera pose estimation and statics-focused ray sampling for fast and accurate novel-view synthesis. Our novel iterative learning scheme switches between training the NeRF and updating the pose parameters for an improved reconstruction and trajectory estimation quality. The proposed pipeline shows significant acceleration of the training process. We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset. DynaMoN improves over the state-of-the-art both in terms of reconstruction quality and trajectory accuracy. We plan to make our code public to enhance research in this area.
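The iterative scheme described in the abstract, alternating between fitting the scene model and refining the pose parameters, can be illustrated with a toy block-coordinate-descent loop. All names below are hypothetical and the losses are stand-ins; the actual DynaMoN training procedure differs:

```python
import numpy as np


def alternating_refinement(grad_wrt_model, grad_wrt_poses,
                           model, poses, n_rounds=10, steps=50, lr=1e-2):
    """Toy alternating optimization (illustrative, not the paper's code).

    Each round first holds the poses fixed while taking gradient steps
    on the scene model, then holds the model fixed while taking gradient
    steps on the pose parameters.
    """
    for _ in range(n_rounds):
        for _ in range(steps):          # model update phase, poses frozen
            model = model - lr * grad_wrt_model(model, poses)
        for _ in range(steps):          # pose update phase, model frozen
            poses = poses - lr * grad_wrt_poses(model, poses)
    return model, poses
```

On a simple quadratic objective both blocks converge to their optima, which mirrors why alternating the two parameter groups can jointly improve reconstruction and trajectory estimates.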



Architecture



Results

Visual Improvements

In dynamic scenes, DynaMoN yields visibly better renderings than the COLMAP + HexPlane baseline.



Citation

@misc{schischka2024dynamon,
   title={DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields},
   author={Nicolas Schischka and Hannah Schieber and Mert Asim Karaoglu and Melih Görgülü and Florian Grötzner and Alexander Ladikos and Daniel Roth and Nassir Navab and Benjamin Busam},
   year={2024},
   eprint={2309.08927},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}