D-NeRF: Neural Radiance Fields for Dynamic Scenes

We propose D-NeRF, a method for synthesizing novel views of dynamic scenes with complex non-rigid geometries at arbitrary points in time. We optimize an underlying deformable volumetric function from a sparse set of input monocular views, without the need for ground-truth geometry or multi-view images.

Model

The proposed architecture consists of two main blocks: a deformation network $\Psi_t$, which maps the scene at every time instant to a common canonical configuration, and a canonical network $\Psi_x$, which regresses volume density and view-dependent RGB color for every point sampled along a camera ray.
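
Concretely, a point $\mathbf{x}$ sampled along a ray at time $t$ is first displaced into the canonical space and then evaluated there:

$(\mathbf{c}, \sigma) = \Psi_x(\mathbf{x} + \Delta\mathbf{x}, \mathbf{d}), \qquad \Delta\mathbf{x} = \Psi_t(\mathbf{x}, t), \qquad \Psi_t(\mathbf{x}, 0) = 0$

where $\mathbf{d}$ is the viewing direction, $\mathbf{c}$ the emitted color, and $\sigma$ the volume density; the scene at $t = 0$ is taken as the canonical configuration.

The sketch below illustrates this two-block design as two small MLPs in PyTorch. It is a minimal illustration, not the released implementation: the layer widths and depths are placeholders, the positional encodings (see the next section) are omitted, and DeformationNet, CanonicalNet, and query_scene are hypothetical names.

import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    # Psi_t: maps (x, t) to a displacement bringing x into the canonical space.
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (N, 3) sample positions, t: (N, 1) time values in [0, 1].
        return self.mlp(torch.cat([x, t], dim=-1))

class CanonicalNet(nn.Module):
    # Psi_x: maps a canonical-space point and a view direction to (rgb, sigma).
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x, d):
        out = self.mlp(torch.cat([x, d], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # view-dependent RGB color
        sigma = torch.relu(out[..., 3:])    # volume density
        return rgb, sigma

def query_scene(psi_t, psi_x, x, d, t):
    # Deform the point at time t into the canonical space, then query it there.
    # The scene at t = 0 is the canonical one, so its displacement is zero.
    dx = torch.where(t > 0, psi_t(x, t), torch.zeros_like(x))
    return psi_x(x + dx, d)

The composed query slots into a standard NeRF pipeline: the returned (rgb, sigma) pairs along each ray are accumulated by the usual volume-rendering integral to produce a pixel color.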

Time and Space Conditioning
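
Both networks operate on encoded rather than raw inputs: as in the original NeRF, a sinusoidal positional encoding lifts each input coordinate to a set of frequency bands, and in D-NeRF this is applied to the spatial position, the viewing direction, and the time instant alike. Below is a minimal sketch of such an encoding; the function name and the default number of frequency bands are placeholder assumptions, not values from the paper.

import math
import torch

def positional_encoding(p, num_bands=10):
    # p: (N, D) raw coordinates (positions, view directions, or time values).
    # Each coordinate is lifted to [sin(2^k * pi * p), cos(2^k * pi * p)]
    # for k = 0 .. num_bands - 1, as in the original NeRF.
    freqs = (2.0 ** torch.arange(num_bands)) * math.pi     # (num_bands,)
    scaled = p[..., None] * freqs                          # (N, D, num_bands)
    enc = torch.cat([scaled.sin(), scaled.cos()], dim=-1)  # (N, D, 2 * num_bands)
    return enc.flatten(start_dim=-2)                       # (N, D * 2 * num_bands)

Under this scheme, the deformation network would consume the encoded position concatenated with the encoded time, and the canonical network the encoded canonical position concatenated with the encoded view direction.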

Visualization of the Learned Scene Representation

BibTeX

@inproceedings{pumarola2020d,
    title={{D-NeRF: Neural Radiance Fields for Dynamic Scenes}},
    author={Pumarola, Albert and Corona, Enric and Pons-Moll, Gerard and Moreno-Noguer, Francesc},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2021}
}

Publications

  • D-NeRF: Neural Radiance Fields for Dynamic Scenes
    • A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer
    • Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Acknowledgments

This work is supported in part by a Google Daydream Research award and by the Spanish government through the project HuMoUR TIN2017-90086-R, the ERA-Net Chistera project IPALM PCI2019-103386, and the María de Maeztu Seal of Excellence MDM-2016-0656. Gerard Pons-Moll is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans).