In computer graphics, dynamic novel view synthesis is the task of rendering realistic images of a scene containing moving objects from arbitrary viewpoints. This is achieved by adding a time dimension to neural rendering approaches that previously assumed static scenes. A popular foundation is NeRF (Neural Radiance Fields), which uses a neural network to learn a scene representation from which photorealistic images can be rendered from any viewpoint. However, the original NeRF formulation assumes a static scene and does not handle moving objects or deformations.
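The rendering step at the heart of NeRF is alpha compositing of density and color samples along each camera ray. A minimal sketch in plain Python (function and parameter names are illustrative, not from any particular implementation):

```python
import math

def composite(densities, colors, deltas):
    """Alpha-composite samples along one ray, NeRF-style.

    densities: per-sample volume density sigma_i
    colors:    per-sample RGB color c_i as (r, g, b) tuples
    deltas:    per-sample segment length delta_i
    Returns the accumulated RGB color of the ray.
    """
    rgb = [0.0, 0.0, 0.0]
    transmittance = 1.0  # T_i: probability the ray reaches sample i unoccluded
    for sigma, c, d in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * d)   # opacity of this segment
        weight = transmittance * alpha       # contribution of sample i
        rgb = [r + weight * ci for r, ci in zip(rgb, c)]
        transmittance *= 1.0 - alpha
    return rgb
```

An empty ray (zero density) composites to black, while a single effectively opaque sample returns its own color.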
To address these limitations, researchers have proposed various methods for dynamic novel view synthesis. One approach conditions the neural field on an explicit time input or a learned time embedding. Another learns a deformation field that maps every 4D point in space and time to a 3D point in a canonical NeRF. Such methods can be improved by distinguishing between foreground and background objects or by leveraging depth priors.
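The deformation-field idea can be sketched as a function that offsets a query point into the canonical frame before the canonical NeRF is evaluated. This is a toy illustration, with made-up names and a trivial translation-only deformation standing in for a learned network:

```python
def canonical_point(x, t, deform):
    """Map a 4D space-time point (x, t) into the canonical frame.

    deform(x, t) returns a 3D offset; the canonical NeRF would then
    be queried at x + deform(x, t) instead of at x directly.
    """
    dx = deform(x, t)
    return tuple(xi + di for xi, di in zip(x, dx))

# Toy deformation: the whole scene translates along +x at unit speed,
# so undoing the motion means shifting queries back by t.
toy_deform = lambda x, t: (-t, 0.0, 0.0)
```

In practice `deform` is itself a neural network trained jointly with the canonical radiance field; the point here is only the 4D-to-3D mapping structure.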
A third family of methods for dynamic novel view synthesis is particle-based, using a more explicit representation than typical NeRF-based approaches. MD-Splatting builds on 3D Gaussian Splatting, which renders a large number of Gaussian ‘splats’, each with a state comprising color, opacity, position, and covariance matrix. This enables real-time rendering of novel views with state-of-the-art quality.
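The per-splat state and its screen-space contribution can be illustrated with a small sketch, assuming a splat has already been projected to 2D. The class and function names are hypothetical, not taken from any splatting codebase:

```python
import math
from dataclasses import dataclass

@dataclass
class Splat2D:
    """Projected state of one Gaussian 'splat' (illustrative fields)."""
    mean: tuple      # 2D screen-space center (x, y)
    cov: tuple       # symmetric 2x2 covariance ((a, b), (b, c))
    color: tuple     # RGB
    opacity: float

def gaussian_weight(splat, px, py):
    """Opacity-scaled Gaussian density of the splat at pixel (px, py)."""
    (a, b), (_, c) = splat.cov
    det = a * c - b * b
    dx, dy = px - splat.mean[0], py - splat.mean[1]
    # Quadratic form d^T Sigma^{-1} d, using the closed-form 2x2 inverse
    q = (c * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det
    return splat.opacity * math.exp(-0.5 * q)
```

At the splat center the weight equals the splat's opacity, and it falls off anisotropically according to the covariance; a renderer would alpha-blend many such weighted colors per pixel in depth order.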
In summary, dynamic novel view synthesis is an important technique in computer graphics that enables the creation of realistic images of moving objects in a scene from any viewpoint. Proposed methods include time-conditioned neural fields, deformation fields into a canonical NeRF, and particle-based approaches such as Gaussian splatting. By leveraging these techniques, researchers can render more realistic and dynamic scenes.