In this article, we present GeNVS (Generative Novel View Synthesis), a method for generating novel views of a scene from an input image. Our approach leverages diffusion models, which work like a team of artists collaborating on a painting. The key innovation is that our model integrates appearance attributes from the reference image into the diffusion process, allowing it to generate high-quality images with accurate lighting and shading.
To understand how GeNVS works, let’s break it down into smaller components:
- Reference Image: The input image from which we synthesize new views is called the reference image. Think of it as the blueprint our artistic team works from.
- Diffusion Models: These models generate an image by starting from noise and refining it step by step. Conditioned on the reference image, each denoising step adds detail, such as lighting and shading, until a new view of the scene emerges.
- Appearance Attributes: We extract appearance attributes from the reference image, such as color and texture, and feed them into the diffusion process. This helps the model generate images that are not only visually plausible but also consistent with the reference image.
- 3D-Aware Diffusion Models: GeNVS uses a special type of diffusion model called a 3D-aware diffusion model. These models take the 3D structure of the scene into account, so the generated images are not only visually realistic but also consistent with the 3D layout of the scene. A minimal sketch of how reference-image features can condition the denoiser appears after this list.
- Multi-Reference Images: Our method supports multiple reference images as input, allowing us to generate novel views from different angles and lighting conditions. This is like having a team of artists working from different perspectives to create a stunning landscape painting; a sketch of one way to fuse their contributions follows the list.
- Finetuning on Multi-View Images: We finetune our diffusion models on multi-view images to improve the quality of the generated novel views. Think of it as refining each artist’s skills to create an even more realistic and detailed painting; a sketch of such a finetuning loop is also included after the list.
- Free-Form Portraits: GeNVS can generate novel views from free-form portraits without any quality degradation. This is like having a painter work on a portrait from any angle, capturing the subject’s likeness and expression with remarkable accuracy.
- Quantitative Evaluation: We conduct a thorough evaluation of GeNVS using metrics covering image quality, diversity, and alignment with the reference. The results show that our method outperforms existing state-of-the-art methods in novel view synthesis quality; a small example of one image-quality metric appears after the list.
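
To make the conditioning idea concrete, here is a minimal PyTorch sketch of a denoiser conditioned on features extracted from the reference image. The class names (`RefEncoder`, `CondDenoiser`), the tiny convolutional architectures, and the concatenation-based conditioning are illustrative assumptions, not the GeNVS architecture; in particular, the sketch skips the 3D-aware rendering step and conditions directly on 2D features, only to show where the reference information enters the denoising objective.

```python
import torch
import torch.nn as nn

# Hypothetical modules for illustration only; not the GeNVS architecture.
class RefEncoder(nn.Module):
    """Extracts appearance features (color/texture cues) from the reference image."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1),
        )

    def forward(self, ref_img):           # (B, 3, H, W) -> (B, feat_dim, H, W)
        return self.net(ref_img)

class CondDenoiser(nn.Module):
    """Predicts the noise in a noisy target view, conditioned on reference features."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, noisy_target, ref_feats):
        # Conditioning via channel-wise concatenation of the noisy image and reference features.
        return self.net(torch.cat([noisy_target, ref_feats], dim=1))

# One simplified DDPM-style training step on toy data.
encoder, denoiser = RefEncoder(), CondDenoiser()
ref_img   = torch.rand(2, 3, 64, 64)      # reference views
target    = torch.rand(2, 3, 64, 64)      # ground-truth novel views
noise     = torch.randn_like(target)
alpha_bar = torch.tensor(0.7)             # noise level for one sampled timestep
noisy     = alpha_bar.sqrt() * target + (1 - alpha_bar).sqrt() * noise

pred_noise = denoiser(noisy, encoder(ref_img))
loss = nn.functional.mse_loss(pred_noise, noise)
loss.backward()
```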
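
When several reference images are available, their features must be fused into a single conditioning signal. The sketch below uses mean pooling over references, which is order-invariant and works for any number of inputs; this pooling choice is an assumption made for illustration, not necessarily how GeNVS combines multiple views.

```python
import torch

def aggregate_reference_features(feat_list):
    """Combine per-reference feature maps into one conditioning tensor (mean pooling)."""
    return torch.stack(feat_list, dim=0).mean(dim=0)

# Example: three reference views, each encoded to a (B, C, H, W) feature map.
feats = [torch.rand(2, 32, 64, 64) for _ in range(3)]
cond = aggregate_reference_features(feats)   # (2, 32, 64, 64), same shape as a single view
```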
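
The finetuning step can be pictured as the usual diffusion training loop, but with (reference view, target view) pairs sampled from multi-view captures of the same scene. The loop below reuses the hypothetical encoder and denoiser from the first sketch; the learning rate, step count, and data format are assumptions chosen only to show the shape of the procedure.

```python
import torch
import torch.nn as nn

def finetune_on_multiview(denoiser, encoder, multiview_loader, steps=1000, lr=1e-5):
    """Finetune a pretrained conditional denoiser on multi-view (reference, target) pairs.

    A small learning rate is assumed so the pretrained diffusion prior is preserved
    while the model adapts to cross-view consistency.
    """
    opt = torch.optim.Adam(list(denoiser.parameters()) + list(encoder.parameters()), lr=lr)
    for step, (ref_img, target_img) in zip(range(steps), multiview_loader):
        noise = torch.randn_like(target_img)
        alpha_bar = torch.rand(1)                              # sampled noise level
        noisy = alpha_bar.sqrt() * target_img + (1 - alpha_bar).sqrt() * noise
        pred = denoiser(noisy, encoder(ref_img))
        loss = nn.functional.mse_loss(pred, noise)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoiser, encoder

# Example usage with the modules from the first sketch and a toy "loader":
# loader = [(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)) for _ in range(10)]
# finetune_on_multiview(CondDenoiser(), RefEncoder(), loader, steps=10)
```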
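
For the image-quality axis of the evaluation, a standard full-reference metric such as PSNR can be computed between a generated view and a held-out ground-truth view. PSNR is shown here only as a stand-in for that axis; it is not claimed to be the exact metric used in our evaluation.

```python
import torch

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between a generated view and a ground-truth view."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

generated = torch.rand(3, 64, 64)
reference = torch.rand(3, 64, 64)
print(f"PSNR: {psnr(generated, reference).item():.2f} dB")
```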
In summary, GeNVS is a powerful tool for generating novel views of a scene from an input image, combining the strengths of diffusion models with appearance attributes extracted from the reference image to produce high-quality images with accurate lighting and shading. The method handles multiple reference images, benefits from finetuning on multi-view data, and generates novel views of free-form portraits without loss of quality.