
Generating Novel 3D Models with Image Data via a GAN-based Approach

Radiance fields and implicit surfaces are powerful tools for creating photorealistic images and 3D models. However, each technique has been limited to specific applications, and the two have not been integrated into a single model. In this article, we present 3DGEN, a GAN-based generative model that unifies radiance fields and implicit surfaces, enabling the generation of novel objects after training on a dataset of only 2D images.

Background

Radiance fields are neural networks that represent a complex scene or object volumetrically: queried with a 3D position (and, typically, a viewing direction), the network returns a color and a density, which can be composited along camera rays to render photorealistic views from any angle. Implicit surfaces, on the other hand, represent a 3D shape with a mathematical function, commonly a signed distance function whose zero level set defines the object's surface.
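To make these representations concrete, here is a minimal sketch in PyTorch. The architectures and sizes are purely illustrative and are not taken from the paper; the point is the input/output contract of each representation.

```python
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    """Toy radiance field: maps a 3D point and view direction to (RGB, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyz, view_dir):
        out = self.net(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

class ImplicitSurface(nn.Module):
    """Toy implicit surface: a signed distance function; its zero level set is the shape."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)  # signed distance from xyz to the surface
```

Real systems typically add positional encodings and deeper networks, but these two input/output contracts are what the rest of the pipeline builds on.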
Our proposed model, 3DGEN, combines the two techniques by learning a distribution over radiance fields and implicit surfaces from a dataset of 2D images. Sampling from this distribution yields new objects of the same class, which can then be rendered from any viewpoint and exported to a mesh-based representation.
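A common way to turn a single neural field into a generative family of objects is to condition the network on a latent code. The article does not specify 3DGEN's exact conditioning mechanism, so the sketch below (hypothetical names and sizes) is only meant to illustrate the idea.

```python
import torch
import torch.nn as nn

class GenerativeField(nn.Module):
    """Hypothetical latent-conditioned field: one network represents a whole
    class of objects, indexed by a latent code z."""
    def __init__(self, z_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density at the queried point
        )

    def forward(self, xyz, z):
        # Broadcast one object code over every queried 3D point.
        z = z.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, z], dim=-1))

# Generating a "new object" then reduces to drawing a fresh latent code:
field = GenerativeField()
z = torch.randn(1, 64)
outputs = field(torch.rand(1024, 3), z)  # query 1024 random points
```

Under a design like this, "learning a distribution of radiance fields and implicit surfaces" amounts to learning one set of network weights together with a simple prior over z, here a standard Gaussian.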

Method

Our method trains a neural network adversarially on a large dataset of 2D images so that it captures an underlying distribution of radiance fields and implicit surfaces. At inference time, we generate new objects by sampling from this learned distribution, and we provide an algorithm for rendering the samples from any viewing angle and exporting them to a mesh-based representation.
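The article does not detail the rendering algorithm, but radiance fields are conventionally rendered with the volume-rendering quadrature sketched below: sample points along each camera ray, query the field, and alpha-composite the results. The function name and tensor shapes are illustrative rather than code from the paper.

```python
import torch

def volume_render(rgb, sigma, deltas):
    """Alpha-composite samples along each ray (standard volume-rendering
    quadrature for radiance fields).

    rgb:    (..., N, 3) per-sample colors
    sigma:  (..., N)    per-sample densities
    deltas: (..., N)    distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)   # per-sample opacity
    # Accumulated transmittance: how much light survives to reach each sample.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[..., :-1]
    weights = alpha * trans                    # contribution of each sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)  # (..., 3) pixel colors
```

Because rays can be cast from any camera pose, this one routine is what makes rendering views from arbitrary angles possible; in GAN training generally, it is also what lets a discriminator that only ever sees 2D images supervise a 3D representation.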

Results

We evaluate our model on several datasets of 2D images and show that it generates high-quality 3D models with minimal artifacts. We also demonstrate rendering from arbitrary viewpoints and export to a mesh-based representation, making our model directly applicable to 3D content creation.
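The article does not say how the mesh export works, but the standard route from an implicit surface to a triangle mesh is marching cubes over a sampled grid of signed distances. The sketch below uses scikit-image's marching_cubes and assumes an SDF network like the ImplicitSurface above; the grid resolution and bounding box are arbitrary illustrative choices.

```python
import torch
from skimage import measure  # pip install scikit-image

@torch.no_grad()
def extract_mesh(sdf, resolution=64, bound=1.0):
    """Sample the SDF on a dense grid and extract its zero level set
    as a triangle mesh via marching cubes."""
    xs = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    values = sdf(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # marching_cubes needs the level (0 for an SDF) to be crossed somewhere
    # inside the grid, i.e. the shape must fit within the bounding box.
    verts, faces, normals, _ = measure.marching_cubes(values.cpu().numpy(), level=0.0)
    # Rescale vertices from grid indices back to world coordinates.
    verts = verts / (resolution - 1) * (2 * bound) - bound
    return verts, faces
```

The resulting vertex and face arrays can be written to any standard mesh format, which is what makes generated objects usable in conventional 3D content-creation tools.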

Conclusion

In this article, we presented 3DGEN, a generative model that unifies radiance fields and implicit surfaces for 3D generation. The model learns from a dataset of only 2D images, generates novel objects of the training class, renders them from any viewpoint, and exports them to mesh-based representations. We believe this work has the potential to democratize 3D content creation and open up new possibilities for creating realistic digital objects.