In this paper, we propose a novel approach to modeling clothed humans in computer graphics, the Cloth-based Human Model (CHM). Rather than separating the body and its garments into distinct parts, our method represents them as a single unified entity. This enables more realistic and efficient simulation of clothing deformation, supporting dynamic and diverse avatars in applications such as video games, virtual reality, and 3D animation.
To achieve this, we employ an encoder-decoder architecture that learns to compress the 3D body shape and garment layout into a compact latent space and to decode it back. This latent representation supports efficient storage and manipulation of the avatar's appearance, so complex deformations and animations can be applied without significant computational overhead.
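The paper does not specify the architecture, so the following is only an illustrative sketch of the idea: a mesh's vertex positions are flattened, projected into a compact latent code, and decoded back. The mesh size, latent dimension, and single-layer design are assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VERTS = 6890       # SMPL-sized mesh; an assumption for illustration
LATENT_DIM = 64      # compact latent code, size chosen arbitrarily


def relu(x):
    return np.maximum(x, 0.0)


class AutoEncoderSketch:
    """Toy encoder-decoder: flattened vertices <-> latent code."""

    def __init__(self):
        d_in = N_VERTS * 3
        self.W_enc = rng.normal(0.0, 0.01, (d_in, LATENT_DIM))
        self.W_dec = rng.normal(0.0, 0.01, (LATENT_DIM, d_in))

    def encode(self, verts):
        # compress (N_VERTS, 3) vertex array into a LATENT_DIM vector
        return relu(verts.reshape(-1) @ self.W_enc)

    def decode(self, z):
        # decompress the latent code back into vertex positions
        return (z @ self.W_dec).reshape(N_VERTS, 3)


ae = AutoEncoderSketch()
verts = rng.normal(size=(N_VERTS, 3))
z = ae.encode(verts)
recon = ae.decode(z)
print(z.shape, recon.shape)  # (64,) (6890, 3)
```

In practice such a model would be trained with a reconstruction loss so that editing or animating in the 64-dimensional latent space is far cheaper than manipulating ~20k raw coordinates.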
The key innovation of our method is a "Diffusion-based Skinning" technique, which lets the avatar's surface stretch and conform to the underlying body shape in a physically plausible manner. This yields realistic clothing deformation under motion, for example when the character walks or runs.
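The text does not define the skinning scheme, so here is one common reading, offered as an assumption: skinning weights known on the body surface are diffused to garment vertices by iterated neighbor averaging (a discrete heat diffusion), then used for standard linear blend skinning. The graph, bone count, and transforms below are toy values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_bones = 5, 2
# toy mesh connectivity: a path graph over 5 vertices
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]

# only vertex 0 (on the body) has known skinning weights; the rest
# (garment vertices) start at zero and receive diffused weights
W = np.zeros((n, n_bones))
W[0] = [0.7, 0.3]
known = np.array([True, False, False, False, False])

for _ in range(50):  # discrete heat diffusion of the weights
    W_new = W.copy()
    for i in range(n):
        if not known[i]:
            W_new[i] = np.mean([W[j] for j in adj[i]], axis=0)
    W = W_new

# renormalize so each vertex's weights sum to 1
W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-9)

# linear blend skinning with two toy bone translations
verts = rng.normal(size=(n, 3))
T = np.array([[1.0, 0.0, 0.0],   # bone 0 shifts along x
              [0.0, 1.0, 0.0]])  # bone 1 shifts along y
skinned = verts + W @ T
print(skinned.shape)  # (5, 3)
```

The appeal of diffusing weights rather than authoring them per garment is that any clothing layer draped over the body inherits plausible deformation behavior automatically.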
Another important aspect of our approach is the use of Neural-GIF (Neural Generalized Implicit Functions), which places the avatar's appearance under the control of a neural network. This makes it possible to create highly realistic and diverse avatars without extensive artist training or computer-graphics expertise.
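A pose-conditioned neural field in the spirit of Neural-GIF can be sketched as an MLP that maps a 3D query point plus a pose code to a signed distance and a color. The layer sizes, pose-code dimension, and two-layer design below are assumptions for illustration; the real model is considerably larger and is learned from data.

```python
import numpy as np

rng = np.random.default_rng(2)
POSE_DIM, HIDDEN = 16, 32  # illustrative sizes, not the paper's

# randomly initialized weights stand in for a trained network
W1 = rng.normal(0.0, 0.1, (3 + POSE_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, 4))  # 1 SDF value + 3 RGB channels


def field(points, pose_code):
    """points: (n, 3); pose_code: (POSE_DIM,) -> sdf (n,), rgb (n, 3)."""
    # condition every query point on the same pose code
    x = np.concatenate(
        [points, np.tile(pose_code, (len(points), 1))], axis=1
    )
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    out = h @ W2
    sdf = out[:, 0]                        # signed distance to surface
    rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))  # sigmoid keeps colors in (0, 1)
    return sdf, rgb


pts = rng.normal(size=(8, 3))
sdf, rgb = field(pts, rng.normal(size=POSE_DIM))
print(sdf.shape, rgb.shape)  # (8,) (8, 3)
```

Because the field is queried per point, changing the pose code alone re-poses and re-shades the avatar, with no mesh editing required.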
We demonstrate the effectiveness of our method through experiments and comparisons with existing techniques. The results show that CHM outperforms state-of-the-art methods in both computational efficiency and visual quality, making it a valuable tool for a wide range of applications in computer graphics and beyond.