In this paper, the authors propose a new approach to representing scenes using neural radiance fields (NeRF). A NeRF represents a 3D scene as a function that maps each point in space (together with a viewing direction) to a color and a volume density, which allows scenes to be rendered efficiently and accurately from arbitrary viewpoints.
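The scene function can be sketched as a small neural network; the toy weights and layer sizes below are hypothetical and only illustrate the input/output contract of a NeRF-style field, not the paper's trained model:

```python
import numpy as np

# Toy NeRF-style scene function (hypothetical weights, illustration only):
# maps a 3D point x and a viewing direction d to an RGB color c and a
# non-negative volume density sigma.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 32))   # input: concatenated (x, d)
W2 = rng.normal(size=(32, 4))   # output: (r, g, b, raw_sigma)

def radiance_field(x, d):
    h = np.tanh(np.concatenate([x, d]) @ W1)   # hidden features
    out = h @ W2
    c = 1.0 / (1.0 + np.exp(-out[:3]))         # sigmoid: colors in [0, 1]
    sigma = np.log1p(np.exp(out[3]))           # softplus: density >= 0
    return c, sigma

c, sigma = radiance_field(np.array([0.1, 0.2, 0.3]),
                          np.array([0.0, 0.0, 1.0]))
```

A renderer would query this function at many points along each camera ray and composite the colors weighted by density; that volume-rendering step is omitted here for brevity.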
The authors improve upon previous methods by incorporating an eikonal loss, which regularizes the learned field toward a valid signed distance function (SDF) whose gradient has unit norm almost everywhere; object surfaces can then be extracted as the zero-level set of the SDF. This yields a more faithful geometric representation of objects with smooth geometry.
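The eikonal regularizer in its standard form penalizes deviations of the gradient norm from 1, i.e. E[(‖∇f(x)‖ − 1)²] over sampled points. A minimal sketch, using an analytic sphere SDF and finite-difference gradients (both chosen here only for illustration):

```python
import numpy as np

# Eikonal loss sketch (standard form, assumed to match the paper's usage):
# a true signed distance function f satisfies ||grad f(x)|| = 1, so the
# loss penalizes (||grad f(x)|| - 1)^2 at sampled points.
def sdf_sphere(x, radius=1.0):
    return np.linalg.norm(x) - radius      # exact SDF of a sphere

def numeric_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)   # central difference
    return g

def eikonal_loss(f, pts):
    norms = [np.linalg.norm(numeric_grad(f, p)) for p in pts]
    return float(np.mean([(n - 1.0) ** 2 for n in norms]))

pts = np.array([[1.0, 0.0, 0.0],
                [0.5, 0.5, 0.5],
                [0.2, -0.3, 0.4],
                [-1.0, 2.0, 0.5]])
loss = eikonal_loss(sdf_sphere, pts)   # near zero for a true SDF
```

In training, the same penalty would be applied to the network's gradients (via automatic differentiation) so that the learned field stays close to a proper distance function.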
To protect privacy, the authors propose a privacy-preserving augmentation: complete RGB data is discarded from color images, and only color-variation details are kept. Specifically, only the gradient magnitude ∥g∥ is retained during image uploads; because the mapping from RGB values to gradient magnitudes is many-to-one, reverse engineering the original RGB data is difficult.
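The many-to-one property can be illustrated with a simple forward-difference gradient magnitude (the exact gradient operator used by the authors is not specified here, so this is only an assumed sketch): adding a constant intensity offset produces a different image with an identical ∥g∥ map, so the original intensities cannot be uniquely recovered.

```python
import numpy as np

# Sketch of the privacy-preserving preprocessing: keep only the image
# gradient magnitude ||g|| and discard the raw intensity values.
def grad_magnitude(img):
    gx = np.diff(img, axis=1, prepend=img[:, :1])  # horizontal gradient
    gy = np.diff(img, axis=0, prepend=img[:1, :])  # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)

rng = np.random.default_rng(2)
gray = rng.random((8, 8))        # stand-in for one image channel
shifted = gray + 0.25            # different image, same local variation
g1 = grad_magnitude(gray)
g2 = grad_magnitude(shifted)
# g1 == g2: two distinct images map to the same uploaded representation
```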
The authors demonstrate the effectiveness of their approach through experiments on several datasets. They show that their method can achieve results comparable to existing methods while preserving privacy.
In summary, this paper presents a new approach to representing scenes with NeRFs that offers efficient and accurate rendering from arbitrary viewpoints. The proposed privacy-preserving augmentation strengthens privacy protection by retaining only the gradient-magnitude information needed for reconstruction, making it difficult to reverse engineer the original RGB data. Experiments on several datasets demonstrate the effectiveness of the approach.
Computer Science, Computer Vision and Pattern Recognition