We propose an inverse rendering pipeline that simultaneously reconstructs scene geometry, lighting, and spatially-varying materials from a set of multi-view images. Specifically, the proposed pipeline combines volume rendering and physics-based rendering, performed separately in two steps: exploration and exploitation. During the exploration step, our method exploits the compactness of neural radiance fields and a flexible differentiable volume rendering technique to learn an initial volumetric field. Here, we introduce a novel cascaded tensorial radiance field method built on the Canonical Polyadic (CP) decomposition to boost model compactness beyond conventional methods. In the exploitation step, a shading pass that incorporates a differentiable physics-based shading method jointly optimizes the scene's geometry, spatially-varying materials, and lighting under an image reconstruction loss. Experimental results demonstrate that our proposed inverse rendering pipeline, IRCasTRF, outperforms prior work in inverse rendering quality. The final output is highly compatible with downstream applications such as scene editing and advanced simulations. Further details are available on the project page: https://ircasrf.github.io/.
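To illustrate the storage saving that motivates CP-based tensorial radiance fields, the following minimal sketch (an assumption for illustration, not the paper's implementation; dimensions `D` and rank `R` are hypothetical) represents a dense D x D x D volume as a rank-R CP factorization, i.e. a sum of R vector outer products:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a rank-R Canonical Polyadic (CP) factorization
# stores a D x D x D volumetric field as three D x R factor matrices,
# cutting storage from O(D^3) to O(3 * D * R).
D, R = 32, 8
u = rng.standard_normal((D, R))
v = rng.standard_normal((D, R))
w = rng.standard_normal((D, R))

# Dense reconstruction: T[i, j, k] = sum_r u[i, r] * v[j, r] * w[k, r]
T = np.einsum('ir,jr,kr->ijk', u, v, w)

# A single voxel query touches only three factor rows (one per axis),
# which is what makes factorized fields cheap to evaluate during rendering.
i, j, k = 3, 10, 20
val = float(np.sum(u[i] * v[j] * w[k]))
assert np.isclose(val, T[i, j, k])

compression_ratio = (3 * D * R) / D**3  # fraction of dense storage used
print(f"parameters: {3 * D * R} vs dense {D**3} "
      f"({compression_ratio:.3%} of dense)")
```

For D = 32 and R = 8 this stores 768 parameters instead of 32,768, and the same factorization idea extends to the appearance features of a radiance field.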
Proceedings of the ACM International Conference on Multimedia
- Inverse rendering
- Canonical Polyadic decomposition
- Neural radiance fields
- Novel view synthesis
- Bidirectional reflectance distribution function