TY - GEN
T1 - Accurate Light Field Depth Estimation via an Occlusion-Aware Network
AU - Guo, Chunle
AU - Jin, Jing
AU - Hou, Junhui
AU - Chen, Jie
N1 - Funding Information:
This work was supported in part by the HIRP project under Grant 9231332, in part by the Hong Kong RGC under Grant 9048123, and in part by the Basic Research General Program of Shenzhen Municipality under Grant JCYJ20190808183003968. *Junhui Hou ([email protected]) is the corresponding author.
PY - 2020/7
Y1 - 2020/7
N2 - Depth estimation is a fundamental problem for light field-based applications. Although recent learning-based methods have proven effective for light field depth estimation, they still have trouble handling occlusion regions. In this paper, by leveraging an explicitly learned occlusion map, we propose an occlusion-aware network capable of estimating accurate depth maps with sharp edges. Our main idea is to separate depth estimation on non-occlusion and occlusion regions, as they exhibit different properties with respect to the light field structure, i.e., obeying and violating the angular photo-consistency constraint, respectively. To this end, our network involves three modules: the occlusion region detection network (ORDNet), the coarse depth estimation network (CDENet), and the refined depth estimation network (RDENet). Specifically, ORDNet predicts the occlusion map as a mask, while under the guidance of the resulting occlusion map, CDENet and RDENet focus on depth estimation in non-occlusion and occlusion areas, respectively. Experimental results show that our method achieves better performance on the 4D light field benchmark, especially in occlusion regions, when compared with current state-of-the-art light field depth estimation algorithms.
AB - Depth estimation is a fundamental problem for light field-based applications. Although recent learning-based methods have proven effective for light field depth estimation, they still have trouble handling occlusion regions. In this paper, by leveraging an explicitly learned occlusion map, we propose an occlusion-aware network capable of estimating accurate depth maps with sharp edges. Our main idea is to separate depth estimation on non-occlusion and occlusion regions, as they exhibit different properties with respect to the light field structure, i.e., obeying and violating the angular photo-consistency constraint, respectively. To this end, our network involves three modules: the occlusion region detection network (ORDNet), the coarse depth estimation network (CDENet), and the refined depth estimation network (RDENet). Specifically, ORDNet predicts the occlusion map as a mask, while under the guidance of the resulting occlusion map, CDENet and RDENet focus on depth estimation in non-occlusion and occlusion areas, respectively. Experimental results show that our method achieves better performance on the 4D light field benchmark, especially in occlusion regions, when compared with current state-of-the-art light field depth estimation algorithms.
KW - Deep neural network
KW - Depth estimation
KW - Light fields
KW - Occlusion
UR - http://www.scopus.com/inward/record.url?scp=85090381601&partnerID=8YFLogxK
U2 - 10.1109/ICME46284.2020.9102829
DO - 10.1109/ICME46284.2020.9102829
M3 - Conference proceeding
AN - SCOPUS:85090381601
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - 2020 IEEE International Conference on Multimedia and Expo, ICME 2020
PB - IEEE Computer Society
T2 - 2020 IEEE International Conference on Multimedia and Expo, ICME 2020
Y2 - 6 July 2020 through 10 July 2020
ER -