TY - GEN
T1 - Foreground-based depth map generation for 2D-to-3D conversion
AU - Lee, Ho Sub
AU - Cho, Sung In
AU - Bae, Gyu Jin
AU - Kim, Young Hwan
AU - Kim, Hi Seok
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/7/27
Y1 - 2015/7/27
N2 - This paper proposes a foreground-based approach to generating a depth map to be used for 2D-to-3D conversion. For a given input image, the proposed approach determines whether the image is an object-view (OV) scene or a non-object-view (NOV) scene, depending on the existence of foreground objects that are clearly distinguishable from the background. If the input image is an OV scene, the proposed approach extracts the foreground using block-wise background modeling and performs segmentation using adaptive background region selection and color modeling. It then performs segment-wise depth merging and cross bilateral filtering (CBF) to generate the final depth map. For NOV scenes, on the other hand, the proposed approach uses a conventional color-based depth map generation method [9], which involves simple operations but provides a 3D depth map of good quality. Human viewers are usually more sensitive to the quality of depth maps and 3D images for OV scenes than for NOV scenes, so the proposed approach can improve depth map quality for OV scenes compared with using the conventional methods alone. The performance of the proposed approach was evaluated through subjective evaluation after 2D-to-3D conversion on a 3D display, and it provided the best depth quality and visual comfort among the benchmark methods.
AB - This paper proposes a foreground-based approach to generating a depth map to be used for 2D-to-3D conversion. For a given input image, the proposed approach determines whether the image is an object-view (OV) scene or a non-object-view (NOV) scene, depending on the existence of foreground objects that are clearly distinguishable from the background. If the input image is an OV scene, the proposed approach extracts the foreground using block-wise background modeling and performs segmentation using adaptive background region selection and color modeling. It then performs segment-wise depth merging and cross bilateral filtering (CBF) to generate the final depth map. For NOV scenes, on the other hand, the proposed approach uses a conventional color-based depth map generation method [9], which involves simple operations but provides a 3D depth map of good quality. Human viewers are usually more sensitive to the quality of depth maps and 3D images for OV scenes than for NOV scenes, so the proposed approach can improve depth map quality for OV scenes compared with using the conventional methods alone. The performance of the proposed approach was evaluated through subjective evaluation after 2D-to-3D conversion on a 3D display, and it provided the best depth quality and visual comfort among the benchmark methods.
KW - 2D-to-3D conversion
KW - background modeling
KW - depth map generation
KW - foreground extraction
KW - scene classification
UR - http://www.scopus.com/inward/record.url?scp=84946203974&partnerID=8YFLogxK
U2 - 10.1109/ISCAS.2015.7168857
DO - 10.1109/ISCAS.2015.7168857
M3 - Conference contribution
AN - SCOPUS:84946203974
T3 - Proceedings - IEEE International Symposium on Circuits and Systems
SP - 1210
EP - 1213
BT - 2015 IEEE International Symposium on Circuits and Systems, ISCAS 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - IEEE International Symposium on Circuits and Systems, ISCAS 2015
Y2 - 24 May 2015 through 27 May 2015
ER -