TY - JOUR
T1 - Temporal Incoherence-Free Video Retargeting Using Foreground Aware Extrapolation
AU - Cho, Sung In
AU - Kang, Suk Ju
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2020
Y1 - 2020
N2 - Video retargeting is a method of adjusting the aspect ratio of a given video to a target aspect ratio. However, temporal incoherence of video content, which frequently occurs during video retargeting, is the most dominant factor degrading the quality of retargeted videos. Current methods for maintaining temporal coherence use all frames of the input video; however, these methods cannot be implemented as on-time systems because of their tremendous computational complexity. To the best of our knowledge, no existing on-time video retargeting method can avoid spatial distortion while perfectly maintaining temporal coherence. In this paper, we propose a novel on-time video retargeting method that perfectly maintains temporal coherence and prevents spatial distortion using only two consecutive input frames. In our method, maximum a posteriori-based foreground-aware block matching is used for the extrapolation that extends the side areas of a given video to adjust its aspect ratio to the target. To maintain the temporal coherence of the extended area, the block-matching result for the backward warping-based extrapolation of the first frame after a scene change is reused for subsequent frames until the next scene change occurs. In addition, we propose a scene scenario-adaptive fallback scheme to prevent the severe distortions that can occur when reusing block-matching results or performing extrapolation-based side extension. Simulation results show that the proposed method improves the bidirectional similarity value, which measures the quality of video retargeting, by up to 10.26 compared with existing on-time video retargeting methods.
AB - Video retargeting is a method of adjusting the aspect ratio of a given video to a target aspect ratio. However, temporal incoherence of video content, which frequently occurs during video retargeting, is the most dominant factor degrading the quality of retargeted videos. Current methods for maintaining temporal coherence use all frames of the input video; however, these methods cannot be implemented as on-time systems because of their tremendous computational complexity. To the best of our knowledge, no existing on-time video retargeting method can avoid spatial distortion while perfectly maintaining temporal coherence. In this paper, we propose a novel on-time video retargeting method that perfectly maintains temporal coherence and prevents spatial distortion using only two consecutive input frames. In our method, maximum a posteriori-based foreground-aware block matching is used for the extrapolation that extends the side areas of a given video to adjust its aspect ratio to the target. To maintain the temporal coherence of the extended area, the block-matching result for the backward warping-based extrapolation of the first frame after a scene change is reused for subsequent frames until the next scene change occurs. In addition, we propose a scene scenario-adaptive fallback scheme to prevent the severe distortions that can occur when reusing block-matching results or performing extrapolation-based side extension. Simulation results show that the proposed method improves the bidirectional similarity value, which measures the quality of video retargeting, by up to 10.26 compared with existing on-time video retargeting methods.
KW - extrapolation
KW - fallback
KW - MAP-based block matching
KW - Video retargeting
UR - http://www.scopus.com/inward/record.url?scp=85081720924&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.2977171
DO - 10.1109/TIP.2020.2977171
M3 - Article
AN - SCOPUS:85081720924
SN - 1057-7149
VL - 29
SP - 4848
EP - 4861
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
M1 - 9025780
ER -