Automatic Video Object Segmentation Based on Visual and Motion Saliency

Qinmu Peng*, Yiu Ming Cheung

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

We present an approach to automatically extract the salient object in videos. Given an unannotated video sequence, the proposed method first computes visual saliency to identify object-like regions in each frame using the proposed weighted multiple-manifold ranking algorithm. We then compute motion cues to estimate motion saliency and a localization prior. Finally, using a new energy function, we estimate a superpixel-level object labeling across all frames, where 1) the data term depends on the visual saliency and the localization prior, and 2) the smoothness term enforces spatial and temporal constraints. Compared with existing counterparts, the proposed approach automatically segments the persistent foreground object while preserving its potential shape. Experiments on challenging benchmark videos show promising results in comparison with existing methods.
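The ranking step builds on classic manifold ranking on a graph of image regions. The sketch below shows only the standard closed-form manifold ranking of Zhou et al., f = (I − αS)⁻¹y with S the symmetrically normalized affinity matrix, which the paper's weighted multiple-manifold variant extends; the toy chain graph and its weights are illustrative assumptions, not the paper's actual superpixel graph.

```python
import numpy as np

def manifold_ranking(W, y, alpha=0.9):
    """Closed-form manifold ranking: f = (I - alpha*S)^{-1} y,
    where S = D^{-1/2} W D^{-1/2} is the normalized affinity matrix.
    W: symmetric nonnegative affinity matrix, y: query indicator vector."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    # Solve (I - alpha*S) f = y instead of forming the inverse explicitly.
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# Toy example: a chain of 4 "superpixels"; query the first node.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])  # indicator of the query node
f = manifold_ranking(W, y)
print(f)  # ranking scores decay with graph distance from the query
```

In the saliency setting, the queries are typically boundary or seed regions and the resulting scores f are interpreted as per-region saliency, which then feeds the data term of the labeling energy.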

Original language: English
Article number: 8721129
Pages (from-to): 3083-3094
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 21
Issue number: 12
Publication status: Published - Dec 2019

Scopus Subject Areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering

User-Defined Keywords

  • graph model
  • manifold ranking
  • object segmentation
  • visual saliency

