A Framework for Background Detection in Video
Laiyun Qing 1,2, Weiqiang Wang 2, Tiejun Huang 1, and Wen Gao 1,2
1 Graduate School of Chinese Academy of Sciences, Beijing, P. R. China, 100039
2 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, P. R. China, 100080
{lyqing, wqwang, tjhuang, wgao}@
Abstract. This paper presents a framework for background detection in video.
Key frames are extracted to capture background changes and to reduce the
amount of data to process; the content of each key frame is then analyzed to
determine whether it contains background of interest. Key frames are extracted
with a time-constrained clustering algorithm, and background detection in a
key frame is performed with color and texture cues. Because illumination
varies considerably in natural scenes, color is modeled with three sub-models:
strong light, normal light, and weak light. The connectivity of background
pixels is exploited to reduce the computational cost of the texture analysis.
Experimental results show that the framework detects background simply and
efficiently.
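As an illustration of the key-frame extraction step, the sketch below groups consecutive frames by the similarity of their color histograms, starting a new cluster whenever a frame drifts too far from the running cluster centroid, and keeps one representative frame per cluster. This is only a minimal sketch under assumptions, not the authors' implementation: the HSV histogram signature, the L1 distance, the threshold `dist_thresh`, and the use of OpenCV are all assumed here.

```python
# Minimal sketch of time-constrained key-frame extraction.
# Consecutive frames are merged into the current cluster while their colour
# histograms stay close to the cluster centroid; otherwise a new cluster is
# started and its first frame is kept as a key frame.  Histogram size,
# distance threshold and OpenCV usage are assumptions, not the paper's setup.

import cv2
import numpy as np

def extract_key_frames(video_path, dist_thresh=0.3, bins=16):
    cap = cv2.VideoCapture(video_path)
    key_frames = []        # one representative frame per cluster
    centroid = None        # running mean histogram of the current cluster
    count = 0              # number of frames in the current cluster

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Coarse HSV colour histogram as the frame signature.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins],
                            [0, 180, 0, 256]).flatten()
        hist /= hist.sum() + 1e-8

        if centroid is None or np.abs(hist - centroid).sum() > dist_thresh:
            # Too different from the current cluster: start a new cluster
            # and keep this frame as its key frame.
            key_frames.append(frame)
            centroid, count = hist, 1
        else:
            # Frame joins the current cluster; update the centroid.
            centroid = (centroid * count + hist) / (count + 1)
            count += 1

    cap.release()
    return key_frames
```

Because only temporally consecutive frames are compared, the clustering respects the time constraint by construction: distant frames with similar content still start separate clusters, which is what allows recurring backgrounds to be re-detected later in the stream.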
1 Introduction
Automatic event detection in video is becoming increasingly important as the
volume of digital video grows. Many projects on event detection focus on
tracking foreground objects, such as faces and cars [1][2][3]; the main
information used is the motion between frames. In some cases, however, one may
want to find a clip of a football game in a video stream, and a good retrieval
representation is then the grass background plus a football. In this paper, we