TY - GEN
T1 - Continuous Background Update and Object Detection with Non-static Cameras
AU - Zhao, Yijia
AU - Casares, Mauricio
AU - Velipasalar, Senem
PY - 2008
Y1 - 2008
N2 - Detecting moving objects is an important part of tracking. Most of the previous work on moving object detection concentrates on fixed cameras. Methods using moving cameras seldom address the problem of robustly and continuously updating the background model at all times, including the periods when the camera is not static. We propose a method to build and continuously update a background model, and to detect foreground objects not only when the camera is static but also when it is zooming in/out or panning/tilting. For instance, the model built for the zoomed-in (out) portion of a video is warped to the reference frame of the model of the zoomed-out (in) portion to immediately incorporate changes that occurred in the background, such as objects that are placed or removed. This way, changes are incorporated into the model without requiring a learning period each time the camera zooms in/out. This method addresses the problems of detecting moving objects during the zooming-in and zooming-out periods, detecting objects that are placed in the scene while the camera is non-static, and gradually incorporating an overall illumination change into the scene model. We present experiments covering three different scenarios to demonstrate the success of the proposed method in addressing these issues.
AB - Detecting moving objects is an important part of tracking. Most of the previous work on moving object detection concentrates on fixed cameras. Methods using moving cameras seldom address the problem of robustly and continuously updating the background model at all times, including the periods when the camera is not static. We propose a method to build and continuously update a background model, and to detect foreground objects not only when the camera is static but also when it is zooming in/out or panning/tilting. For instance, the model built for the zoomed-in (out) portion of a video is warped to the reference frame of the model of the zoomed-out (in) portion to immediately incorporate changes that occurred in the background, such as objects that are placed or removed. This way, changes are incorporated into the model without requiring a learning period each time the camera zooms in/out. This method addresses the problems of detecting moving objects during the zooming-in and zooming-out periods, detecting objects that are placed in the scene while the camera is non-static, and gradually incorporating an overall illumination change into the scene model. We present experiments covering three different scenarios to demonstrate the success of the proposed method in addressing these issues.
UR - http://www.scopus.com/inward/record.url?scp=60849106265&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=60849106265&partnerID=8YFLogxK
U2 - 10.1109/AVSS.2008.30
DO - 10.1109/AVSS.2008.30
M3 - Conference contribution
AN - SCOPUS:60849106265
SN - 9780769533414
T3 - Proceedings - IEEE 5th International Conference on Advanced Video and Signal Based Surveillance, AVSS 2008
SP - 309
EP - 316
BT - Proceedings - IEEE 5th International Conference on Advanced Video and Signal Based Surveillance, AVSS 2008
T2 - IEEE 5th International Conference on Advanced Video and Signal Based Surveillance, AVSS 2008
Y2 - 1 September 2008 through 3 September 2008
ER -