TY - GEN
T1 - Frame-level temporal calibration of video sequences from unsynchronized cameras by using projective invariants
AU - Velipasalar, Senem
AU - Wolf, Wayne
PY - 2005
Y1 - 2005
N2 - This paper describes a new method for temporally calibrating multiple cameras by image processing operations. Existing multi-camera algorithms assume that the input sequences are synchronized either by genlock or by time stamp information and a centralized server. Yet, hardware-based synchronization increases installation cost. Hence, using image information is necessary to align frames from the cameras whose clocks are not synchronized. Our method uses image processing to find the frame offset between sequences so that they can be aligned. We track foreground objects, extract a point of interest for each object as its current location, and find the corresponding location of the object in the other sequence by using projective invariants in P2. Our algorithm recovers the frame offset by matching the tracks in different views, and finding the most reliable match out of the possible track pairs. This method does not require information about intrinsic or extrinsic camera parameters, and thanks to information obtained from multiple tracks, is robust to possible errors in background subtraction or location extraction. We present results on different sequences from the PETS2001 database, which show the robustness of the algorithm in recovering the frame offset.
UR - http://www.scopus.com/inward/record.url?scp=33846977023&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33846977023&partnerID=8YFLogxK
DO - 10.1109/AVSS.2005.1577313
M3 - Conference contribution
AN - SCOPUS:33846977023
SN - 0780393856
SN - 9780780393851
T3 - IEEE International Conference on Advanced Video and Signal Based Surveillance - Proceedings of AVSS 2005
SP - 462
EP - 467
BT - IEEE Conference on Advanced Video and Signal Based Surveillance - Proceedings of AVSS 2005
T2 - IEEE Conference on Advanced Video and Signal Based Surveillance, AVSS 2005
Y2 - 15 September 2005 through 16 September 2005
ER -