This paper describes a new method for temporally calibrating multiple cameras using image processing operations. Existing multi-camera algorithms assume that the input sequences are synchronized, either by genlock or by time-stamp information and a centralized server. Hardware-based synchronization, however, increases installation cost, so image information must be used to align frames from cameras whose clocks are not synchronized. Our method finds the frame offset between sequences so that they can be aligned: we track foreground objects, extract a point of interest for each object as its current location, and find the corresponding location of the object in the other sequence using projective invariants in P^2. The algorithm recovers the frame offset by matching the tracks in different views and selecting the most reliable match among the candidate track pairs. It requires no knowledge of intrinsic or extrinsic camera parameters and, by combining information from multiple tracks, is robust to errors in background subtraction or location extraction. We present results on sequences from the PETS2001 database that demonstrate the robustness of the algorithm in recovering the frame offset.
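
As a rough illustration of the offset-recovery step only (not the paper's actual implementation), the sketch below brute-forces the integer frame offset between two tracks that are assumed to have already been mapped into a common image plane via the projective-invariant correspondence. The function name, the minimum-overlap threshold, and the mean-squared-distance score are our own assumptions.

```python
import numpy as np

def recover_frame_offset(track_a, track_b, max_offset=50):
    """Brute-force search for the integer frame offset d that best
    aligns two object tracks, i.e. track_a[t] ~ track_b[t + d].

    track_a, track_b: (N, 2) arrays of per-frame object locations,
    assumed already expressed in a common image plane (in the paper
    this correspondence comes from projective invariants in P^2).
    """
    best_offset, best_err = 0, np.inf
    for d in range(-max_offset, max_offset + 1):
        # Shift one track against the other and keep the overlap.
        a = track_a if d >= 0 else track_a[-d:]
        b = track_b[d:] if d >= 0 else track_b
        n = min(len(a), len(b))
        if n < 5:  # too little overlap to score reliably
            continue
        # Mean squared distance over the overlapping frames.
        err = np.mean(np.sum((a[:n] - b[:n]) ** 2, axis=1))
        if err < best_err:
            best_offset, best_err = d, err
    return best_offset
```

In the paper this score would be computed per track pair, with the most reliable pair determining the final offset; here a single pair is scored for simplicity.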