Wireless embedded smart cameras provide flexibility in camera deployment in terms of both the locations and the number of cameras. However, these battery-powered embedded vision sensors have very limited energy, memory, and processing power, making energy consumption and latency two major concerns in wireless embedded camera networks. In multi-camera tracking applications, the amount of data exchanged between cameras affects the tracking accuracy, the energy consumption of the camera nodes, and the latency. In this paper, we provide a detailed quantitative analysis of the accuracy-latency-energy tradeoff for overlapping and non-overlapping camera setups when data packets of different sizes are transferred wirelessly. The experiments have been performed on an actual wireless embedded smart camera network composed of CITRIC motes performing object tracking.