Testing IP Camera Latency

Published Sep 26, 2014 04:00 AM

*** **** **** ******* ****** ** cameras?

** ****** * ****** ** ************, like **:

** **** ******, ** ***** ****:

  • ******* ******* ******* ** *** ****
  • *** ******* ** *******
  • ********** ** ******* ****** ********* *******
  • ********** ** ******* ******* ***** *** hosted *****

No ****** ******* ****** / ******

*** **** ******* ********** **** ** is *********** *** ********* ** **** latency *** ********** ******* ** *****. Latency ** ******* * ******* ** the *** ** *** ****** *** can ** ************* ******** ** *********** or ****** ** *** ***** **** of **** ******. *** ********, * camera ***** ** '*** *******' ** itself *** **** ********* ** * certain ******* ** ******** ** ****** machine, ******* ***** ******** ************.

** **** ******, ** ** ***** measurements **** *** *********. *******, **** in **** **** **** ******* *** vary ********* ** **** *****.

Key ********

**** *** *** *** ******** **** the ****:

  • ** * ******, ******* ****** ***** system, ******* **** ** ****** ** VMS ********* **** *** - *****.
  • **** *** *** ******* ****** ** spiking ******* *******.
  • *** ********, ******* **** **** ******* requested ****** ******* ** ******** *************.
  • ******* ****** / *** ************ *** cause ******.
  • ****** ***** ******* ******* (*******) *** far ******* **** ***** *******, ** 2-3 *******.
  • *****, ********, ***'* ***** *** *** load ** **** *********, *** ************* drive *******.

Variances ** *******

*** *********** ********** ** ***** *** that ********** ** ******* ******** ** some ************ ** ******* *** ***** but *** ******. *** ********, ************************* ** ********* *** ******** ****** of *******, ***** *** **** ************ ***. *** **** *****:

Load *** *******

** **** *****, ** **** *** impact ** ******* **** ************ ******* from * ****** *** *** ********* spike ** *******:

*** ***** * ****** **** ** impacted ******* ** **** *** ********* available ** *** ****** *** *** demands ** *** ****** (*********, ******** streams, ***.). ** ****, **** ** not ****** ** ********.

Hosted ***** *******

*******, ****** ***** **** ** *** cloud *** **** ********* **** **, had *********** ******* ******. ********************** ***** *** **** *** *** the *+ ****** ******* ********.

** ******, *******'* ******* *** ****, depending ** ******* ********* ******** ********* from *** ******'* ****, **** ** Dropcam's *******, ********** ** ********** ********, ***.

Comments (41)
John Honovich
Sep 26, 2014
IPVM

We can test other combinations / scenarios. Let us know what ideas you have.

Tim Sisk
Sep 26, 2014
IPVMU Certified

Nicely done... interesting piece.

Joe Mirolli
Sep 26, 2014
IPVMU Certified
Looking forward to a PTZ test version!
Luis Carmona
Sep 26, 2014
Geutebruck USA • IPVMU Certified

In a large building or campus, a camera stream might go through 2 or even 3 switches before reaching the recording server. I wonder how much it might increase when you put another switch in between.

John Honovich
Sep 26, 2014
IPVM

I doubt that's anything significant compared to the hundreds of milliseconds essentially inherent in this application.

For instance, pinging from Hawaii to the East Coast is ~150ms and that's dozens of hops and thousands of miles.

So even if a few switches in a building added tens of milliseconds, it's probably not a factor.

George Whittaker
Sep 26, 2014

When the CPU spikes, couldn't it affect the stopwatch too?

John Honovich
Sep 26, 2014
IPVM

If the stopwatch was impacted, we / you would see it on the live side but the stopwatch did not lock up / slow down, etc.

Pat Villerot
Sep 26, 2014

What an interesting way to test this visually. 400ms isn't awful, but it is visually noticeable. I'm pretty sure anyone can live with that.

John, since you solicited for testing scenarios:

Testing over an 802.11 bridge connection might be interesting, since RF is a shared medium and collisions are simply part of the game. Also, testing over a small mesh to observe the latency build-up between each node might be worthwhile. Both of the above would be even more illustrative if you're testing in a dense urban area where collisions are ever more common. PTZ control can be infuriating as the latency builds up. A mesh is the best opportunity to see this in action.

George Whittaker
Sep 26, 2014

Could changing the stopwatch display to show the system time, and enabling a timestamp overlay (from the VMS) on the recorded frame, provide additional information?

John Honovich
Sep 26, 2014
IPVM

I am not sure what that would tell us additionally.

However, I did find an online stopwatch that goes to 3 decimal places, which I think would be useful. After ASIS, Derek can add some additional test runs to see what that reveals.

George Whittaker
Sep 27, 2014

A timestamp is probably a bad idea anyway; it might slow things down. Thinking about it, does that mean that whenever your VMS has to burn in a timestamp, it must decode and re-encode every frame even if it is not on live view? That might take a lot of CPU.

Carl Lindgren
Sep 27, 2014

John,

Why did you test using 1/10 second granularity? That's an awfully wide leeway (3 frames). When I tested encoders for our VMS evaluations, I used a stopwatch that displayed time in 1/100ths of a second. I believe that is a much more accurate measurement.

The point is that a reading of, say 7.0 seconds "Live" could actually be anywhere between 7.00 to 7.099 seconds and a reading of 7.3 seconds could be anywhere between 7.30 seconds and 7.399 seconds. 7.3-7.099 = 0.201 and 7.399-7.0 = 0.399 so the actual latency could vary over 98% and still measure the same.
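
A minimal Python sketch of this quantization argument (the function and variable names are illustrative; the readings are Carl's example numbers):

```python
# With a display granularity of 0.1 s, a shown value r means the true
# time is somewhere in [r, r + 0.1).

GRANULARITY = 0.1  # stopwatch display resolution, in seconds

def true_latency_bounds(later_reading: float, earlier_reading: float):
    """Bounds on the true latency when both readings are truncated to 0.1 s."""
    displayed = later_reading - earlier_reading
    return displayed - GRANULARITY, displayed + GRANULARITY

# Carl's example: one display shows 7.3 s while the other still shows 7.0 s.
lo, hi = true_latency_bounds(7.3, 7.0)
print(f"displayed difference: 0.300 s, true latency between {lo:.3f} and {hi:.3f} s")
# -> roughly 0.200 to 0.400 s, matching Carl's 0.201 and 0.399
```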

John Honovich
Sep 27, 2014
IPVM

Carl, as I mentioned in the thread above, going forward we will use an online stopwatch with 3 decimal precision.

That said, your theoretical observation is not in line with the many test runs we did. For example, for Exacq, it was .3 every single time over dozens of tests. If the actual latency varied as much as you opined, we would have seen runs where the stopwatch reported .2 or .4 but we did not.

Carl Lindgren
Sep 27, 2014

Why would that be? Your test could have yielded 300ms, 330ms, 380ms, 398ms, etc. And, as you stated, every one would have displayed .3 seconds.

By the way, with a 3-digit stopwatch, I would bet that the third digit (milliseconds) will be just a blur. Unless, of course, you use a fast shutter.

John, of course our testing was aimed at PTZ control. We found that >200ms made PTZs tough to control when trying to follow fast-moving objects like people running and moving vehicles. Even a 20-30ms difference was noticeable.

The best systems yielded latencies between 140ms and 170ms while the worst were much higher. Pelco Endura encoders, for instance, yielded ~330ms while Avigilon encoders yielded >500ms. Dallmeier and IndigoVision were both better than 150ms.

We also tested standalone encoders and the lowest latencies were yielded by Bosch and Axis. On a related note, we also tested codecs, since the Bosch X1600XF encoder could run baseline, main and high. Adding 'B' frames and running higher-level codecs increased latency appreciably.

John Honovich
Sep 27, 2014
IPVM

Originally above, you claimed a variance of nearly 200ms, e.g., "7.3-7.099 = 0.201 and 7.399-7.0 = 0.399"

Now you are claiming a variance of 1/2 that - 100ms (300ms, 330ms, 380ms, 398ms etc.)

There is some variance but, as you now acknowledge, it is less than 100ms.

Again, if the variance of latency was significant, we would have had some runs where the stopwatch returned .4 or .2 but it did not.

Carl Lindgren
Sep 27, 2014

Agreed: late-night math error. I should have said 7.301 to 7.399. Still, 3 frames at 30fps.

Carl Lindgren
Sep 27, 2014

However, 300ms is not good latency when it comes to controlling PTZs. My comment to Avigilon was that with their >500ms latency (and their system control "runon", whereby upon release of the joystick the PTZ continued moving for at least another second (500ms + 500ms)), we would have trouble following a little old lady using a walker.

John Honovich
Sep 27, 2014
IPVM

I am not saying 300ms is good latency. I am saying that was what it was in our tests. As the VMS video above shows, it could certainly be even worse than 300ms.

Btw, as I think we both would agree, PTZ control is a lot more complicated than fixed camera latency because it depends on how VMSes process and respond to the PTZ commands being sent, which can add even more latency to the operation.

Carl Lindgren
Sep 27, 2014

We never concerned ourselves with fixed camera latency. After all, nothing is so time-critical that even a second would matter. In fact, IndigoVision playback can be up to 4 seconds behind "Live". I believe that is due to their 4-second GOP size. We've been using the system for over a year and never had an issue with the delay preventing us from responding quickly to an event.
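
A back-of-envelope reading of the GOP explanation (waiting for a keyframe is one plausible mechanism here, not IndigoVision's documented behavior; the frame rate is assumed):

```python
# A decoder can only begin at an I-frame (keyframe), so the worst-case
# delay to join or replay a stream is roughly one full GOP.

gop_seconds = 4.0   # the reported GOP length
fps = 30            # assumed frame rate
frames_per_gop = int(gop_seconds * fps)

print(f"worst-case wait for a keyframe: {gop_seconds:.1f} s ({frames_per_gop} frames)")
```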

Carl Lindgren
Sep 27, 2014

How would you measure control latency? That was an issue I struggled with and gave up on. In any event, it didn't appear to be an issue during our tests. Other than Avigilon's runon, we never observed an issue that wasn't directly related to video latency.

That said, we only tested analog fixed camera latency through encoders - both manufacturers' own and third party. Since we use the same encoders for both fixed and PTZ analog cameras, I believe our testing was relevant.

With IndigoVision deployed, we have taken the opportunity to test IP PTZs, but only by feel. IV's 9000-series 4SIF, 11000-series 720p and 12000-series 1080p PTZs all have acceptable, though unquantified, latency. My guess, based on our experiences during VMS/encoder tests, is that all three exhibit well under 200ms bi-directional latency.

We've also tested Bosch, Pelco, Sony, Vitek and JVC PTZs. The Bosch, Sony and Pelco PTZs exhibited control issues, whereby motion was not smooth and/or the PTZs also exhibited runon after release of the joystick. The best control, and overall best operation, is/was exhibited by the JVC and IndigoVision's own PTZs. Obviously, IV works with IV, but we were surprised at the poor showing of the other three.

John Honovich
Sep 27, 2014
IPVM

How would you measure control latency?

I am not sure how you can easily segment control latency from video encoding latency. I guess I would measure latency of a stationary PTZ first to get the baseline of video encoding latency and then try to measure what the latency was when panning the PTZ, subtracting the two. I am not sure if that would work though as I have not tried it (though we will in a future test round).
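
A minimal sketch of that subtraction approach (all numbers are hypothetical placeholders, not measurements):

```python
def control_latency(ptz_total_latency: float, baseline_video_latency: float) -> float:
    """Estimate control latency by subtracting the stationary-camera (video
    encode/transmit/decode) latency from the total joystick-to-motion latency."""
    return ptz_total_latency - baseline_video_latency

baseline = 0.300  # measured with the PTZ stationary: video path only, seconds
total = 0.750     # measured from joystick move to visible motion, seconds
print(f"estimated control latency: {control_latency(total, baseline):.3f} s")
```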

Knowing how the control commands are handled is tough because it is not easy to inspect. It might be that PTZ X is slow or inconsistent in sending out the commands. It could be that VMS Y is unoptimized / poor in receiving / processing requests from PTZ X, but good for its own PTZ Y. Worse, it could be a combination of both.

Joe Mirolli
Sep 27, 2014
IPVMU Certified
Let's cut to the most important piece here: can this delay get you out of a red light traffic ticket?
Carl Lindgren
Sep 27, 2014

Joe,

Interesting question. I wonder if anyone has ever tried to challenge a "red light scamera" ticket on that basis?

John Honovich
Sep 27, 2014
IPVM

As long as the red light and the car are both captured simultaneously / synchronously on the same camera, it does not matter if the delay to record was 5 seconds. The video will still fairly show where the car was when the light turned red.

Carl Lindgren
Sep 27, 2014

John,

It was my understanding that red light cameras (scameras) are typically set for a delay between the red light and photo capture. The local jurisdiction was allowed to choose the length of that delay and, as I recall, some were setting it so tight that drivers who were past the trigger when the light turned red were given tickets, even though the light was yellow when they actually entered the intersection.

I seem to recall that there was a big stink about that and a number of tickets were thrown out of court until the jurisdiction lengthened the delay between light changes and photo capture. Of course, there was another big stink raised when it was discovered that jurisdictions were not following accepted standards for length of yellow light versus speed limit. In some cases, it was even proven that jurisdictions deliberately shortened yellow lights in order to maximize income.

John Honovich
Sep 27, 2014
IPVM

"some were setting it so tight that drivers who were past the trigger when the light turned red were given tickets, even though the light was yellow when they actually entered the intersection."

I certainly believe that. I am just emphasizing that it is not a video latency issue but a (bad/manipulative) policy decision.

Joe Mirolli
Sep 27, 2014
IPVMU Certified
Agreed on the red light stop camera. How about with a rolling stop? In DE they get you for failure to come to a complete stop before turning right on red. Latency could play a part in that, especially since most are connected wirelessly... I would assume the camera companies that implement these would be smart enough to realize they need full fps. I would love to find an out for these; they give you the opportunity to challenge the ticket but never surrender, wasting everyone's time and money.
John Honovich
Sep 27, 2014
IPVM

I could see frame rate being an issue.

However, if it's 30fps and you are going 20mph, in 1/30th of a second you'll only travel about 1 foot (see mph to fps converter).
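
The arithmetic, worked out (a small sketch; the speed and frame rate are just the example values above):

```python
def feet_per_frame(speed_mph: float, fps: float) -> float:
    """Distance a vehicle travels during one frame interval, in feet."""
    feet_per_second = speed_mph * 5280 / 3600  # 1 mile = 5280 ft, 1 hour = 3600 s
    return feet_per_second / fps

print(f"{feet_per_frame(20, 30):.2f} ft per frame")  # -> 0.98 ft, roughly 1 foot
```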

Miles Davies
Sep 30, 2014

Siqura has cameras with a low latency mode; it would be interesting to see how they fare.

John Honovich
Oct 02, 2014
IPVM

Miles, thanks for sharing. Siqura's specs list regular latency at 130ms and low latency mode at 90ms. This, of course, is just camera side, and excludes network / VMS / display.

Given that they are only listing a 40ms gain, I would not expect it to make a major improvement on overall end-to-end latency.

Wahid Faizzad
Oct 07, 2014

I wonder how latency impacts the safety of cyclists in Scandinavia. Cameras are installed on trucks, and the truck drivers rely on live video from the camera, which is connected to a monitor inside the truck. This should help truck drivers cover the blind spots on the right side of the truck, so accidents can be avoided while the truck is turning right at an intersection.

What about testing analog system latency?

Yann McCready
Oct 09, 2014

Btw, the connection type can also affect latency, so it might be a factor to consider in future tests.

For example, using multicast (directly from the camera to the viewing station) will have the shortest latency for some VMSes. I know that is the case for Genetec at least, possibly others too.

Disclaimer, I work for Genetec.

John Honovich
Oct 09, 2014
IPVM

Yann, thanks.

I believe that, since it does not go 'through' the VMS / recorder.

That said, ~99% of systems do go 'through'.

Steve Mitchell
Oct 13, 2014

All good stuff in this article/thread. One of the great things about IP video is that you can work with the video as data (rather than electrical signals) and do all kinds of fun things with it. Programmers have the ability to read frames off the imager or the network, then stuff them into a buffer, giving them a chance to "get it right" and provide the best quality video, as well as do things otherwise quite difficult, like transcoding. A lot of sins can be overcome by buffering. These buffers occur on the camera, in the VMS/recorder, and at the point of rendering to a display.

Unfortunately, along with this flexibility and power comes higher overall latency, as the latency introduced by buffering accumulates throughout the system. It comes into play most commonly when PTZ is involved, when there's a real-time requirement (like using the system for 'video conferencing'), or in some cases when you're scrutinizing the time stamps on individual frames of video (where in the architecture those time stamps are generated, and how that relates to the latency of the video, becomes important).

I think a well-designed system needs a "low latency" mode that explicitly minimizes the various buffers for live viewing or PTZ. The result might be some lost frames, but lower overall latency.
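
A minimal sketch of that trade-off (illustrative only, not any particular VMS's implementation): a bounded buffer that drops the oldest frame rather than letting queueing delay grow.

```python
from collections import deque

class LowLatencyFrameBuffer:
    """Holds at most max_frames; pushing onto a full buffer discards the oldest."""

    def __init__(self, max_frames: int = 2):
        self.frames = deque(maxlen=max_frames)  # deque drops the oldest on overflow
        self.dropped = 0

    def push(self, frame) -> None:
        if len(self.frames) == self.frames.maxlen:
            self.dropped += 1  # a frame is lost, but queueing delay stays bounded
        self.frames.append(frame)

    def pop(self):
        return self.frames.popleft() if self.frames else None

# With max_frames=2, queueing delay is capped at ~2 frame intervals
# (about 66 ms at 30 fps), at the cost of occasional dropped frames.
```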

Carlos Espinoza
Oct 16, 2014

I have been working with DVRs with Ethernet connections since 1995 (does anyone remember ASL Remote Watch Pro?), and that equipment in 2002 (Remote Watch Xperience) was the only one that had almost no latency. In those days we tested equipment from Philips, Kalatel, Geovision, etc., and all of them had problems with latency, a real problem for PTZs...

Undisclosed Integrator #1
Oct 26, 2014

Thought I would share an observation.

While checking out a Vivotek IP8362, I had direct-connected it to my computer, figuring that would eliminate any network delays. Latency was 2 to 3 seconds with the browser direct to the camera. Pretty high.

I then connected the camera to the same computer/browser over a bench-level network switch, and the latency dropped to well less than half a second. I am not sure I understand yet why the direct connection is slower, nor am I that concerned. I just thought I would share the observation, as I had expected a direct connection to a PC to be as fast as or faster than a network connection via a switch.

Horace Lasell
Nov 26, 2014

This is a great topic. While important in fixed installations, latency is even more critical in mobile applications, particularly in high magnification mobile systems requiring dynamic aim and focus. High latencies can mean that these functions will never converge. For example, suppose your feedback loop is closed with 1/2 second latency (e.g. the operator sees video 1/2 second late while trying to focus). Any control adjustment cannot be sensed until 1/2 second after it occurs. If multiple control adjustments are required (e.g. the first human input does not achieve adequate aim or focus), the process may not converge for several seconds. Over that period, if platform motion changes the focal or aim relationships, then the process will end up always chasing the desired result (lagging), or else it may be unstable (overshoot). These conditions lead to substandard performance (poor focus, poor scene framing) or else to under-utilization (more time spent aiming and focusing instead of capturing critical scene elements).
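
A toy simulation of that effect (the gain, step counts, and function are invented for illustration): a proportional control loop where the operator reacts to a stale observation of the position.

```python
def simulate(delay_steps: int, gain: float = 0.5, steps: int = 40) -> list:
    """Drive position toward a target, reacting to a delayed observation."""
    target, position = 10.0, 0.0
    history = [position]
    for _ in range(steps):
        # The controller only sees the position as it was delay_steps ago.
        observed = history[max(0, len(history) - 1 - delay_steps)]
        position += gain * (target - observed)
        history.append(position)
    return history

print("no delay:   ", [round(p, 1) for p in simulate(0)][:10])
print("15-step lag:", [round(p, 1) for p in simulate(15)][:10])
# With no delay the loop settles smoothly at 10. With a 15-step lag
# (~0.5 s at 30 steps/s) it shoots far past the target, and at this
# gain the loop never settles at all.
```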

Analog video typically has negligible latency. In contrast to H.264, use of analog video for human pointing and focusing greatly improves optical system utilization and quality. If analog is unavailable, good results may be achieved with raw uncompressed digital video streams. However, this can present challenges because it may be desirable to distribute and archive video in H.264 format, but pulling dual streams can adversely affect latency. For applications in which latency is a critical limiter, this argues in favor of a single raw uncompressed digital video stream from the sensor, with downstream H.264 encoding, even though this approach tends to be a cost driver.

Beyond this, different H.264 encoders have a range of latencies which, if not appreciated, can also lead to costly replacements after the fact.

Undisclosed Manufacturer #2
Oct 28, 2020

Hi John, any chance to see a new test?

John Honovich
Oct 28, 2020
IPVM

I don't see any reason for major changes in latency, except for cloud, which is something we check regularly on new tests, e.g., Verkada 2020 Cameras Image Quality Test

John Nowacki
Nov 14, 2020
Director/ Chairman @ JN-SECURITY-SERVICES-AUST-NZ

Hi John H., very good article and testing methods.

Could you please perform the same tests on a Linux-based VMS / server platform?

It would be interesting to gauge the latency difference between a Windows-based VMS and a Linux-based VMS.

All the best,

JN

Michael Friends
Nov 17, 2021

Very useful document.