Testing IP Camera Latency

By: John Honovich, Published on Sep 26, 2014

How much does latency impact IP cameras?

We tested a number of combinations, like so:

In this report, we break down:

  • Average latency metrics in our test
  • Key drivers of latency
  • Variations in latency across different systems
  • Variations in latency between local and hosted video

No ****** ******* ****** / ******

*** **** ******* ********** that ** ** *********** and ********* ** **** latency *** ********** ******* or *****. ******* ** clearly * ******* ** the *** ** *** system *** *** ** significantly ******** ** *********** or ****** ** *** given **** ** **** system. *** ********, * camera ***** ** '*** latency' ** ****** *** when ********* ** * certain ******* ** ******** or ****** *******, ******* could ******** ************.

** **** ******, ** do ***** ************ **** our *********. *******, **** in **** **** **** numbers *** **** ********* on **** *****.

Key Findings

**** *** *** *** findings **** *** ****:

  • ** * ******, ******* loaded ***** ******, ******* from ** ****** ** VMS ********* **** *** - *****.
  • **** *** *** ******* factor ** ******* ******* numbers.
  • *** ********, ******* **** many ******* ********* ****** latency ** ******** *************.
  • ******* ****** / *** combinations *** ***** ******.
  • ****** ***** ******* ******* (Dropcam) *** *** ******* than ***** *******, ** 2-3 *******.
  • *****, ********, ***'* ***** and *** **** ** each *********, *** ************* drive *******.

Variances in Latency

*** *********** ********** ** found *** **** ********** in ******* ******** ** some ************ ** ******* and ***** *** *** others. *** ********, ************************* ** ********* *** periodic ****** ** *******, while *** **** ************ ***. *** **** video:

Load and Latency

** **** *****, ** show *** ****** ** opening **** ************ ******* from * ****** *** the ********* ***** ** latency:

*** ***** * ****** will ** ******** ******* on **** *** ********* available ** *** ****** and *** ******* ** the ****** (*********, ******** streams, ***.). ** ****, this ** *** ****** to ********.

Hosted Video Latency

*******, ****** ***** **** to *** ***** *** then ********* **** **, had *********** ******* ******. This****************** ***** *** **** run *** *** *+ second ******* ********.

** ******, *******'* ******* can ****, ********* ** factors ********* ******** ********* from *** ******'* ****, load ** *******'* *******, congestion ** ********** ********, ***.

Comments (37)

We can test other combinations / scenarios. Let us know what ideas you have.

Nicely Done...Interesting piece

Looking forward to a PTZ test version!

In a large building or campus, a camera stream might go through 2 or even 3 switches before reaching the recording server. I wonder how much latency might increase when you put another switch in between.

I doubt that's anything significant compared to the hundreds of milliseconds essentially inherent in this application.

For instance, pinging from Hawaii to the East Coast is ~150ms and that's dozens of hops and thousands of miles.

So even if a few switches in a building added tens of milliseconds, it's probably not a factor.

When the CPU spikes, couldn't it affect the stopwatch too?

If the stopwatch was impacted, we / you would see it on the live side but the stopwatch did not lock up / slow down, etc.

What an interesting way to test this visually. 400ms isn't awful, but visually noticeable. I'm pretty sure anyone can live with that.

John, since you solicited for testing scenarios:

Testing over an 802.11 bridge connection might be interesting since RF is a shared medium and collisions are simply part of the game. Also, testing over a small mesh to observe the latency build-up between each node might be worthwhile. Both of the above would be even more illustrative if you're testing in a dense urban area where collisions are ever more common. PTZ control can be infuriating as the latency builds up. A mesh is the best opportunity to see this in action.

Could changing the stopwatch display to show the system time, and enabling a timestamp overlay (from the VMS) on the recorded frame, provide additional information?

I am not sure what that would tell us additionally.

However, I did find an online stopwatch that goes to 3 decimal places, which I think would be useful. After ASIS, Derek can add some additional test runs to see what that reveals.

A timestamp is probably a bad idea anyway; it might slow things down. Thinking about it, does that mean that whenever your VMS has to burn in a timestamp, it must decode and re-encode every frame even if it is not on live view? That might take a lot of CPU.

John,

Why did you test using 1/10 second granularity? That's an awfully wide leeway (3 frames). When I tested encoders for our VMS evaluations, I used a stopwatch that displayed time in 1/100's of a second. I believe that is a much more accurate measurement.

The point is that a reading of, say, 7.0 seconds "Live" could actually be anywhere between 7.000 and 7.099 seconds, and a reading of 7.3 seconds could be anywhere between 7.300 and 7.399 seconds. 7.3-7.099 = 0.201 and 7.399-7.0 = 0.399, so the actual latency could vary by nearly a factor of two and still measure the same.
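
To make that quantization point concrete, here is a minimal Python sketch (not part of the original test) that computes the range of possible true latencies given two readings truncated to 1/10 second:

```python
# Minimal sketch, assuming the stopwatch truncates to 1/10 s, so a displayed
# value t means the true time is somewhere in [t, t + 0.099].

def latency_bounds(lagging_reading, reference_reading, granularity=0.1):
    """Return (min, max) possible true latency for two truncated readings."""
    nominal = reference_reading - lagging_reading
    spread = granularity - 0.001          # worst-case truncation error
    return nominal - spread, nominal + spread

print(latency_bounds(7.0, 7.3))           # ~(0.201, 0.399) seconds
```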

Carl, as I mentioned in the thread above, going forward we will use an online stopwatch with 3 decimal precision.

That said, your theoretical observation is not in line with the many test runs we did. For example, for Exacq, it was .3 every single time over dozens of tests. If the actual latency varied as much as you opined, we would have seen runs where the stopwatch reported .2 or .4 but we did not.

Why would that be? Your test could have yielded 300ms, 330ms, 380ms, 398ms, etc. And, as you stated, every one would have displayed .3 seconds.

By the way, with a 3-digit stopwatch, I would bet that the third digit (milliseconds) will be just a blur. Unless, of course, you use a fast shutter.

John, of course our testing was aimed at PTZ control. We found that >200ms made PTZs tough to control when trying to follow fast-moving objects like people running and moving vehicles. Even a 20-30ms difference was noticeable.

The best systems yielded latencies between 140ms and 170ms while the worst were much higher. Pelco Endura encoders, for instance, yielded ~330ms while Avigilon encoders yielded >500ms. Dallmeier and IndigoVision were both better than 150ms.

We also tested standalone encoders and the lowest latencies were yielded by Bosch and Axis. On a related note, we also tested codecs, since the Bosch X1600XF encoder could run baseline, main and high. Adding 'B' frames and running higher-level codecs increased latency appreciably.

Originally above, you claimed a variance of nearly 200ms, e.g., "7.3-7.099 = 0.201 and 7.399-7.0 = 0.399"

Now you are claiming a variance of 1/2 that - 100ms (300ms, 330ms, 380ms, 398ms etc.)

There is some variance but, even as you now acknowledge, it is less than 100ms.

Again, if the variance of latency was significant, we would have had some runs where the stopwatch returned .4 or .2 but it did not.

Agreed: late night - math error. I should have said 7.301 to 7.399. Still, 3 frames at 30fps.

However, 300ms is not good latency when it comes to controlling PTZs. My comment to Avigilon was that with their >500ms latency, and their system control "runon" (whereby, upon release of the joystick, the PTZ continued moving for at least another second: 500ms + 500ms), we would have trouble following a little old lady using a walker.

I am not saying 300ms is good latency. I am saying that was what it was in our tests. As the VMS video above shows, it could certainly be even worse than 300ms.

Btw, as I think we both would agree, PTZ control is a lot more complicated than fixed camera latency because it depends on how VMSes process and respond to the PTZ commands being sent, which can add even more latency to the operation.

We never concerned ourselves with fixed camera latency. After all, nothing is so time-critical that even a second would matter. In fact, IndigoVision playback can be up to 4 seconds behind "Live". I believe that is due to their 4-second GOP size. We've been using the system for over a year and never had an issue with the delay preventing us from responding quickly to an event.

How would you measure control latency? That was an issue I struggled with and gave up. In any event, it didn't appear to be an issue during our tests. Other than Avigilon's runon, we never observed an issue that wasn't directly related to video latency.

That said, we only tested analog fixed camera latency through encoders - both manufacturers' own and third party. Since we use the same encoders for both fixed and PTZ analog cameras, I believe our testing was relevant.

With IndigoVision deployed, we have taken the opportunity to test IP PTZs, but only by feel. IV's 9000-series 4SIF, 11000-series 720p and 12000-series 1080p PTZs all have acceptable, though unquantified, latency. My guess, based on our experiences during VMS/encoder tests, is that all three exhibit well under 200ms bi-directional latency.

We've also tested Bosch, Pelco, Sony, Vitek and JVC PTZs. The Bosch, Sony and Pelco PTZs exhibited control issues, whereby motion was not smooth and/or the PTZs also exhibited runon after release of the joystick. The best control, and overall best operation, is/was exhibited by the JVC and IndigoVision's own PTZs. Obviously, IV works with IV, but we were surprised at the poor showing of the other three.

How would you measure control latency?

I am not sure how you can easily segment control latency from video encoding latency. I guess I would measure latency of a stationary PTZ first to get the baseline of video encoding latency and then try to measure what the latency was when panning the PTZ, subtracting the two. I am not sure if that would work though as I have not tried it (though we will in a future test round).
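
For what it's worth, the subtraction idea is at least easy to express, even if its validity is untested; a minimal sketch with purely hypothetical readings:

```python
# Hypothetical sketch of the subtraction approach discussed above, not a
# validated method: measure end-to-end latency with the PTZ stationary
# (video path only), then while panning, and treat the difference as a
# rough estimate of control/command latency.

def estimate_control_latency_ms(latency_while_panning_ms, latency_stationary_ms):
    return latency_while_panning_ms - latency_stationary_ms

print(estimate_control_latency_ms(650, 400))   # hypothetical readings -> 250
```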

Knowing how the control commands are handled is tough because it is not easy to inspect. It might be that PTZ X is slow or inconsistent in sending out the commands. It could be that VMS Y is unoptimized / poor in receiving / processing requests from PTZ X, but good for its own PTZ Y. Worse, it could be a combination of both.

Let's cut to the most important piece here: can this delay get you out of a red light traffic ticket?

Joe,

Interesting question. I wonder if anyone has ever tried to challenge a "red light scamera" ticket on that basis?

As long as the red light and the car are both captured simultaneously / synchronously on the same camera, it does not matter if the delay to record was 5 seconds. The video will still fairly show where the car was when the light turned red.

John,

It was my understanding that red light cameras (scameras) are typically set for a delay between the red light and photo capture. The local jurisdiction was allowed to choose the length of that delay and, as I recall, some were setting it so tight that drivers who were past the trigger when the light turned red were given tickets, even though the light was yellow when they actually entered the intersection.

I seem to recall that there was a big stink about that and a number of tickets were thrown out of court until the jurisdiction lengthened the delay between light changes and photo capture. Of course, there was another big stink raised when it was discovered that jurisdictions were not following accepted standards for length of yellow light versus speed limit. In some cases, it was even proven that jurisdictions deliberately shortened yellow lights in order to maximize income.

"some were setting it so tight that drivers who were past the trigger when the light turned red were given tickets, even though the light was yellow when they actually entered the intersection."

I certainly believe that. I am just emphasizing that it is not a video latency issue but a (bad/manipulative) policy decision.

Agreed on the red light stop camera, but how about a rolling stop? In DE they get you for failure to come to a complete stop before turning right on red. Latency could play a part in that, especially since most are connected wirelessly... I would assume the camera companies that implement these would be smart enough to realize they need full fps. I would love to find an out for these; they give you the opportunity to challenge the ticket but never surrender, wasting everyone's time and money.

I could see frame rate being an issue.

However, if it's 30fps and you are going 20mph, in 1/30th of a second you'll only travel about 1 foot (see mph to fps converter).
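
As a rough check of that arithmetic (a quick sketch, with illustrative numbers only):

```python
# Distance a vehicle covers between consecutive frames.
MPH_TO_FEET_PER_SECOND = 5280 / 3600        # 1 mph = ~1.467 ft/s

def feet_per_frame(speed_mph, frame_rate_fps=30):
    return speed_mph * MPH_TO_FEET_PER_SECOND / frame_rate_fps

print(feet_per_frame(20))                    # ~0.98 ft at 20 mph and 30 fps
print(feet_per_frame(20, frame_rate_fps=5))  # ~5.9 ft if the camera drops to 5 fps
```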

Siqura has cameras with a low latency mode; it would be interesting to see how they fare.

Miles, thanks for sharing. Siqura's specs list regular latency at 130ms and low latency mode at 90ms. This, of course, is just camera side, and excludes network / VMS / display.

Given that they are only listing a 40ms gain, I would not expect it to make a major improvement on overall end to end latency.
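
To illustrate why, here is a rough additive latency budget (a sketch; all numbers are hypothetical except the camera-side figures from the Siqura spec above):

```python
# Hypothetical end-to-end latency budget, in milliseconds. Only the
# camera-side figures (130ms regular / 90ms low latency mode) come from
# the Siqura spec quoted above; the rest are illustrative placeholders.
budget_ms = {
    "camera_encode": 130,
    "network": 20,
    "vms_processing": 150,
    "client_decode_and_display": 100,
}

regular_total = sum(budget_ms.values())           # 400 ms
low_latency_total = regular_total - (130 - 90)    # 360 ms
print(regular_total, low_latency_total)           # 400 360 -> roughly a 10% gain
```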

I wonder how latency impacts the safety of cyclists in Scandinavia. Cameras are installed on trucks, and the truck drivers rely on live video from the camera, which is connected to a monitor inside the truck. This should help truck drivers cover the blind spots on the right side of the truck, so accidents can be avoided while the truck is turning right at an intersection.

What about testing analog system latency?

Btw, the connection type can also affect latency so it might be a factor to test in future tests.

For example, using multicast (directly from the camera to the viewing station) will have the shortest latency for some VMS. I know that is the case for Genetec at least, possibly others too.

Disclaimer, I work for Genetec.

Yann, thanks.

I believe that, since it does not go 'through' the VMS / recorder.

That said, ~99% of systems do go 'through'.

All good stuff in this article/thread. One of the great things about IP video is that you can work with the video as data (rather than electrical signals) and do all kinds of fun things with it. Programmers have the ability to read frames off the imager or the network, then stuff them into a buffer--giving them a chance to "get it right" and provide the best quality video, as well as do things that are otherwise quite difficult, like transcoding. A lot of sins can be overcome by buffering. These buffers occur on the camera, in the VMS/recorder, and at the point of rendering to a display.

Unfortunately, along with this flexibility and power comes higher overall latency, as the latency introduced by buffering accumulates throughout the system. It comes into play most commonly when PTZ is involved, when there's a real-time requirement (like using the system for 'video conferencing'), or in some cases when you're scrutinizing the time stamps on individual frames of video (where in the architecture those time stamps are generated, and how that relates to the latency of the video, becomes important).

I think a well designed system needs a "low latency" mode that explicitly minimizes the various buffers for live viewing or PTZ. The result might be some lost frames, but lower overall latency.
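
As a minimal sketch of what such a mode could look like (illustrative only, not any particular VMS's implementation): cap the live-view buffer and discard the oldest frames rather than letting the backlog, and therefore the display delay, grow.

```python
from collections import deque

class LiveViewBuffer:
    """Bounded live-view buffer: trades dropped frames for lower latency."""

    def __init__(self, max_frames=3):
        # A deque with maxlen silently evicts the oldest frame when full,
        # so the queue depth (and the latency it adds) stays bounded.
        self._frames = deque(maxlen=max_frames)

    def push(self, frame):
        self._frames.append(frame)        # may drop the oldest frame

    def next_frame(self):
        return self._frames.popleft() if self._frames else None
```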

I have been working with DVRs with Ethernet connections since 1995 (does anyone remember ASL Remote Watch Pro?), and that equipment in 2002 (Remote Watch Xperience) was the only one that had almost no latency. In those days we tested equipment from Philips, Kalatel, Geovision, etc., and all of them had problems with latency, a real problem for PTZs...

Thought I would share an observation.

While checking out a Vivotek IP8362, I had direct connected it to my computer, figuring that would eliminate any network delays. Latency was 2 to 3 seconds with the browser direct to the camera. Pretty high.

I then connected the camera to the same computer/browser over a bench-level network switch, and the latency dropped to well under half a second. I am not sure I understand yet why the direct connect is slower, nor am I that concerned. I just thought I would share the observation, as I had expected a direct connect to the PC to be as fast as or faster than a network connection via a switch.

This is a great topic. While important in fixed installations, latency is even more critical in mobile applications, particularly in high magnification mobile systems requiring dynamic aim and focus. High latencies can mean that these functions will never converge. For example, suppose your feedback loop is closed with 1/2 second latency (e.g., the operator sees video 1/2 second late while trying to focus). Any control adjustment cannot be sensed until 1/2 second after it occurs. If multiple control adjustments are required (e.g., the first human input does not achieve adequate aim or focus), the process may not converge for several seconds. Over that period, if platform motion changes the focal or aim relationships, then the process will end up always chasing the desired result (lagging), or else it may be unstable (overshoot). These conditions lead to substandard performance (poor focus, poor scene framing) or to underutilization (more time spent aiming and focusing instead of capturing critical scene elements).
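
A toy simulation makes the convergence problem visible (a sketch with illustrative numbers, not data from any test): a proportional controller steering toward a target, but acting on an observation that is half a second old, overshoots and hunts, while the same controller with instant feedback settles cleanly.

```python
# Toy delayed-feedback loop: the "operator" corrects toward a target but
# only sees a position that is `delay_steps` time steps old.

def peak_position(delay_steps, gain=2.0, steps=60, dt=0.1, target=10.0):
    history = [0.0] * (delay_steps + 1)        # positions the operator has seen
    for _ in range(steps):
        seen = history[-(delay_steps + 1)]     # stale observation
        history.append(history[-1] + gain * (target - seen) * dt)
    return max(history)                        # highest position reached

print(peak_position(delay_steps=0))   # ~10.0 -> no overshoot with instant feedback
print(peak_position(delay_steps=5))   # ~16.0 -> 0.5 s of delay causes overshoot
```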

Analog video typically has negligible latency. In contrast to H.264, use of analog video for human pointing and focusing greatly improves optical system utilization and quality. If analog is unavailable, good results may be achieved with raw uncompressed digital video streams. However, this can present challenges because it may be desirable to distribute and archive video in H.264 format, but pulling dual streams can adversely affect latency. For applications in which latency is a critical limiter, this argues in favor of a single raw uncompressed digital video stream from the sensor, with downstream H.264 encoding, even though this approach tends to be a cost driver.

Beyond this, different H.264 encoders have a range of latencies which, if not appreciated, can also lead to costly replacements after the fact.
