Measuring Minimum And Maximum Bandwidth Consumption

I'm about to start an experiment where the minimum and maximum bandwidth consumption of a couple of different camera models will be measured. For that, I'm considering two different scenarios:

1) Minimum bandwidth consumption: Cameras placed indoors with a Lux value above 1000 and facing a white wall with no movement whatsoever.

2) Maximum bandwidth consumption: Cameras placed indoors with a Lux value below 5 and facing a white wall while a mini laser stage lighting device projects erratic pulses of light on the wall. (Pretty much the same idea used by you, John, in your experiments).

Frame rate values, CODECs, and image quality values will be varied during the experiment. My question: am I missing something, or is the experiment above the way to go?

Kind regards, Tiago

If you are only going to do 2, those would be it. The strobe light test (as long as the strobe is moving fast) should simulate the bandwidth impact of a high motion scene.

The third one I would add is low light. Go under 1 lux and measure bandwidth. This will show how the camera's gain settings and encoding change in these extreme conditions. A lot of cameras' bandwidth consumption blows up in this scenario and can create nasty surprises. When doing so, make sure you adjust the shutter speeds to be normalized and/or appropriate for your scenes. See our indoor camera shootout for charts and examples of bandwidth tests.

Here's one chart from that report:


Hi John,

Great tip about adjusting the shutter speeds. I had not thought about that before! I will also reduce the lux to as close to 1 as possible. The room where the tests are planned to take place gets some sunlight.

Thanks for the link to the IPVM report. Great stuff!



You also might try covering a very dark area or pure black with AGC disabled. My experience is that high brightness also tends to kick up the bitrate, although low light bitrates using AGC are even higher, due to AGC noise.

If you turn off the AGC in a 'very dark area', your camera will capture nothing. The bandwidth will definitely decrease but the image will be totally worthless. See our AGC test report.

And if you aim the camera at a white wall you also capture nothing and the image will also be totally worthless. Ipso facto.

Aiming the camera at a white wall is a proxy for putting it in an empty hallway, stairwell or office, all legitimate surveillance use cases. Turning off gain control in a 'very dark scene' is essentially the equivalent of turning the camera off.

Curious ... Are there other effects in a "static" environment that might cause a white wall to not be a good proxy for that environment? e.g. Fluorescent light strobing, convection air currents and light refraction, insects or other critters?

There can be some environmental differences that have an impact on bandwidth consumption. Light strobing, probably not. Air currents, maybe but only if they make some object move or sway (like curtains). Light refraction from an external light source, yes. Insects, if there were a lot of them.

Yes, but it is also likely to run at a higher bitrate than a camera with no light and no noise (typically amplified by AGC). He wants the minimum. I contend my method will give him that, while all white won't.

Why turn off the AGC? Just leave the lens cap on then :) How about that method?

Lens cap on plus AGC = a boatload of noise, accompanied by high bitrate. Lens cap on plus AGC Off = minimum bitrate. Yes, I agree.

To be fair, "Sens up" and DSS should also be "Off".

Gentlemen, thanks for the discussion. For the sake of curiosity, I'll also try the settings suggested by Carl. Once I've finished the experiment, I'll report the results back in this thread.

I would like to mention that the color of the wall is irrelevant to me AS LONG AS it is uniform across the camera's field of view. As far as I understand, a video frame with resolution H × W containing a vast majority of pixels of the same color (that is, with the same bit representation at the given color depth) will be easier to compress than a video frame containing many different colors.

I understand John's concern regarding a very dark area making the camera not capture "anything", since there is no or little light hitting the camera's sensor. On the other hand, I also understand that the camera sensor tends to create a very noisy (grainy) image in low light conditions, which may result in a video frame that is hard to compress. Therefore, I'll experiment with Carl's observation as well, to be sure I'm capturing both minimum and maximum bandwidth consumption.
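The intuition above (uniform frames compress well, noisy frames don't) can be sketched with a quick experiment, using zlib as a crude stand-in for a video encoder (a real CODEC also exploits temporal redundancy, but the spatial effect is the same). The frame size and pixel values are illustrative assumptions, not from any particular camera:

```python
import random
import zlib

# Hypothetical 640x480 8-bit grayscale frames, 1 byte per pixel.
W, H = 640, 480

# "White wall" frame: every pixel identical -> highly redundant.
uniform_frame = bytes([200]) * (W * H)

# "Low-light" frame: pseudo-random pixel values simulating sensor noise
# (the kind AGC amplifies) -> almost no redundancy to exploit.
random.seed(42)
noisy_frame = bytes(random.randrange(256) for _ in range(W * H))

uniform_size = len(zlib.compress(uniform_frame))
noisy_size = len(zlib.compress(noisy_frame))

print(f"uniform frame: {len(uniform_frame)} -> {uniform_size} bytes")
print(f"noisy frame:   {len(noisy_frame)} -> {noisy_size} bytes")
```

The uniform frame collapses to a few hundred bytes while the noisy frame barely shrinks at all, which is the per-frame mechanism behind the low-light bitrate blow-up discussed above.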

Once more, thanks for raising the concerns above.

I don't know if this suits your purposes (what kind of "maximum" you are looking for), but if the particular cameras offer it you could also send completely uncompressed (raw) video. It's been a few years since I fiddled with this, but it was something we used to do with new models of IP cameras to get a "worst case" continuous maximum bandwidth consumption.

Mostly we did it not for assessing bandwidth use, but for stress testing network gear and VMS systems.
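For a sense of why raw video is such an effective stress test, a back-of-the-envelope calculation helps. The resolution, bit depth, and frame rate below are illustrative assumptions, not tied to any specific camera:

```python
# Back-of-the-envelope raw (uncompressed) video bandwidth,
# assuming 24-bit RGB with no chroma subsampling.
width, height = 1920, 1080   # pixels (assumed 1080p)
bits_per_pixel = 24          # 8 bits per RGB channel
fps = 30                     # frames per second

bits_per_second = width * height * bits_per_pixel * fps
mbps = bits_per_second / 1e6
print(f"Raw 1080p30 stream: ~{mbps:.0f} Mbps")
```

A single uncompressed 1080p30 stream comes out to roughly 1.5 Gbps, enough to saturate a gigabit link on its own, which is why it makes a handy "worst case" for network and VMS stress testing.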

Many low end network switches have low processing capacity, such as a switch with eight 1 Gbps ports but only 2 Gbps of switching capacity. You could only max out any one port at a time with a continuous high-bandwidth stream. As you already know, most computer networks have bursty traffic patterns, which is not the case with most surveillance video.
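The oversubscription described above can be made concrete with a trivial calculation (the port count and capacities are the hypothetical figures from the post, not a specific switch model):

```python
# Sketch of a backplane-limited switch: eight 1 Gbps ports
# but only 2 Gbps of total switching capacity (hypothetical figures).
ports = 8
port_speed_gbps = 1.0
backplane_gbps = 2.0

# With continuous line-rate streams (typical of surveillance video,
# unlike bursty office traffic), only this many ports can run at
# full rate simultaneously before the backplane saturates:
saturable_ports = int(backplane_gbps // port_speed_gbps)
oversubscription = ports * port_speed_gbps / backplane_gbps

print(f"{saturable_ports} of {ports} ports can run at line rate")
print(f"oversubscription ratio: {oversubscription:.0f}:1")
```

Bursty traffic hides this 4:1 oversubscription; continuous video exposes it.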

Ray, do you recall what IP camera sends uncompressed raw video and what setting you need to enable for this?

Hi Ray, thanks for your reply.

We are actually interested in measuring compressed video streams (since that is the most common practice in video surveillance). Our goal is to find out what the worst and best case scenarios are with respect to bandwidth consumption, in order to derive network bandwidth and storage requirements.

For those interested, I hereby share the results of our experiment.

Our main conclusion is that scene complexity has a strong impact on bit rate. Our worst case scenario, a pitch dark wall with a mini laser stage lighting device projecting erratic pulses of light, generated a bit rate considerably greater than the best case scenario, plenty of light and no activity whatsoever on the wall. The M-JPEG bit rate was between 4 and 6 times greater, whereas H.264's was between 12 and 29 times greater. It is worth mentioning that H.264 outperformed M-JPEG on bit rate in all tests: above 90% bandwidth savings in the best case scenario and above 40% in the worst case one.

It is also interesting to highlight that the different camera models used in our experiment had some impact on bit rate. The Axis M series had a greater bit rate than the P series. Our suspicion is that the smaller image sensors present in the M series have something to do with that: M5014 and M1054 (1/4" progressive scan RGB CMOS), P1346 (1/3" progressive scan RGB CMOS), and P1347 (1/2.5" progressive scan RGB CMOS). The M series generated more noise, which was visible during the experiment, and as a result the captured scene became harder to compress (therefore resulting in greater bandwidth consumption).

If anyone requires further information/explanation, do not hesitate to ask me.

Tiago, awesome, thanks so much for sharing! We've seen the same pattern for M vs P series (like in our indoor shootout). I am actually beginning to wonder how general this pattern is of entry-level, smaller-imager, color-only cameras vs professional, high-end ones. Is there a consistent, significant difference in bitrate? It's something that I definitely want us to try out in future testing.