If you are only going to do 2, those would be it. The strobe light test (as long as the strobe is moving fast) should simulate the bandwidth impact of a high motion scene.
The third one I would add is low light. Go under 1 lux and measure bandwidth. This will show how the camera's gain settings and encoding change in these extreme conditions. Many cameras' bandwidth consumption blows up in this scenario, which can create nasty surprises. When doing so, make sure you normalize the shutter speeds and/or set them appropriately for your scenes. See our indoor camera shootout for charts and examples of bandwidth tests.
Here's one chart from that report:
You also might try covering a very dark area or pure black with AGC disabled. My experience is that high brightness also tends to kick up the bitrate, although low light bitrates using AGC are even higher, due to AGC noise.
And if you aim the camera at a white wall you capture nothing either, and the image will be just as worthless. Ipso facto.
Yes, but it is also likely to run at a higher bitrate than a camera with no light and no noise (typically amplified by AGC). He wants the minimum. I contend my method will give him that, while all white won't.
Lens cap on plus AGC = a boatload of noise, accompanied by high bitrate. Lens cap on plus AGC Off = minimum bitrate. Yes, I agree.
To be fair, "Sens up" and DSS should also be "Off".
Gentlemen, thanks for the discussion. For the sake of curiosity, I'll also try the settings suggested by Carl. Once I've finished the experiment, I'll report the results back in this thread.
I would like to mention that the color of the wall is irrelevant to me AS LONG AS it is uniform across the camera's field of view. As far as I understand, a video frame of resolution H x W in which the vast majority of pixels share the same color (that is, the same bit representation at the given color depth) will be easier to compress than a frame containing many different colors.
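The compressibility intuition above can be demonstrated with a quick sketch, using zlib as a rough stand-in for a codec's spatial compression (the frame dimensions and pixel values below are arbitrary, not from any camera):

```python
import random
import zlib

W, H = 640, 480  # hypothetical frame dimensions

# Uniform "wall" frame: every 8-bit pixel holds the same value.
uniform_frame = bytes([128]) * (W * H)

# Noisy frame: random 8-bit pixel values, like sensor noise in low light.
random.seed(42)
noisy_frame = bytes(random.randrange(256) for _ in range(W * H))

uniform_size = len(zlib.compress(uniform_frame))
noisy_size = len(zlib.compress(noisy_frame))

print(f"uniform frame: {W * H} -> {uniform_size} bytes")
print(f"noisy frame:   {W * H} -> {noisy_size} bytes")
```

The uniform frame shrinks to a tiny fraction of its size, while the noisy frame is essentially incompressible, which is the same reason a grainy low-light scene drives the bit rate up.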
I understand John's concern that a very dark area may keep the camera from capturing "anything", since little or no light hits the sensor. On the other hand, I also understand that the sensor tends to produce a very noisy (grainy) image in low-light conditions, which may make the video frames hard to compress. Therefore, I'll experiment with Carl's suggestion as well, to be sure I'm capturing both minimum and maximum bandwidth consumption.
Once more, thanks for raising the concerns above.
I don't know if this suits your purposes (what kind of "maximum" you are looking for), but if the particular cameras offer it you could also send completely uncompressed (raw) video. It's been a few years since I fiddled with this, but it was something we used to do with new models of IP cameras to get a "worst case" continuous maximum bandwidth consumption.
Mostly we did it not for assessing bandwidth use, but for stress testing network gear and VMS systems.
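As a rough sketch of why raw video makes a good worst case: uncompressed bit rate is fixed by resolution, bit depth, and frame rate alone, so the scene cannot change it. The resolution and pixel format below are just example figures, not from any particular camera:

```python
def raw_bitrate_mbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bit rate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1_000_000

# e.g. a 720p stream at 30 fps with 16 bits/pixel (YUV 4:2:2):
print(raw_bitrate_mbps(1280, 720, 16, 30))  # ~442 Mb/s
```

That one stream is already close to half a gigabit per second, which is why raw video is such an effective stress test for network gear.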
Many low end network switches have limited processing capacity, such as a switch with eight 1 Gbps ports but only 2 Gbps of switching capacity. You could only max out one or two ports at a time with a continuous high-bandwidth stream. As you already know, most computer network traffic is bursty, which is not the case with most surveillance video.
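The switch limitation above can be sketched numerically (using the 8-port / 2 Gbps figures from this example, not any specific product):

```python
# Hypothetical low-end switch: port count and fabric capacity as assumed above.
ports = 8
port_speed_gbps = 1.0   # per-port line rate
fabric_gbps = 2.0       # total switching (backplane) capacity

# Oversubscription ratio: aggregate port capacity vs. what the fabric can move.
oversubscription = (ports * port_speed_gbps) / fabric_gbps

# How many ports can actually run at line rate simultaneously.
max_saturated_ports = int(fabric_gbps // port_speed_gbps)

print(f"oversubscription ratio: {oversubscription}:1")          # 4.0:1
print(f"ports usable at line rate at once: {max_saturated_ports}")  # 2
```

Bursty office traffic rarely exposes this 4:1 oversubscription, but a handful of continuous surveillance streams will.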
For those interested, I hereby share the results of our experiment.
Our main conclusion is that scene complexity has a strong impact on bit rate. Our worst-case scenario, a pitch-dark wall with a mini laser stage lighting device projecting erratic pulses of light, generated a bit rate considerably greater than the best-case scenario, plenty of light and no activity whatsoever on the wall. The M-JPEG bit rate was between 4 and 6 times greater, whereas H.264 was between 12 and 29 times greater. It is worth mentioning that H.264 outperformed M-JPEG on bit rate in all tests: above 90% bandwidth savings in the best-case scenario and above 40% in the worst case.
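For clarity, the savings percentages follow from the two measured bit rates as a simple ratio; the numbers below are made-up illustrations, not our measured values:

```python
def savings_pct(mjpeg_kbps, h264_kbps):
    """Percentage of bandwidth saved by H.264 relative to M-JPEG."""
    return 100 * (1 - h264_kbps / mjpeg_kbps)

# Hypothetical best case: H.264 at 450 kb/s vs M-JPEG at 5000 kb/s.
print(round(savings_pct(5000, 450), 1))   # 91.0 (i.e. above 90% savings)

# Hypothetical worst case: H.264 at 2900 kb/s vs M-JPEG at 5000 kb/s.
print(round(savings_pct(5000, 2900), 1))  # 42.0 (i.e. above 40% savings)
```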
It is also interesting to highlight that the different camera models used in our experiment had some impact on bit rate. The Axis M series had a greater bit rate than the P series. Our suspicion is that the smaller image sensors in the M series have something to do with this: M5014 and M1054 (1/4" progressive scan RGB CMOS), P1346 (1/3" progressive scan RGB CMOS), and P1347 (1/2.5" progressive scan RGB CMOS). The M series generated more noise, which was visible during the experiment, making the captured scene harder to compress (and therefore consuming more bandwidth).
If anyone requires further information or explanation, do not hesitate to ask me.
Tiago, awesome, thanks so much for sharing! We've seen the same pattern for the M vs P series (as in our indoor shootout). I am actually beginning to wonder how general a pattern this is for entry-level, smaller-imager, color-only cameras versus professional, high-end ones. Is there a consistently significant difference in bit rate? It's something I definitely want us to try out in future testing.