Sacrifice Image Or Viewing Quality? Pick One

Let's imagine a situation in which you are monitoring live video streams (on a monitor with a 4x4 layout, for example) and that pane size and CODEC cannot be varied. CPU performance and bandwidth constraints begin to affect the display of these video streams. What would you rather change in this situation?

(A) Compression level (i.e., by increasing it)


(B) Frame rate (i.e., by reducing it)

I would say (B), but I'm not strongly convinced that it is the "obvious" choice. What do you guys think?

Hi Tiago,

Have you seen our Multistreaming article? I assume you're asking that question because you have to set the cameras to statically use one option or the other, but with the multistreaming feature you can have the VMS automatically (or manually, in some) switch to the right combination of bitrate and/or FPS depending on the client viewing selection.

In addition, when you're in a 4-camera layout (depending on monitor size and camera resolution) you may want higher resolution to improve quality; however, reducing FPS may mean you don't see incidents in real time. In contrast, if you're viewing 4 cameras at higher resolution on a standard 13-inch monitor, that bandwidth is wasted and could instead go toward keeping the FPS up.

There is one case where you cannot ADD streams... that's when you have purchased an NVR with limited in/out bandwidth capabilities tied to its CPU performance.

It can be an Atom, Celeron, i3, i5, or i7 with limited RAM, increasingly fanless: the system tells you, for instance, 480 fps in recording and 90 Mbit/s for 16 cameras...

In that case, it's preferable to have a single stream for both viewing and recording instead of one FHD stream for recording and one or two others for remote viewing.

Same for bandwidth and switches: better to use one 50 Mbit/s stream instead of 2 x 25 Mbit/s, which takes more bandwidth resources.

But when you need 2 streams and have to choose, here it's simple: the law requires recording at 6 fps (large FOV) or 12 fps (zoomed on a narrow FOV with movement) but doesn't oblige you to meet a minimum live view quality.

Tiago, can you elaborate on the setup a bit? Marc, this law you mentioned, is it for one state or across the board? And how could it not have a minimum quality? If you're using JPEG at the lowest resolution and quality, then you'd run into difficulty recognizing the object of the investigation, wouldn't you?

Hi Sarit,

Thanks for your reply and for mentioning the article. I had already read it and found it very insightful, by the way. I actually asked the question above with multi-streaming in mind.

To give you more details, I'm trying to come up with a design for an automatic switchover of streams. The situation I described above is supposed to depict a worst-case scenario, meaning we squeezed everything we could to reduce CPU and bandwidth consumption, but the system would still struggle to properly display 16 simultaneous video streams. The question then becomes: "OK, what more can we sacrifice without considerably compromising the surveillance of the video streams?"

It is worth mentioning that I'm not considering adding more CPU power or increasing bandwidth to solve the problem. Therefore, the idea is to take the good and the bad from the situation described above and make the best of it :)

As you mentioned, I'm also afraid that by reducing FPS we may miss one or more incidents happening in the scene, but if we are lucky we will still get a few shots with good image quality. On the other hand, by increasing the compression level we may not miss any incident, but it might be hard to see details of what is happening due to the high compression applied to the video streams.
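To make that trade-off concrete, here is a minimal sketch of the kind of switchover policy I have in mind: under load, raise compression first and only drop FPS once compression is maxed out. The function names, step values, and thresholds are purely illustrative, not from any real VMS API.

```python
# Hypothetical degradation policy: raise compression first,
# reduce frame rate only as a last resort. Illustrative values.
COMPRESSION_STEPS = [10, 30, 50, 70, 90]   # Axis-style compression values
FPS_STEPS = [25, 20, 10, 5, 1]

def degrade(compression: int, fps: int) -> tuple[int, int]:
    """Return the next (compression, fps) pair when the system is overloaded."""
    c_idx = COMPRESSION_STEPS.index(compression)
    f_idx = FPS_STEPS.index(fps)
    if c_idx < len(COMPRESSION_STEPS) - 1:      # compression headroom left
        return COMPRESSION_STEPS[c_idx + 1], fps
    if f_idx < len(FPS_STEPS) - 1:              # otherwise sacrifice FPS
        return compression, FPS_STEPS[f_idx + 1]
    return compression, fps                     # already at the floor

# Starting from best quality, three overload events only raise compression:
state = (10, 25)
for _ in range(3):
    state = degrade(*state)
print(state)  # (70, 25)
```

The reverse policy (drop FPS first) would just swap the two branches; which ordering is right is exactly the question of this thread.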

Sarit: I'm in France; that's the French law. It says you should record at 6 or 12 fps with quality sufficient to provide 90x60 pixels on a face, i.e., 400 pixels per meter, for identification purposes. Then if you compress 80%, for sure your pixels will turn gray! Same with too large a GOP when filming a running robber: you would get a nice blurry video export. Same with a bad WDR or too slow a shutter; the consequence will be the same: bad quality.
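That 400 pixels-per-meter figure can be checked quickly from a camera's horizontal resolution and the scene width it covers; a small helper (my own illustration, not part of any standard):

```python
def pixels_per_meter(horizontal_resolution: int, fov_width_m: float) -> float:
    """Pixel density across the field of view, in pixels per meter."""
    return horizontal_resolution / fov_width_m

# A 1080p camera (1920 px wide) covering a 4.8 m wide scene
# hits exactly the 400 px/m identification density:
print(pixels_per_meter(1920, 4.8))  # 400.0
```

So a full-HD camera can only cover about a 4.8 m wide scene if identification-grade density is required, regardless of how the compression is set.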

So in France we never touch frame rate, but try to tune up frame size, compression, cropping, recording period (some cameras less, some cameras more), and so on.

When your bandwidth is too low, record locally on SD or on a NAS on motion detection (motion isn't accurate but will just save some GB) and view with the lowest quality.

Some systems now have client software connecting directly to the cameras for viewing while the NVR records... that way you save streams and CPU.

Thanks Tiago. In that case I have a few more questions that may help us get to a few solutions:

1. Are these cameras using any I/O, tamper triggers, or unused substreams that are enabled in the cameras?

2. Are the cameras using motion detection?

3. Does the FOV allow you to block out trees or areas via Privacy masks to remove some pixelation that's not needed?

4. Are these brightly lit/darkened areas, or outside? (I've seen some ACTi cameras in a long school hallway with lots of windows pull more resources even though there is no activity during summer break, for example.)

5. Also, do you know if your cameras offer VBR/CBR modifications?

Ultimately, if you've squeezed the VMS/Cameras completely, maybe next we can look at FOV/scene or an external device that can help with this...

Hi Sarit,

Thanks for your help so far. Here are the answers:

1) No, they are not.

2) No, the cameras continuously stream video.

3) It does, but for the sake of the situation depicted above I'm considering receiving the full frame.

4) For the tests we are running, we are basically pointing the IP cameras at a large-screen television that displays a high-motion video. There are lighted/darkened areas, but they vary throughout the high-motion video.

I would also refrain from using external hardware (e.g., an external video decoder) to help display the 16 video streams :) Sorry for complicating things; it is more of a hypothetical scenario, but I'm quite interested in knowing its answer.

I see...would you consider these 4 cameras critical? meaning their (final destination, not the TV tests) FOV would likely show criminal activity that would need to be used later? If so, it may be better to use higher quality and lower the FPS.

Just a small correction... there are actually 16 cameras streaming video, not 4. I would say yes to your first question, since we have a variety of clients who might find it critical that they don't miss anything.

You mentioned that the FOV would be shown later. Bear in mind that my situation does not involve recording, but simply monitoring live video. For recording purposes, I agree with you that a few higher-quality images should be better than a bunch of pixelated images. Now, for viewing live video... I'm not so sure about that.


I see...would you consider these 4 cameras critical? meaning their (final destination, not the TV tests) FOV would likely show criminal activity that would need to be used later?

This is the key question in providing answers - what is it these cameras are looking at?

The answer would vary depending on the specific scene I'm watching

Examples for retail:

Public View Entrance: I would want higher resolution to identify incoming customers

Point of sale: I would want higher FPS to capture transactions (I already know who the cashier is) - I want to see as many frames as possible if they pocket money or give away merchandise

Receiving Door: Higher Resolution to determine if the employees are carrying merchandise out

Merchandise Aisle: Higher FPS to determine if the shopper is concealing merchandise
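The per-scene priorities above could be captured in a simple lookup that a switchover policy consults under load. A sketch (the scene names and field names are made up for illustration):

```python
# Illustrative mapping of retail scene types to what should be
# preserved under load; examples only, not a standard taxonomy.
SCENE_PRIORITIES = {
    "public_view_entrance": "resolution",  # identify incoming customers
    "point_of_sale":        "fps",         # capture every transaction frame
    "receiving_door":       "resolution",  # see carried merchandise
    "merchandise_aisle":    "fps",         # catch concealment in motion
}

def what_to_sacrifice(scene: str) -> str:
    """Under load, drop whichever of fps/resolution the scene doesn't prioritize."""
    return "fps" if SCENE_PRIORITIES[scene] == "resolution" else "resolution"

print(what_to_sacrifice("point_of_sale"))  # resolution
```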

So you are reproducing a video experimentation lab.

MJPEG is the lowest CPU consumer... use it when bandwidth isn't an issue but the CPU is.

Hi Marc,

In my hypothetical situation the CODEC used is irrelevant. I'm more concerned about compression level and frame rate.

I've gone through some Excel sheets to find bitrate comparisons I made some time ago with several IP cameras (mostly Axis cameras).

I've then analyzed how much the bitrate can be reduced (in percentage terms) when the frame rate is decreased and when the compression level is increased. A frame rate of 25 FPS and a very low compression level are the basis for the comparison. The scene consisted of cameras placed indoors, with a lux value below 5, facing a white wall while a mini laser stage-lighting device projected erratic pulses of light on the wall. Below are my findings, which I would like to share with you.

It is worth mentioning that with a very high compression level (equivalent to the Axis compression value of 90), the image quality becomes quite poor (with many artifacts).

Thank you Tiago. Which Axis models are you using? Also, how is the sharpness set? BTW, the charts are confusing me a bit; I would like to see both the frame rate and the compression rate charted on the same graph to show the "sweet spot" or how the two interact. Am I correct to assume that when viewing the compression-level chart the FPS is constant at 25? If so, the frame-rate chart does not show bitrate output at 25 FPS. And maybe it's just me, but I would opt for a scatter chart and flip it to bandwidth consumption, not reduction...

Also, are we using VBR or CBR here?

Hi Sarit, the AXIS models used were: P1347, P1346, M5014, M1054. Sharpness was left at its default (i.e., 50) and VBR was used. A max shutter speed of 1/30s was also set during our experiment.

It's a bit hard for me to chart both frame rate and compression level on the same graph. The way they should be understood, however, is as follows:

1) Frame rate graph
A max frame rate of 25 FPS was used in our experiment (our basis for comparison). Therefore, to show how much the bitrate can be reduced by decreasing the frame rate, we also considered 20 FPS, 10 FPS, 5 FPS, and 1 FPS. We then compared how large (in percentage terms) the reduction was. The results were 20%, 60%, 80%, and 96%, respectively. It is important to mention here that the same reduction was observed independent of the compression level used. For example, by reducing the frame rate from 25 to 1, a reduction of approximately 96% was observed in all IP cameras tested, at every compression level. That is, a very high compression level at 25 FPS was compared with a very high compression level at 1 FPS; a high compression level at 25 FPS was compared with a high compression level at 1 FPS, and so on.

2) Compression level graph
A very low compression level (equivalent to the AXIS compression value of 10) was used in our experiment (our basis for comparison). Therefore, to show how the bitrate can be reduced by increasing the compression level, we also considered low (30), medium (50), high (70), and very high (90) compression levels. We then compared how large (in percentage terms) the reduction was. The results vary depending on the CODEC used: M-JPEG streams benefit considerably more than H.264 streams. It is worth saying here that you should not view this graph with the FPS being constant at 25. You should read it as follows: when the compression level is increased from very low to low, for instance, a reduction of approximately 28% (H.264) and 47% (M-JPEG) is observed independent of the frame rate used. That is, these percentages were observed whether the frame rate was 25 FPS, 20 FPS, 10 FPS, and so on.
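Since the frame-rate and compression reductions were observed to be independent, they can be sanity-checked and combined with simple arithmetic: the reported frame-rate reductions (20%, 60%, 80%, 96%) match bitrate scaling linearly with FPS, and the combined saving is the product of the surviving fractions. A quick sketch of that arithmetic (my own illustration of the numbers quoted above):

```python
BASE_FPS = 25

def fps_reduction_pct(fps: int, base: int = BASE_FPS) -> int:
    """Bitrate reduction (%) if bitrate is proportional to frame rate."""
    return round((1 - fps / base) * 100)

# Matches the measured reductions for 20, 10, 5, and 1 FPS:
print([fps_reduction_pct(f) for f in (20, 10, 5, 1)])  # [20, 60, 80, 96]

def combined_fraction(fps_cut: float, compression_cut: float) -> float:
    """Fraction of the original bitrate left after both independent cuts."""
    return (1 - fps_cut) * (1 - compression_cut)

# Example: 25 -> 10 FPS (60% off) plus very low -> low compression
# on H.264 (28% off) leaves about 29% of the original bitrate:
print(f"{combined_fraction(0.60, 0.28):.0%}")  # 29%
```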

Should you be interested in knowing more details, I can send you the Excel sheet with my calculations for your better understanding.