Have you seen our Multistreaming article? I assume you're asking because otherwise you'd have to set the cameras to statically use one option at all times, but with the multistreaming feature the VMS can automatically (or, in some systems, manually) switch to the right combination of bitrate and/or FPS depending on the client's viewing selection.
In addition, when you're in a 4-camera layout (depending on monitor size and camera resolution), you may want a higher resolution to improve quality; however, reducing FPS may mean you don't see incidents in real time. In contrast, if you're viewing 4 cameras at higher resolution on a standard 13-inch monitor, that extra bandwidth is wasted and could instead be spent on a higher FPS.
There is one case where you cannot add streams: when you have purchased an NVR with limited input and output bandwidth capabilities tied to its CPU performance.
It can be an Atom, Celeron, i3, i5, or i7 with limited RAM, increasingly fanless. Such systems tell you, for instance, 480 fps for recording and 90 Mbit/s for 16 cameras...
In that case, it's preferable to have a single stream for both viewing and recording instead of one Full HD stream for recording plus one or two others for remote viewing.
The same goes for bandwidth and switches: it's better to use one 50 Mbit/s stream instead of 2 x 25 Mbit/s streams, which consume more bandwidth resources overall.
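A rough illustration of why two streams at the same total video bitrate can cost more than one: each stream carries its own fixed overhead (container/RTP headers, its own keyframes, a session on the switch and NVR). The per-stream overhead figure below is an assumption for illustration, not a measured value.

```python
# Sketch: total link load for N streams, where each stream adds a fixed
# per-stream overhead on top of its video bitrate. The 1.0 Mbit/s
# overhead value is an assumption, not a measurement.
def total_mbps(video_mbps_per_stream, streams, overhead_mbps_per_stream=1.0):
    return streams * (video_mbps_per_stream + overhead_mbps_per_stream)

print(total_mbps(50, 1))  # one 50 Mbit/s stream  -> 51.0
print(total_mbps(25, 2))  # two 25 Mbit/s streams -> 52.0
```

Under this toy model the single stream always wins, and the gap grows with the number of streams, which matches the intuition above.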
But when you need 2 streams and have to choose, it's simple: the law requires recording at 6 fps (large FOV) or 12 fps (zoomed on a narrow FOV with movement), but it doesn't oblige you to have a minimum live-view quality.
Tiago, can you elaborate on the setup a bit? Marc, this law you mentioned: which state, or is it across the board? And how could it not have a minimum quality? If you're using JPEG at the lowest resolution and quality, then you'd have difficulty recognizing the object of the investigation, wouldn't you?
Thanks for your reply and for mentioning the article. I had already read it and found it very insightful, by the way. I actually asked the question above with multistreaming in mind.
To give you more details, I'm trying to come up with a design for an automatic switchover of streams. The situation I described above is supposed to depict a worst-case scenario: we have squeezed everything we could to reduce CPU and bandwidth consumption, but the system would still struggle to properly display 16 simultaneous video streams. The question then becomes: "OK, what more can we sacrifice without considerably compromising the surveillance value of the video streams?"
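The switchover idea above can be sketched as a simple profile-selection rule: for the current number of displayed tiles, pick the highest-quality stream profile whose total cost still fits both the bandwidth and the CPU budget, and degrade otherwise. The profile names, bitrates, and cost numbers below are made-up placeholders, not values from any real VMS.

```python
# Hypothetical sketch of an automatic stream switchover: choose the best
# profile all displayed streams can share within two budgets.
# (profile name, bitrate in Mbit/s, relative decode cost per stream)
PROFILES = [
    ("1080p25", 4.0, 4.0),  # best quality, most expensive
    ("720p25", 2.0, 2.0),
    ("720p12", 1.2, 1.2),
    ("480p12", 0.6, 0.6),   # cheapest fallback
]

def pick_profile(num_streams, bw_budget_mbps, cpu_budget):
    """Return the best profile that fits both budgets for num_streams tiles."""
    for name, bitrate, cost in PROFILES:  # ordered best -> worst
        if (num_streams * bitrate <= bw_budget_mbps
                and num_streams * cost <= cpu_budget):
            return name
    return PROFILES[-1][0]  # nothing fits: fall back to the cheapest

print(pick_profile(4, 90, 64))    # 4-tile layout  -> "1080p25"
print(pick_profile(16, 90, 16))   # 16-tile layout -> "480p12"
```

The interesting design choice is what "best" means when degrading: the ordering above sacrifices frame rate before resolution, but the thread's FPS-vs-compression question is exactly about whether that ordering is right.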
It is worth mentioning that I'm not considering adding more CPU power or increasing bandwidth to solve the problem. The idea is therefore to take the good and the bad from the situation described above and make the best of it :)
As you mentioned, I'm also afraid that by reducing FPS we may miss one or more incidents happening in the scene, though with luck we would still get a few shots with good image quality. On the other hand, by increasing the compression level we may not miss any incident, but it might be hard to see details of what is happening due to the heavy compression applied to the video streams.
Sarit: I'm in France; it's French law. It says you should record at 6 or 12 fps with quality sufficient to provide 90x60 pixels on a face, i.e. 400 pixels per meter, for identification purposes. If you compress 80%, for sure your pixels will turn gray! The same goes for too large a GOP when filming a running robber: you'd export a nice blurry video. Likewise with bad WDR or too slow a shutter: the consequence is the same, bad quality.
So in France we never touch the frame rate, but instead try to tune frame size, compression, cropping, recording period (some cameras less, some cameras more), and so on.
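The 400 pixels-per-meter identification figure above translates directly into a minimum stream width for a given scene width: required pixels = pixel density x covered width. A quick sanity-check sketch (the 4 m doorway scenario is my own example, not from the thread):

```python
# Horizontal resolution needed to meet a pixels-per-meter identification
# target across the scene width the camera covers.
def required_horizontal_pixels(fov_width_m, px_per_m=400):
    return fov_width_m * px_per_m

# Example: a 4 m wide entrance at 400 px/m needs a 1600-pixel-wide
# stream, i.e. at least a 1920x1080 camera; a 720p stream would fail.
print(required_horizontal_pixels(4))  # 1600
```

This is why Marc's point stands: cropping and frame size are the levers that protect the identification requirement, while frame rate is fixed by the regulation.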
When your bandwidth is too low, record locally on SD or on a NAS on motion detection (motion detection isn't accurate, but it will save some GB) and view with the lowest quality.
Some systems can now have the client software connect directly to the cameras for viewing while the NVR records; that way you save streams and CPU.
Thanks Tiago. In that case I have a few more questions that may help us get to a few solutions:
1. Are these cameras using any I/O, tamper triggers, or unused substreams that are enabled in the cameras?
2. Are the cameras using motion detection?
3. Does the FOV allow you to block out trees or other areas via privacy masks, to remove pixel changes that aren't needed?
4. Are these really lighted/darkened areas, or outside? (I've seen some ACTi cameras in a long school hallway with lots of windows pull more resources even though there is no activity, during summer break for example.)
5. Also, do you know if your cameras offer VBR/CBR settings?
Ultimately, if you've squeezed the VMS/Cameras completely, maybe next we can look at FOV/scene or an external device that can help with this...
Thanks for your help so far. Here are the answers:
1) No, they are not.
2) No, the cameras stream video continuously.
3) It does, but for the sake of the situation depicted above I'm considering receiving the full frame.
4) For the tests we are running, we basically point the IP cameras at a large-screen television displaying high-motion video. There are lighted/darkened areas, but they vary throughout the high-motion video.
I would also refrain from using external hardware (e.g., an external video decoder) to help display the 16 video streams :) Sorry for complicating things; it is more of a hypothetical scenario, but I'm quite interested in knowing the answer.
I see... would you consider these 4 cameras critical? Meaning, would their FOV (in their final destination, not the TV tests) likely show criminal activity whose footage would need to be used later? If so, it may be better to use higher quality and lower the FPS.
So you are reproducing video experimentation... in the lab.
MJPEG is the lowest CPU consumer... useful when bandwidth isn't an issue but the CPU is.
I've gone through some Excel sheets to find bitrate comparisons I made some time ago with various IP cameras (mostly Axis cameras).
I then analyzed how much the bitrate can be reduced (in percentage terms) when the frame rate is decreased and when the compression level is increased. A frame rate of 25 FPS and a very low compression level are the baseline for the comparison. The scene consisted of cameras placed indoors, with a lux value below 5, facing a white wall while a mini laser stage-lighting device projected erratic pulses of light onto the wall. Below are my findings, which I would like to share with you.
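The comparison described above boils down to expressing each measured bitrate as a percentage reduction against the 25 FPS / low-compression baseline. A minimal sketch of that calculation; the bitrate values below are placeholders for illustration, not the actual measurements from the Excel sheets:

```python
# Percentage bitrate reduction relative to a baseline measurement.
def reduction_pct(baseline_kbps, measured_kbps):
    return round(100.0 * (baseline_kbps - measured_kbps) / baseline_kbps, 1)

baseline = 4000  # kbit/s at 25 FPS, very low compression (placeholder)
samples = {"12 FPS": 2600, "6 FPS": 1700, "compression 50": 2200}
for label, kbps in samples.items():
    print(label, reduction_pct(baseline, kbps), "%")
```

Charting these reductions per setting is what produces the frame-rate and compression-level charts discussed below.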
It is worth mentioning that with a very high compression level (equivalent to the Axis compression value of 90), the image quality becomes quite poor (with many artifacts).
Thank you Tiago. Which Axis model are you using? Also, how is the sharpness set? By the way, the charts are confusing me a bit. I would like to see both the frame rate and the compression level charted on the same graph, to show the "sweet spot" and how the two interrelate. Am I correct to assume that in the compression-level chart the FPS is constant at 25? If so, the frame-rate chart does not show the bitrate output at 25 FPS. And maybe it's just me, but I would opt for a scatter chart and flip it to show bandwidth consumption, not reduction...
Also, are we using VBR or CBR here?