There is a factually correct answer to this, but it should not be based on other users' numbers. For example, someone might say 2Mb/s and another might say 4Mb/s, and both may well be right for their scenarios and camera models. The key issue is that the 'right' answer depends on your specific conditions.
You should use VBR and you should use a cap, but what the cap should be depends on a few factors:
- What is the frame rate? The average and max required bit rate increases as frame rate increases (see: Testing Bandwidth vs Frame Rate).
- What are the scenes you are monitoring? Cameras covering busy areas (e.g., intersection) need higher caps than those monitoring empty ones (e.g., stairwells). Even within your 1000+ cameras, some likely will need higher caps than others.
- What camera models are you using? Even if all the cameras use H.264, all use the same profile, the same resolution, and the same compression, the bit rates consumed will vary depending on the sensor used and the camera's image settings.
The main reason for using a cap is night time. Whether or not you are using IR, and even if your night time scenes are moderately bright, you will almost always see spikes in bandwidth consumption at night, compared to the day, that are worthless in terms of increasing usable video quality (see: Tested: Lowering Bandwidth at Night is Good).
What you want to do is set the cap higher than whatever each camera needs during the day but lower than its bandwidth spikes at night. Here is one real world example: Camera A consumes a max of 2Mb/s during the day but spikes to a steady 8Mb/s at night. You could easily set the cap at 3Mb/s and save a lot of bandwidth / storage.
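To put a rough number on that Camera A example, here is a quick sketch of the savings from the 3Mb/s cap. The 12 hours of darkness per night is my own assumption for illustration:

```python
# Rough sketch of the savings from capping Camera A's night-time bitrate.
# Day max 2Mb/s / night spike 8Mb/s / cap 3Mb/s come from the example above;
# the 12 hours of darkness is an assumed figure.
def gb_per_period(mbps: float, hours: float) -> float:
    """Convert a sustained bitrate (Mb/s) over a period (hours) to gigabytes."""
    return mbps / 8 * 3600 * hours / 1000  # Mb/s -> MB/s -> MB -> GB

uncapped_night = gb_per_period(8.0, 12)  # steady 8Mb/s spike, no cap
capped_night = gb_per_period(3.0, 12)    # VBR cap set at 3Mb/s
print(f"Saved per camera per night: {uncapped_night - capped_night:.0f} GB")
```

That is roughly 27 GB per camera per night that buys you no usable quality, which is why the cap pays off so quickly across a large fleet.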
As for your second question:
"Also, for 30 days of video, how large of a file on average are you seeing given these resolutions?"
If you can describe camera models, scenes, etc., then we can better estimate. Otherwise, abstractly, you can reasonably have a 20x difference in storage consumption for a given resolution, simply because of differences in frame rate, compression level, cap used, camera type used, scenes monitored, etc. I am not even counting motion vs continuous recording which also would have an impact.
For 4Mb/s per camera, which is a fair / slightly high average for FHD, you get:
4Mb/s / 8 = 0.5 MB/s x 3600 x 24 x 30 ≈ 1.3 TB of continuous recording per month for one camera.
Then for 1000 cameras, before collapsing, you get about 1,300 TB, which means you have reached petabyte levels. Congratulations! And that is before talking about RAID / hard disk redundancy.
Of course you may get less or more, but this is realistic, combining some outdoor and PTZ cameras at 6-8 Mb/s and some indoor ones with less.
In video it is better to be a little pessimistic than too optimistic. (And keep 30% of storage free if your budget allows it.)
For 1000 cameras, (A) should be testing this in his own environment, not using rough rules of thumb like 4Mb/s. The amount he could be off by, one way or the other, could be huge, and the time invested to test and measure this accurately will be paid back many times over through more accurate storage sizing / use.
OK, I understand your points, so all answers could be "it depends..."
On my side, I think we can sometimes give rough (but argued) values and then detail and test when possible. In real life, though, projects never come with enough time to pre-test on site and collect all the parameters.
Seneca | IPVMU Certified | 04/24/15 04:04pm
'A'.. the correct answer is 'it depends'.
The most accurate answer is what John describes and you have to do your homework to get the answer. If you know the camera vendors...then take advantage of their bitrate estimators. I like the Axis one best because you can dial in the scene complexity and day/night variables to play with. Others are not that flexible.
If you want to have a decent starting point before you do your homework, then what Marc says is a reasonable thing to do. We do that all the time to give customers a cost starting point before we gather more details.
That takes care of the Server side.
The Client side is a different story altogether. Always consider how you will be looking at the video. Viewing on the server that is doing the recording is not recommended because of the performance hit. Of course, if you have a big CPU and a decent video card, then you can balance the activity.
From the extensive client testing we have been doing, using a lower FPS provides very good bang for the buck. Lower is around 10 FPS or less. Viewing at 30 FPS is extremely stressful to any client machine no matter what the resolution (assuming H.264).
Of course, you could always work it the other way, as I've seen deployed. Estimate the storage and then change the bitrate to fit. Let's see... 30fps, 1080p image, 24/7 recording, 30 days. Check! Set bitrate manually to 300kb/s. Check! Only ~95 GB per camera needed. Just don't look at the video, because it's useless.
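For what it's worth, that backwards approach is easy to sketch: pick a storage budget per camera and derive the bitrate that fits. The function name and the 95 GB figure are just the example from above:

```python
# Working it the other way: derive the max sustained bitrate from a storage budget.
def bitrate_for_budget(gb_budget: float, days: int = 30) -> float:
    """Max sustained bitrate (kb/s) that fits gb_budget GB of 24/7 recording over `days` days."""
    seconds = 3600 * 24 * days
    return gb_budget * 1_000_000 * 8 / seconds  # GB -> kB -> kilobits, spread over the period

print(f"{bitrate_for_budget(95):.0f} kb/s")  # ~293 kb/s, matching the ~300k example
```

The math checks out, which is exactly the problem: the budget dictates a bitrate far too low for usable 30fps 1080p video.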