Bandwidth is one of the most fundamental, complex, and overlooked aspects of video surveillance.
Many simply assume it is a linear function of resolution and frame rate. However, this assumption misses many factors, and failing to account for them can result in overloaded networks or shorter storage duration than expected.
In this guide, we take a look at these issues, broken down into fundamental topics common across cameras, and practical performance/field issues that vary depending on camera performance, install location, and more.
Resolution: Does doubling pixels double bandwidth?
Framerate: Is 30 FPS triple the bandwidth of 10 FPS?
Compression: How do compression levels impact bandwidth?
Codec: How does codec choice impact bandwidth?
Smart codecs: How do these new technologies impact bandwidth?
Practical Performance/Field Issues
Scene complexity: How much do objects in the FOV impact bitrate?
Field of view: Do wider views mean more bandwidth?
Low light: How do low lux levels impact bandwidth?
WDR: Is bitrate higher with WDR on or off?
Sharpness: How does this oft-forgotten setting impact bitrate?
Color: How much does color impact bandwidth?
Manufacturer model performance: Same manufacturer, same resolution, same FPS. Same bitrate?
The most basic, commonly missed element is scene complexity. Contrast the 'simple' indoor room with the 'complex' parking lot:
Even if everything else is equal (same camera, same settings), the 'complex' parking lot routinely requires 300%+ more bandwidth than the 'simple' indoor room because there is more activity and more details. Additionally, scene complexity may change by time of day, season of the year, weather, and other factors, making it even more difficult to fairly assess.
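As a rough illustration of why complexity matters, a common rule-of-thumb model estimates bitrate as pixels per frame × frame rate × bits-per-pixel, where the bits-per-pixel figure absorbs both codec efficiency and scene complexity. The bpp values below are illustrative assumptions, not measurements:

```python
def estimate_bitrate_mbps(width: int, height: int, fps: float,
                          bits_per_pixel: float) -> float:
    """Rule-of-thumb stream bitrate: pixels/frame x fps x bpp.

    bits_per_pixel is an empirical fudge factor that absorbs codec
    efficiency AND scene complexity; a busy parking lot can need
    several times the bpp of a static indoor room.
    """
    return width * height * fps * bits_per_pixel / 1e6

# Illustrative bpp values (assumptions, not measured):
simple_room = estimate_bitrate_mbps(1920, 1080, 10, 0.05)
parking_lot = estimate_bitrate_mbps(1920, 1080, 10, 0.20)
print(round(simple_room, 2), round(parking_lot, 2))
```

With these assumed bpp values the 'complex' scene comes out around 4x the 'simple' one, in line with the 300%+ difference described above.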
Excellent article Ethan!... keep up the good work. If only the much publicized 4G cellular connections could keep up in terms of bandwidth with live-streaming IP megapixel cameras, I would have a lot of satisfied customers... I've heard rumors that in reality 4G is 3.8G at best (misleading marketing used by the carriers)...
Depends on which 4G: HSDPA, WiMax Advanced, or LTE Advanced. HSDPA is 3G rebadged by AT&T and was a transitional tech between 3G and 4G. There were a few transitional network standards badged as 3.5G, 3.75G, etc. Generally the transitional standards didn't meet one or more requirements of the 4G label.
This long-standing rule of thumb is essentially accounting for the packet overhead of TCP/IP.
In simple terms, if you have a 1 Gbps network and 5 GB of data to transfer across it, you might expect a transfer time of 40 seconds (5 GB = 40 Gb). In practice, you'll likely see at least 50 seconds, because that data has to be broken into packets and transmitted in chunks. Adding the destination "address" to each packet creates more data to transfer, and you will also have contention with other devices on the network.
The rule of thumb has been that a given network can transfer actual data at a maximum rate of about 80% of its rated capacity.
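That rule of thumb can be sketched as a quick calculation (the 80% efficiency figure is the rule of thumb above, not a measured constant):

```python
def transfer_time_seconds(data_gigabytes: float,
                          link_gbps: float,
                          efficiency: float = 0.8) -> float:
    """Estimate wall-clock time for a bulk transfer.

    data_gigabytes: payload size in gigabytes (GB)
    link_gbps:      rated link speed in gigabits per second (Gbps)
    efficiency:     fraction of rated capacity usable for payload
                    after packet/protocol overhead (~0.8 per the
                    rule of thumb above)
    """
    data_gigabits = data_gigabytes * 8        # 1 GB = 8 Gb
    usable_gbps = link_gbps * efficiency
    return data_gigabits / usable_gbps

# The 5 GB over 1 Gbps example from above:
print(transfer_time_seconds(5, 1))        # ~50 s with overhead
print(transfer_time_seconds(5, 1, 1.0))   # 40 s in theory
```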
With network cameras you also have to be mindful that even with CBR settings, the bandwidth output of the camera is not always going to be exact. A camera with a 3Mbps CBR setting might actually end up transmitting 3.1Mbps or more at times.
If you have a dedicated network for cameras/recorders only, I would use no more than 70% of the rated network speed for allotted bandwidth. E.g., a 100 Mbps network would have 70 Mbps of "usable" bandwidth. If all cameras are capped at 5 Mbps, you should be able to support 14 cameras on that network without running into major traffic contention issues.
If the network is shared, it adds a lot more variables.
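The sizing above can be sketched as a small helper (the 70% headroom and 5 Mbps cap are the figures from this comment, not universal values):

```python
import math

def max_cameras(link_mbps: float, per_camera_mbps: float,
                headroom_pct: int = 70) -> int:
    """Cameras a dedicated link can carry at a given bitrate cap,
    reserving headroom for CBR overshoot and contention."""
    usable_mbps = link_mbps * headroom_pct / 100
    return math.floor(usable_mbps / per_camera_mbps)

# 100 Mbps dedicated link, cameras capped at 5 Mbps:
print(max_cameras(100, 5))   # 14 cameras
```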
I was surprised that with the same brand and same setup, different models could have double the bandwidth! The question on my mind is: where to save money? Where to be more efficient? Especially on large installations (100+, 500+... cameras).
In this example, the Axis Q1604 uses 488 kb/s and the Axis M3004 uses 1,328 kb/s with the same image quality. Using almost 3x less bandwidth, I could install more cameras on the same network link or switch port, use bandwidth more efficiently, and, just as important, use less hard drive space for recorded video.
However, the Q1604 costs over 3x more: around $860 vs. the M3004 at $260 (estimated costs).
So, should I save money on the camera, leaving more budget to install more cameras, but then spend more on the network (more 100/1000 Mbps links) and more hard drives in the recorder?
It would be nice to have a tool where we could enter these inputs and, based on the number of cameras, find the break-even point between spending more on the camera side versus saving on the network/storage side.
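A minimal sketch of such a tool, using the Q1604/M3004 figures from this thread; the $30/TB storage price is an assumed value, and network cost is ignored for simplicity:

```python
def per_camera_cost(camera_price: float, bitrate_kbps: float,
                    retention_days: int = 30,
                    usd_per_tb: float = 30.0) -> float:
    """Camera price plus the storage needed to keep
    `retention_days` of continuous video at `bitrate_kbps`.

    usd_per_tb is an assumed storage price; network/switch
    costs are ignored to keep the sketch simple.
    """
    bytes_per_day = bitrate_kbps * 1000 / 8 * 86400
    tb_needed = bytes_per_day * retention_days / 1e12
    return camera_price + tb_needed * usd_per_tb

# Estimated figures from this comment thread:
q1604 = per_camera_cost(860, 488)     # low bitrate, expensive camera
m3004 = per_camera_cost(260, 1328)    # high bitrate, cheap camera
print(round(q1604, 2), round(m3004, 2))
```

With these assumed numbers the cheaper camera still wins at 30-day retention, since a month of extra storage costs far less than the $600 price gap; the break-even only shifts with very long retention, much costlier storage, or network upgrades, which is exactly why a per-project calculation is worth doing.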
Gigabit switches are common and economical these days. A 1Gbps link supports more than (for example) 200 4Mbps streams. That's a lot of cameras and a lot of video. Network topologies can be designed to balance the bandwidth across links without too much additional cost. But the storage architecture complexity and cost from 200 4Mbps streams would be considerable. So while more network equipment is necessary to support more cameras, I think you'll find that the networking cost pales in comparison to the cost of storage--especially if long retention times are desired.
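The storage scale for that example can be sanity-checked with a quick calculation (raw capacity only, ignoring RAID and filesystem overhead):

```python
def storage_tb(streams: int, mbps_per_stream: float,
               retention_days: int) -> float:
    """Raw storage for continuous recording, no overhead."""
    bytes_per_sec = streams * mbps_per_stream * 1e6 / 8
    return bytes_per_sec * 86400 * retention_days / 1e12

# 200 streams at 4 Mbps with 30-day retention:
print(round(storage_tb(200, 4, 30), 1))   # ~259.2 TB
```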
Ethan, I'm starting the new course next week. Having completed the readings I have a question in advance: are these Bandwidth and Addressing lessons the same regardless of the transport (i.e. wired or wireless)?
Wireless doesn't show up until lesson 9, so I'm confirming that the principles in the lessons preceding it don't change based on wired or wireless.
The basic bandwidth lessons are the same whether it's wired or wireless. The big difference with wireless is that you often have less bandwidth, and it is less stable. For example, your wired LAN connection is generally not going to drop from 1 Gb/s to 400 Mb/s, but your wireless bandwidth can bounce up and down depending on the weather, what other radios are turned on nearby, whether trees grow, etc. We cover those elements in class 9.
Of course, actual test result reports like yours are more informative than research reports. I hope that in the future there might be an opportunity for a consensus on a few common use cases for bit rate reduction testing.
"H.265 has been the "next big thing" in CODECs for several years, claiming 50% savings over H.264, but ... has had limited benefit over H.264 in similar scenes, about 10-15% on average, with H.264 Smart CODEC cameras generally providing bigger bandwidth savings than H.265."
Where does the '50% savings' claim come from? Is it a laboratory result?
If 50% savings is really what is claimed, why doesn't it hold up in practice?
Is it due to limitations of the software algorithms and/or processing power?