Subscriber Discussion

How Many IP Cameras On A 100Mb/s Connection?

BH
Bohan Huang
Mar 03, 2013

Commonly, an HD IP camera (30FPS 720p/1080p, 15FPS 3MP, or 8FPS 5MP) will deliver ~9Mbps of data to your typical VMS (8Mbps main stream + 1Mbps sub stream).

Now, according to the 70% throughput design rule, one can expect to reliably run 7 such cameras on a 100Mbps switch dedicated to the cameras, utilizing under 70Mbps of bandwidth.
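A quick back-of-the-envelope check of that sizing, using the figures above (just a sketch of the arithmetic):

```python
# Switch sizing using the per-camera figure and 70% rule from above.
SWITCH_MBPS = 100        # rated switch throughput
UTILIZATION_CAP = 0.70   # the 70% throughput design rule
CAMERA_MBPS = 9          # ~8Mbps main stream + ~1Mbps sub stream

usable_mbps = SWITCH_MBPS * UTILIZATION_CAP
max_cameras = int(usable_mbps // CAMERA_MBPS)

print(f"Usable bandwidth: {usable_mbps:.0f}Mbps")            # 70Mbps
print(f"Cameras at {CAMERA_MBPS}Mbps each: {max_cameras}")   # 7
```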

However, during our internal tests, we found typical 100Mbps switches struggle to sustain even 50Mbps of throughput.

In other words, we noticed frame drops/stuttering/freezing of video feeds with just 5 cameras (we set them to CBR streams and monitored the network throughput to verify there were no spikes).

Are there any networking experts out there that can throw light on this issue?

If we were to extrapolate this to gigabit networks - does that mean we can only expect to achieve 500Mbps on the average switch?

JH
John Honovich
Mar 03, 2013
IPVM

Bohan, what are the network switches you are using? Consumer, SMB, enterprise ones, etc.?

Have you checked the logs / stats from the switch or are you observing this just from the VMS client?

Btw, setting an HD feed to 8Mb/s CBR strikes me as overkill. 4Mb/s for 1080p/30fps is closer to the industry average.

IJ
Ian Johnston
Mar 03, 2013

I agree with John; 4Mbps is more normal for 1080p30 nowadays. Main Profile helps keep bandwidth more consistent, as does a broadcast-grade encoder (like Ambarella).

Some of the mfgs that have 'rolled their own' encoders will have higher bandwidths than normal (or worse video for the same bitrate). Modern, professional-grade cameras also now have pretty good temporal noise filtering, which helps keep bitrates lower in low-light conditions.

Setting H.264 to VBR in "real world" scenarios will help, given that when there is no motion the stream can settle down to a quiescent state, which leaves more bandwidth budget for other ports on the switch.
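To put rough numbers on that effect, here's an illustrative sketch; the quiescent bitrate and motion duty cycle below are assumptions for the example, not measurements:

```python
# Illustrative CBR vs VBR aggregate bandwidth on one switch.
# Bitrates and motion duty cycle below are assumed, not measured.
cameras = 10
cbr_mbps = 8.0        # CBR: constant, motion or not
vbr_quiescent = 1.5   # assumed VBR bitrate with no motion
vbr_peak = 8.0        # assumed VBR cap during heavy motion
motion_duty = 0.2     # assume motion in view ~20% of the time

cbr_total = cameras * cbr_mbps
vbr_avg = cameras * (motion_duty * vbr_peak
                     + (1 - motion_duty) * vbr_quiescent)

print(f"CBR aggregate: {cbr_total:.0f}Mbps")         # 80Mbps
print(f"VBR average aggregate: {vbr_avg:.0f}Mbps")   # 28Mbps
```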

But enough about that, the question was regarding network gear.

Consumer grade switches 'saturate' pretty quickly - especially with all the ports running at a constant high data rate.

They're really designed for more intermittent data, like that of web surfing and the occasional file transfer over the network. Every penny has been squeezed out of the cheaper switches and the manufacturers know that most of them sit idle anyway.

Facebook or IPVM does not generate that much data, and only 1 or 2 ports at most will be used for Netflix or other forms of high-bandwidth traffic. That, and the manufacturers don't care... Most consumers will never know how many packets are dropped by their crappy switch.

Gigabit switches are of course faster and have a higher speed backbone (internal network that connects all the ports together) but the primary gain is realized by having a faster port to the VMS / PC that is recording all the devices. More professional 10/100 switches will have gigabit uplinks that can be connected to the server.

One other thing to note is that H.264 has very asymmetric packets. IDR / I frames are much larger than P frames or B frames (not common in security). Because of this, H.264 can overwhelm a 10/100 switch due to the instantaneous bandwidth exceeding 50-60Mbps when an IDR frame is transmitted.

If the network is huffing and puffing, packets can be dropped from the IDR, which causes the most damage to the decoded stream. All subsequent P frames will have errors, causing tearing in the stream. Normal encoders output 1 IDR frame and 29 P frames (1 IDR every second for 1080p30). If the IDR is damaged, it will take a second for everything to clear when the next IDR is delivered. Some encoders stretch out the IDR interval, which lowers the overall bitrate but makes the stream more prone to packet loss and decode errors.
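That burst figure is easy to sanity-check. A rough sketch, where the 10:1 IDR-to-P size ratio is an assumption for illustration (it varies by encoder and scene):

```python
# Instantaneous bandwidth of an IDR frame in an 8Mbps 1080p30 stream.
# Assumes 1 IDR + 29 P frames per second and an IDR ~10x a P frame.
avg_mbps = 8.0
fps = 30
idr_to_p_ratio = 10.0   # assumed size ratio, varies by encoder/scene

# Solve: idr + 29 * p = avg_mbps, with idr = ratio * p
p_mbits = avg_mbps / (idr_to_p_ratio + (fps - 1))
idr_mbits = idr_to_p_ratio * p_mbits

# The IDR is sent within one frame interval (1/30s), so its
# instantaneous rate is its size divided by that interval.
print(f"IDR size: {idr_mbits:.2f}Mb")
print(f"Instantaneous rate: {idr_mbits * fps:.0f}Mbps")   # ~62Mbps
```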

Gigabit switches can handle spikes of traffic better and will usually drop fewer frames as a result. Most 'consumer' Gigabit switches, however, can't do 'line speed' (1Gbps) and will max out at around 500-600Mbps... still plenty fast for most security applications.

It should be noted that server PCs also have a tremendous limitation on the 'line speed' rating of their Network Interface Cards (NICs). Cheap NICs and cheap motherboards will have a pretty low sustained throughput on the internal bus (usually PCI Express x1) and could actually be the limiting factor in this case.

There’s a reason why IT depts shell out huge dollars for enterprise class switches and servers. You really do get what you pay for.

All that said, you should be able to get more than 5 modern 1080p30 cameras on one switch at one time.

Cheers,

Ian.

IJ
Ian Johnston
Mar 03, 2013

Sorry, quick addendum to my last reply.

It also matters whether the video is transmitted via TCP or UDP. TCP has the benefit of being able to retransmit the packet if it gets lost or dropped, *but* it can also get overwhelmed quickly and can get 'congested' with very high amounts of data flowing through the stack - especially if you're on a wireless network or a crappy switch.

In comparison, UDP is 'fire and forget'. If the packet gets lost, no one cares, but it also doesn't tend to clutter up the network with a bunch of re-transmit attempts and acknowledgements sitting in a buffer, waiting endlessly to get back to the camera.
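If you want to compare the two transports against a given camera, one low-effort way is to pull the same RTSP stream over each with ffmpeg and watch the error output. A sketch, assuming ffmpeg is installed and using a hypothetical camera URL:

```python
# Pull the same RTSP stream over TCP and then UDP with ffmpeg.
# Requires ffmpeg on PATH; the camera URL below is a placeholder.
import subprocess

CAMERA_URL = "rtsp://192.168.1.100/stream1"  # hypothetical address

for transport in ("tcp", "udp"):
    print(f"--- {transport.upper()} ---")
    subprocess.run([
        "ffmpeg", "-rtsp_transport", transport,
        "-i", CAMERA_URL,
        "-t", "30",           # sample 30 seconds
        "-f", "null", "-",    # decode and discard; errors go to stderr
    ])
```

Decode errors or corrupt-frame warnings that show up in the UDP run but not the TCP run point to packet loss on the path.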

That buffering behavior leads to a modern problem called "buffer bloat".

Steve Gibson from GRC and Security Now did a great podcast (episode 345) on the problem and how it's the scourge of modern-day switches and Ethernet stacks.

I can't recommend his podcast highly enough; it explains in detail the entire network stack, security vulnerabilities, and the current state of affairs in the computing world.

Cheers,

Ian.

JH
John Honovich
Mar 03, 2013
IPVM

Hi Ian, great feedback. One question - how much of the TCP vs UDP decision is made by the camera or VMS? For instance, with your cameras, do you default to TCP or UDP? Do VMSes generally stay with that?

BH
Bohan Huang
Mar 03, 2013

Yes, it is a consumer switch (TP-Link TL-SF1008P), but the specs claim 1.6Gbps (i.e. wire-speed) backplane bandwidth.

My issue here is that a 100Mbps switch based on a mainstream Realtek RTL8309G wire-speed chipset, which measures 96Mbps in network tests like iperf, fails to push 50Mbps when subjected to multiple (here 10) streams of video.
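For anyone who wants to reproduce the multi-stream case with iperf itself rather than a single flow, something like the sketch below works; it assumes iperf is installed and an `iperf -s` server is already running behind the switch at a placeholder address:

```python
# Compare single-flow vs multi-flow throughput through the switch.
# Assumes `iperf -s` is already running on the machine at SERVER.
import subprocess

SERVER = "192.168.1.10"  # hypothetical iperf server behind the switch

for parallel in (1, 10):
    print(f"--- {parallel} parallel stream(s) ---")
    subprocess.run(["iperf", "-c", SERVER, "-P", str(parallel), "-t", "60"])
```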

The 8Mb/s is overkill; however, for best picture quality one would set a medium VBR quality level with a cap of ~8Mbps, and it's very embarrassing when the camera becomes choppy when there is a lot of motion.

Yes, we could use SMB-level gigabit PoE switches, but the issue here is the vast discrepancy: the switch is only able to sustain 50% of its rated throughput when there are multiple streams of data.

Also, what if I get a fancy expensive managed switch and experience the same problems?

Overspecifying jobs is of course a fast and reliable way of preventing problems, but one would like to (for the sake of maintaining competitiveness) specify only what is required. For me this means designing towards 70% switch bandwidth utilisation, just as the average IT consultant would.

------

My main concern is how the number of data streams affects the total bandwidth that can be used. I postulate that the relationship may end up being something like this:

1 stream - can use 70% of switch bandwidth

10 streams - can use 40% of switch bandwidth

20 streams - can use only 30% of switch bandwidth

If this is the case, then consider specifying a 32-channel HD install on a gigabit network, with 9Mbps allocated to each camera. If we can only rely on being able to use 30% of total bandwidth (we will be doing 64 streams in this case, so we are also assuming the switch is "twice as strong" as the TP-Link above), we only have a total of 300Mbps and:

300Mbps / 32 ≈ 9.4Mbps per camera. So does that make sense - that you should only have 32 channels of HD on one gigabit network?
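Written out as code, the postulate looks like this; the utilization fractions are my guesses from the list above, not measured values:

```python
# Postulated stream-count vs usable-bandwidth relationship.
# The utilization fractions are guesses, not measured data.
def usable_fraction(streams):
    if streams <= 1:
        return 0.70
    if streams <= 10:
        return 0.40
    return 0.30

rated_mbps = 1000        # gigabit switch
channels = 32
streams = channels * 2   # main + sub stream per camera

usable = rated_mbps * usable_fraction(streams)
print(f"Usable: {usable:.0f}Mbps")                  # 300Mbps
print(f"Per camera: {usable / channels:.1f}Mbps")   # 9.4Mbps
```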

-----

I am trying to build up more knowledge to assist our partners in quoting for bigger jobs. We are a relatively new vendor that supplies mainly to integrators migrating from analogue to HD. Their end-user customers are coming from an era of instant-response, full-motion analogue video, and the last thing we want is to tell them the IP video is going to stutter sometimes. We want a framework of network design metrics/guidelines where we can look the integrator in the eye and say "this will work with X channels of Y megapixel at 30FPS" when handing them a switch. Ideally we want to be able to do this without having to test EVERY switch combination we supply in EVERY camera combination/scenario.

We recently got burned supplying a 16-channel 100Mbps PoE switch (an SMB model with a healthy 350W power budget) for a 14-channel 720p job where we planned to run the cameras at 25FPS (Australia/PAL)/720p/3Mbps CBR main stream and 1Mbps D1/25FPS CBR sub stream (we could only squeeze 12TB of storage into the budget and the customer wanted a month of continuous recording). This amounts to 56Mbps, which is under the 70Mbps design cap.
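As an aside, the storage side of that budget can be sanity-checked the same way; a rough sketch, assuming only the 3Mbps main streams are written to disk:

```python
# Rough recording-duration estimate for the 14-camera 720p job.
# Assumes only the 3Mbps main streams are recorded continuously.
cameras = 14
main_mbps = 3.0
capacity_tb = 12.0

write_rate_mb_per_s = cameras * main_mbps / 8       # ~5.25 MB/s
seconds = capacity_tb * 1e6 / write_rate_mb_per_s   # TB -> MB
print(f"Approx. days of storage: {seconds / 86400:.0f}")  # ~26 days
```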

Upon installation the integrator called me up to inform me video was stuttering and freezing for seconds at a time on about 5 channels. After remotely checking the bandwidth consumption (unsurprisingly a bit under 56Mbps) I scratched my head a few times, spoke to our VMS provider (Linovision NVR+) and tried various things. I then found that I could get reliable video by turning off the sub stream (cutting bandwidth to 42Mbps, but I think the crucial difference was decreasing the number of streams from 28 to 14). However, this taxes the CPU a bit too much, as the live preview is on 24/7/365 (normally it uses the D1 sub stream for 16-channel live preview).

The solution was finally to change the switching to 2 x 8-port 100Mbps PoE switches connected to a 5-port gigabit switch (the server had a gigabit NIC), while ALSO reducing the sub stream bitrate to 512kbps/12FPS/D1, which the client accepted - whew! So in other words each 100Mbps switch was only pushing 14 streams at a MEASLY 25Mbps (total 49Mbps split between the 2 100Mbps switches). A very disappointing result, network-performance-wise. It is not easy for us to obtain 16-port gigabit PoE switches at reasonable prices within 1 day (for some reason the job had to be commissioned the next day), which is the main reason we went with the triple-switch setup. I was very shaken by the whole experience and have decided to start stocking gigabit PoE switches exclusively - but I can't help but think that we will start facing the same problems when doing 32ch+ 1080p systems that demand 300Mbps+ peak bandwidth.

Anyone with similar experiences?

BH
Bohan Huang
Mar 03, 2013

Sorry I took too long with my post!

Our cameras and VMS use TCP.

IJ
Ian Johnston
Mar 03, 2013

Great information!

Netperf is your best friend. It's a great tool to measure bandwidth from point to point. We even ported it over to our camera to do testing across wireless networks.
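For reference, a basic point-to-point netperf run looks something like the sketch below; it assumes netperf is installed, `netserver` is running on the far end, and the address is a placeholder:

```python
# Point-to-point TCP throughput test with netperf.
# Assumes `netserver` is running on the target machine.
import subprocess

TARGET = "192.168.1.10"  # hypothetical host on the far side of the switch

# 60-second TCP_STREAM test from this machine to TARGET
subprocess.run(["netperf", "-H", TARGET, "-t", "TCP_STREAM", "-l", "60"])
```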

Be forewarned that some of the switch manufacturers aren't exactly the most honest when reporting backplane speeds or wire-speed testing. I have tested hundreds of switches over my career (I was into network printing before I switched into security video 7 years ago) and they varied dramatically. You have to load each port up one by one and see how the switch behaves under load. Personally, I've had mixed luck with TP-Link switches, but they're pretty cheap.

Netperf can also be a bit misleading, because multiple streams from the same device come with multiple TCP connections jabbering on… This further contributes to congestion problems.

Sometimes it's not the ability for the packet to make it across the network, but for the acknowledgement of the packet to get back to the camera in time. Things get retransmitted and then everything goes bad very quickly.

Embedded Linux distributions also tend to be very lazy when dealing with congestion and aren’t always very smart about handling multiple connections and high bandwidth streams out of the device.

John: VMS companies almost always dictate which transmission technique is used and it can vary depending on the driver for the particular cameras. TCP is the most common. Exacq does allow a modifier which we can set (transport=UDP) which can be very helpful - especially across wireless mesh networks where congestion and latencies can be death.

Wireshark is also your best friend when attempting to diagnose these types of problems. By looking at the network stream as it flows to the server, you can tell if there are many re-transmits or errors on the network.
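On the command line, Wireshark's tshark can pick out those re-transmits directly; a sketch, with the interface name and camera address as placeholders for your setup:

```python
# Show TCP retransmissions from a camera using tshark (Wireshark CLI).
# Interface and camera IP below are placeholders for your setup.
import subprocess

subprocess.run([
    "tshark", "-i", "eth0",   # capture interface
    "-a", "duration:60",      # stop after 60 seconds
    "-Y", "tcp.analysis.retransmission && ip.src == 192.168.1.100",
])
```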

Sometimes the VMS will log H.264 decode errors, or dropped connections, but this can be spotty.

As always, your mileage will vary and it is a pretty complicated problem with a LOT of variables involved.

Cheers,

Ian.

BH
Bohan Huang
Mar 03, 2013

Thanks Ian

Ouch! Well, I guess there really is no free lunch - back to testing every setup for validation and certification.

Have you guys noticed significant differences (i.e. 15%+) between Intel, Realtek, Marvell and Broadcom NICs on PCI-E x1 buses, as you get on entry-level cards and on motherboards?

JH
John Honovich
Mar 03, 2013
IPVM

Bohan, I think you are asking for trouble using a $52 switch, regardless of what the manufacturer specs are. In the past, we used a Trendnet one with similar specs and pricing and had trouble with even moderate load. I assume they are simply not built to handle sustained high throughput.

We typically use Cisco 300 series switches and despite testing lots of cameras simultaneously in adverse conditions (e.g., VBR + low light), we do not see those types of issues.

For other integrator experiences / preferences, see our Favorite Network Switches survey results.

Btw, what was the make/model of the 16 channel 100Mbps PoE switch that recently burned you?

MI
Matt Ion
Mar 03, 2013

^Agree with John on the Cisco 300-series - we've been using these, and their forerunners (starting with the SFE-1000P), for several years now and found them rock-solid.

On a recent job, I was ready to spec a model with 24 10/100 PoE ports and four GbE ports... then I discovered the all-GbE variant (SG300-28P) had something like 5-6 times the backplane rating at less than $100 more. Went with three of these, and they've been stellar so far.

There's overkill, and then there's overkill, but when you need reliable video, this is definitely somewhere that "better too much than not enough" applies.

JH
Jerome Humery
Dec 24, 2014

I used an SG300-28MP in my first commercial installation and started getting choppy video after just a few cameras were installed. It was seemingly solved by updating DW Spectrum VMS to the latest version, but then the choppiness came back a couple of weeks later. While I was discussing the issue with DW, the tech said that the SG300-28MP switch was "geared towards voip and was low on buffer size and that could be the issue", which is funny given the podcast mentioned above about "buffer bloat".

I'm currently researching this chop issue further, so this discussion (and others) caught my attention. I'm glad to see that others have chosen this switch and had good results because I was starting to wonder if I had made a bad choice.

HB
Harris Bond
Mar 03, 2013

Bohan, since you have the test environment ready, if I were you I would try a few other basic switches to see if the problem is the TP-Link.

I would try the basic Cisco ones (100 series) like the SF100D-08P or its gigabit equivalent, the SG100D-08P, which are also quite cheap (maybe not as cheap as TP-Link, though).

JA
John Alemparte
Apr 15, 2013

The main reason I would almost never spec an unmanaged switch into a job is that there typically is a HUGE drop-off in performance on them. Unmanaged switches are designed for small work groups and SOHO applications, not the kinds of constant high-bandwidth traffic you see in IP video and voice. Even in my SOHO I have a Cisco 3750 doing the switching, but I am far from a typical user.

As Harris mentioned, even in smaller applications I would use, at minimum, a 24-port "smart" PoE switch with Gigabit uplink ports. Use the 10/100 ports for the cameras and small nodes (printers, phones, etc.), and use the Gig ports for connecting to servers and other switches on the LAN.

Bottom line: If you need 24 ports, you need management.

U
Undisclosed
Apr 16, 2013

It's always better to use a managed switch for HD/megapixel cameras; then we can manage each port's bandwidth limit as needed...

MI
Matt Ion
Apr 16, 2013

The other benefit to a managed PoE switch is the ability to power-cycle the cameras remotely. I've found this useful in the past.

SR
Scott Reames
Apr 16, 2013

Great information! Now I kind of understand what issues to look for.

Thanks!
