Anixter: H.264 Video 'Pretty Much Unusable' With Cat 5e

Anixter has taken a firm, though debatable, stand in IP video cabling.

Their video shows H.264, in their own words, being 'pretty much unusable' with Cat 5e:

The only tech detail given, besides H.264 on Cat 5e, is that the cable run is 100 meters.

For IPVM's analysis, see: Cat 5e vs. Cat 6 for IP Cameras? We recommend Cat 5e for general IP video / camera runs.


What do you think?

One of the most absurd allegations from Anixter ever

Hmm... John, can you try to reproduce the same error in your lab, using a 100 meter cable?

Also, when you test (if you do), be sure to use UDP for transmission, not TCP. TCP won't break the image as we are observing in the video, but UDP will if packets are lost.
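To make that TCP/UDP difference concrete, here is a tiny simulation (all numbers are my assumptions, not from the video: 10 packets per frame, 1% random packet loss) of how many frames each transport would corrupt:

```python
import random

random.seed(1)  # reproducible run

PACKETS_PER_FRAME = 10  # assumed packetization (~1500-byte MTU)
LOSS_RATE = 0.01        # assumed 1% random packet loss
FRAMES = 10_000

# UDP: a lost packet is simply gone, so any loss corrupts that frame.
corrupted_udp = sum(
    any(random.random() < LOSS_RATE for _ in range(PACKETS_PER_FRAME))
    for _ in range(FRAMES)
)

# TCP: lost packets are retransmitted, so frames arrive intact --
# the cost shows up as latency, not broken images.
corrupted_tcp = 0

print(f"UDP: {corrupted_udp / FRAMES:.1%} of frames corrupted")
print(f"TCP: {corrupted_tcp / FRAMES:.1%} of frames corrupted")
```

With these assumed numbers, roughly one frame in ten is touched by a loss over UDP, which is more than enough to produce the kind of macroblocking shown in the video.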

Eric, correct me if I'm wrong, but the vast majority of systems use TCP only, correct? If these errors are only found in systems using UDP, that makes the Anixter claim even more confounding, as it would simply never impact most users.

Well.. yes :)

The thing is, I have never tested a 100 meter Cat 5e cable with an IP camera... You can definitely use Cat 5e cables in most installations; you just have to do it the right way, as you need to with Cat 6 anyway. If you do a very, very bad installation, it is not the category of the cable that will matter...

Anyway, I think Anixter is comparing apples to oranges, because they are showing the worst-case scenario with Cat 5e while comparing it to the best-case scenario with Cat 6.

But I'm now curious to see if that really happens with a 100 meter Cat 5e cable... and how it would compare to a 100 meter Cat 6 cable...

"But I'm now curious to see if that really happens with a 100 meter Cat 5e cable... and how it would compare to a 100 meter Cat 6 cable"

The length of the cable really doesn't matter here. The primary factor would be the proximity of the cable to interference sources. You could just as easily have a 100 meter Cat 5e cable with no interference and a 100 meter Cat 6 cable running alongside high-voltage cables and motors or whatever, and have the exact opposite results of what was shown.

I think this video from Anixter is very misleading and draws several bad conclusions to the point that it's almost malicious of them to post it and pass it off as "knowledge". Crap like this is why distributors like Anixter get stuck with a lot of uneducated customers using voodoo logic while the more savvy customers avoid them.

Next week's Anixter webcast: "The world is actually flat."

So far 2.16 Anixter employees have voted "Unusable".

Integrators the world over are going, "Huh? But I use Cat 5e every damn day and it works fine, Anixter." right now.

I voted "Pretty Much Unusable" because I'm in a contrary mood today, LOL.

Actually, I misread the answer box - I would've voted "Generally Usable" if I had my new glasses.

This here series is called "Disaster Du Jour".

A disaster a day video is a tall order for anybody, I sure hope they don't let the quality slip!

Désastre Du Jour

Not so sure their other videos are much better.

Sure, laugh it up!

But this video was created due to a significant difference in cat 6 and cat5e cable specs. In this case, gross margin.

One question for the experts, so I can learn what all the deception is:

Do we not believe that 100m of Cat5e cable would typically have ~1% CRC errors?

Or do we not believe that such artifacts would typically occur even if we had ~1% CRC errors?

Or both?
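On the second question, a quick back-of-the-envelope check. Assuming a frame spans about 10 packets (a ~15 KB frame at a 1500-byte MTU; both numbers are my assumption, not from the video), a ~1% packet error rate corrupts far more than 1% of frames:

```python
def frame_corruption_probability(p: float, n: int) -> float:
    """Chance that at least one of a frame's n packets has an error,
    given per-packet error rate p (errors assumed independent)."""
    return 1 - (1 - p) ** n

p = 0.01  # ~1% CRC/packet errors, as asked above
n = 10    # assumed packets per frame

print(f"{frame_corruption_probability(p, n):.1%} of frames affected")
```

So ~1% packet errors would plausibly touch nearly 10% of frames, and since a hit on an I frame wrecks the rest of the GOP, visible artifacts would not be surprising if the error rate really were that high.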

I agree with Richard. It's easy to see the point of this video: they want to sell more Cat 6 because they make more money on it.

How exactly do you swap out just the cable by unplugging/plugging just one end?

By doing that don't you necessarily involve a new/different Ethernet port in the path somewhere? Why not just swap both ends, if you want a clean test of the cable only?

I noticed that part as well. I think the whole video was basically staged and exaggerated.

Maybe it's ok after all, port 5 seems like it's relatively unmelted.

Ports 4 and 6 must be from the episode on the dangers of running MJPEG and Passive POE over Cat3.

I haven't come across any IP cameras that have a 1 Gb NIC (all are 100 Mb).

Assuming the cable runs are under 100m and not near interference sources, Cat 5 will achieve the same throughput. Cat 5e just allows gigabit speeds (which the IP camera NIC can't do anyway) and Cat 6a allows 10 Gb.

If you're noticing a difference in performance with the different cables, it would be due to interference.

FYI, it appears many new 4K cameras will support 10/100/1000. For example, the new Axis 4K spec: "AXIS P1428–E: RJ-45 10BASE-T/100BASE-TX/1000BASE-T PoE" Perhaps, theoretically, you might need 1000 if you were transmitting multiple MJPEG streams but that is, of course, an edge case. Also, Axis 4K bandwidth usage is quite efficient, as our test results show.

I can understand 4K needing 1Gb, but I haven't come across any 4K cameras yet ;)

4K cameras are available at Anixter right now!

All you need are Cat 7 cables ;)

They can use Cat 6 for distances less than 40 meters... and update the general folks on the results... :-)

I have voted "Usable": there is fire in the video, sure, but not that much fire for the quantity of smoke.

The two cables are not equal: Cat 6 is fully certified for 1 GbE, which means its actual specification has more margin against crosstalk and network errors. The key to remember is that all the components have to be rated for Cat 6 to complete the 100 meter horizontal run at 1 GbE.

In a nutshell: "Cat 6 is tested to a higher standard and is capable of 10 Gbps over short distances (<40 meters)."

In surveillance, most cameras have 10/100 ports, so expecting camera OEMs to fit a Cat 6 connector is doubtful, and camera spec sheets always say 10/100BASE-T. So I would not recommend Cat 6, because the whole system is not rated for Cat 6; using Cat 6 is a waste of money.

Now, because IPVM has a worldwide readership: for people in countries where standards are not verified and cost is the only criterion, is a substandard Cat 6 better than a substandard Cat 5e? (I have had cases where a 40 meter horizontal run of Cat 5e could not perform.)

From my collection of short notes collected over the years:

--> The general difference between Category 5e and Category 6 is transmission performance: available bandwidth increases from 100 MHz for Category 5e to 250 MHz for Category 6.

--> This includes better insertion loss, near end crosstalk (NEXT), return loss, and equal level far end crosstalk (ELFEXT).

These improvements provide a higher signal-to-noise ratio, allowing higher reliability for current applications and higher data rates for future applications.

--> ANSI/TIA/EIA-568-B.1 says the consolidation point should be located at least 15 meters away from the telecommunications room to reduce the effect of connectors in close proximity.

--> Although category 6 and category 5e connectors may look alike, category 6 connectors have much better transmission performance.

"at 100 MHz, NEXT of a category 5e connector is 43 decibels (dB), while NEXT of a category 6 connector is 54 dB. This means that a cat6 connector couples about 1/12 of the power that a cat5e connector couples from one pair to another pair. Conversely, one can say that a category 6 connector is 12 times less “noisy” compared to a category 5e connector." -- quoted from website information.
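The "1/12 the power" figure in that quote checks out against the dB numbers; a dB difference converts to a power ratio as 10^(dB/10):

```python
# NEXT is a loss figure: higher dB means *less* power coupled across pairs.
next_cat5e_db = 43  # from the quote above, at 100 MHz
next_cat6_db = 54

ratio = 10 ** ((next_cat6_db - next_cat5e_db) / 10)
print(f"Cat 6 connector couples ~1/{ratio:.1f} the power of a Cat 5e one")
```

10^(11/10) ≈ 12.6, hence the "about 12 times less noisy" claim.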

CNET has a video which gives quite a clear explanation. "If you want to be future-proof" -- marketing used by cable companies.

Wow, this has been an interesting read. Having worked for Avigilon in the past, I have designed systems in Europe, specified Cat 5e from camera to switch, and never seen any issues. The 29MP camera has one of the highest bandwidths at 76.8 Mbps (although this was generally run at around 21.9 Mbps). The most common high-bandwidth camera would be the 5MP JPEG2000 unit, which has a max bandwidth of 120 Mbps, although commonly the max would be around 48 Mbps. Cat 5e is rated at 100 Mbps but has been tested up to 1000 Mbps, so I do not feel that bandwidth would be an issue. Avigilon streams video using RTP and RTCP on UDP ports, and again there is no issue with packet loss.

So Anixter is saying H.264 on Cat 5e fails? Well, that would be less bandwidth. UDP can also be used with H.264 to transmit video, so that can't be it. Maybe the I-frame interval was set unrealistically in the Anixter demo? Considering there are large, complex systems using H.264 cameras on Cat 5e that are not showing any issues, I would like to know more from Anixter about how the demo was set up and what packet data was being dropped.


I think your question about I-frame interval is spot on. I tried to time the I-frames on the video and it appears the interval is at least 8-10 seconds. Obviously the video is too brief to get an accurate measurement but that is what it appears to be.

On another note, raise your hands - how many here would actually run video 100 meters on Cat-anything? I'm a firm believer that attempting to utilize stated limits is a recipe for disaster. I wouldn't even think of trying to push more than 70Mbps over a 100Mbps network, nor would I try to stream video more than 90 meters on a single copper run, whether the run is CAT5e, CAT6 or CAT7.

I have worked at Anixter, and their technical dept. on cable is strong and level-headed.

I am quite surprised by this video. Clearly something is amiss, or we are missing the subtext.

Someone from Anixter's technical dept. should clarify the video... it would add value to the debate.

Serioooooously! ? !

What are the theoretical reasons why? The cable carries Ethernet, and from the last time I read, Cat 5e is good enough for most IP cameras. I have never seen a camera go over 40 Mb/s using any codec, and H.264 actually lowers the bandwidth...


Wrong on all accounts... some could make a case for this being fraudulent!

Wow, impressive sleuthing (and ribbing) by the IPVM crew!

An explanation which doesn't require anyone to be unusually deceptive:

  1. Tech is troubleshooting degraded video problem, like shown above, for camera a.
  2. Tech notices symptom absent with identical camera b.
  3. Tech notices the only difference between a and b is cable grade.
  4. Tech replaces cable on a with cable type on b, problem solved.
  5. Tech tells boss.
  6. Boss makes video.

Though whether the video was made despite the cumulative effect of a ten-second GOP, or because of it, would be an open question.

Not only perfectly OK, with the right adaptor devices you can run up to 600m (1800ft).

This has certainly got a lot of people talking! Are Anixter going to comment on this? Just to continue from where I left off: although I mentioned Cat 5e can achieve near 1000 Mbps, it has not been certified to do this, and I would always recommend Cat 6 or fibre between switches. As cameras will very rarely be over 80 Mbps, and commonly no more than 12 Mbps, I think everyone can feel safe using Cat 5e. Where there are issues on site with crosstalk, or there is something that could cause an issue with signal power, then it would be sensible to use Cat 6. My preference is to install the cable where it won't have issues with crosstalk. In my experience, the main issues I have seen with H.264 are with video decompression: installers have made the mistake of going for a cheap video workstation that struggles to handle the H.264 streams and decompress them into an image for display. The only issues on networks arise when inadequate, not-fit-for-purpose network switches have been installed.

Thanks Luis, so this confirms my point - Cat 5e was developed for Gigabit Ethernet but is not certified for gigabit. As mentioned, cameras are not going to get near 100 Mbps, so Cat 5e is fine for the job.

Paul, no, I don't think it really confirms your point, as I disagree with your low number citations, but I'm not going to get into a drawn out conversation about it. If you don't agree, then we agree to disagree. Thanks.

Following the links that Luis kindly provided, it looks like 1000BASE-T is certified for Cat 5e but 1000BASE-TX is not. Is that where you're coming from?

The people who make more money on cat 6 say cat 5e is bad. How could that be?

If you had the right cooked network (good call on the only-one-end-changed comment) you can do anything. If you select the correct crappy camera I think this would be easier. And if you follow the proper trunkslammer principles and never check the network you might get to this.

Good to know their technical department has no clue and it's provable. Sorry, Anixter comment dude, it looks like their tech support is crap. Not an easy reputation deficit to get over if they release videos like this.

So how is this new old guy they're bringing in as CEO gonna help this? If they just rolled some old fogey from the other end of the CCTV retirement home into the executive suite this sort of silliness will likely continue.

I cannot fathom how the 'new old guy' they are bringing in is going to help. From what we have seen / heard with Tri-Ed, their overall IP knowledge is significantly worse than Anixter. Indeed, Anixter's key pitch is that they are going to bring their IP 'expertise' to Tri-Ed....

I guess maybe Anixter should get a little help ?

Just an idea :)

This is 100% grade-A poppycock. A knowledge "drop" seems pretty aptly named. :)

Cat 5E works perfectly well and I'm willing to bet that most existing (and fully functional) installations are with Cat 5E.

As an aside, you will see more and more Gigabit Ethernet IP cameras on the market soon. The cost of going 10/100/1000 is getting smaller and smaller as more commodity items switch over.

There are real benefits to be gained from switching. Many of the problems that he spoke of in the video are related to poor infrastructure choices, but Cat 5e vs 6 is rarely the problem.

H.264 is very 'bursty' when it communicates. I frames are big but few, P frames are small but numerous. The 'instantaneous' bandwidth of the stream can easily exceed 100Mbps even for a 4Mbps stream (heavy emphasis on instantaneous). Cameras are routinely asked to provide multiple concurrent streams out of the camera, so you have this mishmash of I and P frames that are all trying to get through.

What happens is (simplistically speaking) you get a flurry of P frames trying to get out the same pipe, usually over a pretty congested network link. Switches are REALLY cheap these days and do a pretty crappy job of handling a bunch of traffic on their internal backbone. The P frames usually manage to fight their way through and arrive, more or less in the same order and without too much time delayed between the frames.

Then you get a giant I frame that comes along like Shamu on a waterslide. The P frames start to pile up in the camera, causing buffering issues. The aggregate bitrate is far less than 10/100, but when the I frames go through, it can take longer than expected to make it from point A to point B.

With just 1 stream and a 1080P30 camera, 10/100 is usually fine. With 8 streams all going into a $30 switch, you can see problems; especially when the switch *itself* only has a 10/100 connection to the world.
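To put rough numbers on that burstiness (all values assumed for illustration: 4 Mbps average stream, 30 fps, 1-second GOP, I frames ~10x the size of P frames):

```python
STREAM_BPS = 4_000_000   # assumed 4 Mbps average bitrate
FPS = 30
GOP = 30                 # 1-second GOP: 1 I frame + 29 P frames
I_TO_P = 10              # assume an I frame is ~10x a P frame

bits_per_gop = STREAM_BPS * GOP / FPS
p_bits = bits_per_gop / (I_TO_P + (GOP - 1))  # solve I + 29P = GOP bits
i_bits = I_TO_P * p_bits

for link_bps, name in [(100e6, "100 Mbps"), (1e9, "1 Gbps")]:
    print(f"{name}: I frame ({i_bits / 8 / 1024:.0f} KiB) "
          f"occupies the link for {i_bits / link_bps * 1e3:.1f} ms")
```

So even a modest 4 Mbps stream can hog a 100 Mbps link for ~10 ms out of every 33 ms frame interval while its I frame serializes; with 8 cameras those bursts collide, which is exactly where a cheap switch starts dropping packets.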

The packets all get congested, arrive out of order, or go MIA, and you end up with 'jittery' or torn video. TCP/IP does a lot to make all these problems invisible to the user, but it's a major problem.

H.264 is very allergic to dropped frames, so tearing and other badness is not hard to achieve on a congested network.

Consumer applications like YouTube and Netflix solve this by buffering a large time amount of data, so it really doesn't matter if it took 1-2 seconds for the frame of video to be repaired using TCP/IP retransmissions. By the time the data gets sent to the video player, it's all properly ordered and intact.

Security applications try very hard to reduce the amount of latency on the video.

Buffering creates latency. As such, buffers on the playback side (and sending side) are usually as small as possible. Any problems with getting the packets out of order and/or missing, aren't given enough time to be fixed, which results in playback errors.

By going to gigabit, those I frames are allowed to flow through the system like whale $#!T in an ice floe, so there’s less congestion and a lot less problems with dropped packets and other ‘weirdness’ to the system.

Crappy switches are far more of a menace to IP video than Cat 5E.

Well said. As you say, there is no major issue with Cat 5e from a single camera to a switch. The main area for problems is the network switch and the design of the network. Installers can sometimes be pushed to save money and buy cheaper switches, which can often create a bottleneck. In the past, I often suggested installers buy switches that had all 1 Gb ports, and was often asked why, when the camera is only 100 Mbps. The simple answer was: if all the ports are 1 Gb, then we know the backbone of the switch will handle the throughput that is thrown at it! All substandard switches should be banned from CCTV installs!

"With 8 streams all going into a $30 switch"

Ian, curious, who is doing 8 streams? It seems like an uncommon use case.

Could be 8 cameras going into a cheap 16 port switch (though $30 might be a touch on the low-end).

John, my comment here was from the switch's point of view that it was receiving 8 streams, from 8 different video cameras. The point being is it's astonishingly easy to overwhelm a cheap switch with a ton of frequent and bursty packets.

Realistically, in modern times, there are usually 2 streams (hi-res and low-res) coming out of the camera. There can be more than one client/server connected to the camera simultaneously, but that's a lot less common. So with 8 cameras, that would be 16 concurrent video streams flowing through the switch.

Ok, I was not sure whether the 8 stream reference meant 1 camera.

Then you get a giant I frame that comes along like Shamu on a waterslide.

I don't totally agree with your logic here. There is still an MTU in Ethernet, so any given frame is only *so* big. That I frame might be broken into a series of packets, which can of course increase the probability of an error with any one packet, but for the most part modern Ethernet switches can easily handle the typical once-per-second I frame of a 2, 3 or 5MP camera.

There is of course a lowest-common-denominator factor in any network, and saving $200 on a switch might in fact cost you $1000 in the long run. If you want maximum performance from your network, don't be too cheap on the active components.

You're absolutely correct; all the streams get broken up into consistent MTU-sized chunks as they flow through the system. Abstracted one layer above, all the H.264 data has also been broken up into chunks as part of the RTP process. I have confusingly mixed MTU 'frames' and video 'frames' together in my haste to get my thoughts out.

In this case, having small MTU packet sizes hurts a LOT since you can have a bunch of 'non-critical' (for the time period) P frames leapfrogging the I frame that takes forever to wind its way out to the server. With all those frames buzzing around, there's a high likelihood of something going wrong. TCP/IP and Ethernet in general was designed not only to be fault resistant, but pretty much fault favorable. It doesn't take much for an Ethernet stack and/or a switch to drop a packet or two if it gets super busy.

If you corrupt an I frame, or it comes too late to the decoder, then all sorts of bad things happen. Your entire GOP (group of pictures) is destroyed and the encoder has to wait until the next I frame to sync back up.
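That cascading damage is easy to illustrate. In a simple IPPP... GOP, each frame depends on every frame before it, so the cost of a loss depends entirely on where in the GOP it lands (a toy model of my own, ignoring B frames and error concealment):

```python
def frames_lost(gop_size: int, lost_index: int) -> int:
    """Undecodable frames when frame `lost_index` of an IPPP... GOP is
    lost (index 0 is the I frame; each P frame needs all prior frames)."""
    return gop_size - lost_index

GOP = 30  # e.g. 1 second of video at 30 fps

print(frames_lost(GOP, 0))   # lose the I frame: whole GOP gone (30 frames)
print(frames_lost(GOP, 29))  # lose the final P frame: only 1 frame gone
```

On average, a random single loss in this model costs about half the GOP, which is why long GOPs (like the ~10-second interval suspected in the Anixter demo) make occasional packet loss look catastrophic.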

Buffering helps a lot in this case and makes the system considerably more fault tolerant. The genius of TCP/IP is that given enough time, the system will fill in the gaps and you'll have intact data in the video buffer.

TCP/IP adds a fair amount of overhead to the system however and can actually cause a "packet storm" to happen if there's a bunch of failures that need to be re-transmitted at one time. This is where 'small' packets hurt, since yes, they take less time to transmit (vs a Jumbo Gb ethernet MTU frame for example), but all those acknowledgements add up.

At some point in time as well, if the Ethernet "Stack" in the camera (part of Linux) gets enough congestion, it can effectively 'declare bankruptcy', drop everything and start over.

All badness for H.264 when combined with very little buffering.

UDP and its "fire and forget" methodology can paradoxically work better in a congested environment since it doesn't apply a lot of extra packets on top of the already struggling infrastructure. Sometimes too, just allowing a packet to drop, is better than having a massive section of video be lost as a result of a "congestive collapse".

TONS of $$ in the IT space are spent dealing with network congestion.

We have several H.264 cameras (in production) on runs at or near 100 meters on Cat 5e; we do not have problems. We test our connections prior to use and avoid getting to the 100 meter mark on any data connection, let alone video or VoIP. In my opinion, this has a lot to do with the quality of the network installation, not Cat 5e vs. 6. Maybe it depends on the camera and the quality of the NIC in the camera, but that is a guess. There are many variables, including terminations, patching, camera NICs, server NICs, etc. Again, we have not had a problem with cameras from different manufacturers. It's a pretty wide-open statement. But I don't sell wire. Data rate could be an important variable, as would the number of cameras on the overall network and the aggregate bandwidth.

No matter how you slice it, from personal experience, Cat 5e will carry video streams from 99.9999999% of cameras all day long (there could be some absurd camera on the market that streams 1 Gb/s at night :) ). I actually have some cameras on runs longer than 100 meters, and they work fine. It wasn't done on purpose, I must add: we have an industrial switch and a few of its ports went bad, so we used an 8-pin modular coupler and the darn camera worked without a hitch. Cameras are Vivotek, the switch is a Cisco SF-300 24MP. We used H.264 over Cat 5e cable.

So ....

Ha! What a joke. This is utter nonsense from Anixter. They must have a large inventory of Cat6 cable they're trying to get rid of.