Are You Running Twice As Much Cable As You Need?

Virtually all IP cameras have a single 100Mb Ethernet interface. Even current 4K cameras do not come with 1000Mb interfaces, AFAIK.

Yet 100Mbit Ethernet requires only 2 of the 4 pairs. Even with PoE, endspan switches prefer Alternative A, putting the power on the same pairs as the data.
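To make the pair split concrete, here's a minimal sketch (Python; pin numbers follow the usual 8P8C connector convention, and the two-camera mapping is my hypothetical illustration, not any standard):

```python
# Hypothetical pin map for splitting one 4-pair cable between two
# 100BASE-TX cameras. 100BASE-TX itself only uses the pairs on
# pins 1/2 (transmit) and 3/6 (receive); pins 4/5 and 7/8 sit idle.
CAMERA_A = {"TX+": 1, "TX-": 2, "RX+": 3, "RX-": 6}  # the standard pairs
CAMERA_B = {"TX+": 4, "TX-": 5, "RX+": 7, "RX-": 8}  # the normally idle pairs

# Note: PoE "Alternative A" (typical endspan) injects power on the
# data pairs, so camera B's power depends entirely on the splitter
# hardware -- this is exactly where the scheme gets fragile.
used = sorted(CAMERA_A.values()) + sorted(CAMERA_B.values())
print(sorted(used))  # all 8 conductors accounted for, none shared
```

That uses every conductor in the sheath, which is the whole appeal, and also shows the catch: neither camera has spare pairs left for anything else.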

So, for longer runs that begin on a panel and terminate at a plate, servicing multiple clustered cameras, couldn't we get 2 cameras on one cable?

Maybe there are compelling reasons that we leave half the copper in the cable idle. Perhaps technical ones, like crosstalk, that make it ill-advised? Or is it left for future expansion? If so, does anybody ever use the other half for another camera?

Is it ever a good idea?


If so, does anybody ever use the other half for another camera?

Only if they want to save a few cents and end up with an uncertifiable cable plant with limited options for future expandability.

A box of Cat5 cable is ~$120. What are you really going to save in exchange for a hack install?

Only if they want to save a few cents and end up with an uncertifiable cable plant with limited options for future expandability.

Great points, B! Those are definitely something to consider.

And you're right, for a few cents it doesn't make sense. But what about when a customer would like a second camera near one already placed, at the end of a long home run? What would you likely charge for the new cable, labor and certification? Ballpark.

As for the EIA certification of the long cable segment itself, there's no reason it couldn't be certified as long as it was terminated properly at both ends. To test the splitters themselves plus the long run plus panel/plates, you would need two additional splitters at the endpoints that would recombine the two halves back into one plug to connect to the tool and loopback. Of course you would remove the additional splitters in production, but one would expect that would only enhance performance. Still I concede that you might feel this test is dodgy....

As for the ISO/IEC 11801 standard, 2 pair TO configurations are allowed, as well as pair re-assignment, so I believe it could pass, for the proper 100Mb class. Would you agree or am I overlooking something?

As for the future expandability, that IS what I am talking about in this example, though I believe you are most likely speaking about future 1000Mb links. But since even 4K cameras don't require 1000Mb, it's likely going to be a few years or more before that kind of capacity is needed by one camera, whereas the customer could decide that they need 2 cameras tomorrow.

More importantly, there is no 'loss of future options' since the long cable itself is certified and can be used at any time in the future for 1000Mb Ethernet, if that is what is desired.

Maybe my discussion title is unnecessarily challenging, because I do understand the importance of standards and conformity. I was curious though if there were any technical reasons that would kill it before it even got to the standards debate.

When you first posted, I assumed you were talking about taking a single cable and punching/terminating it into two separate jacks or outlets. Thus making a real mess.

If you're talking about using splitters, that is maybe a little better, but then the cost of the splitters starts to eat into the cable savings.

If you're "running" cable, in the sense of actively doing the install, you should run a dedicated cable to EVERY location where a camera or other Ethernet device is planned to go in.

After the fact, if a customer wants a second camera next to an existing one, that's where using a splitter might make sense. You would hopefully explain that the solution is kind of a hack, and might limit some future options, cause erratic behavior, etc. But it's a quick test, and the worst-case is you run another cable anyway.

As long as power was available at the device end of the cable, I'd probably put in a 4-port switch before I'd use pair-splitting.

When you first posted, I assumed you were talking about taking a single cable and punching/terminating it into two separate jacks or outlets...

Yup, I was. Sorry, my bad.

Looking back, I guess what happened was that when you seemed to respond dismissively of the whole idea, I just pulled out all the stops to present the most justifiable example that I could imagine. But I was a bit ahead of myself and should have acknowledged that I was changing the question a bit.

It struck me as a bit weird that we crimp and pin and pull these cables with half the copper unused, with no likely plan for its use (with IP cams) anytime soon.

I believe it's also a post-hoc rationalization to say that it's done so that it can be upgraded in the future, as if that is the actual reason for it.

Consider the fact that 1000Mb uses all 4 pairs of wires. So when you are running a gig link, do you think, "This link is maxed out. It's got no future"? Probably not. So maybe you are the type that will run an extra one then, for future use, alongside it. But if you are, I bet you are the type that always runs another one, whether it be for 100Mb or 1000Mb.

And finally, I would agree with you that if you couldn't or didn't want to run another cable, a switch at the other end is the preferable way to go. Remote power is not even a concern, because there are switches that take in PoE "at" (802.3at) and then distribute "af" (802.3af) out to multiple devices. Plus you could make it 1000Mb, so you would be using all the wires. So then I'm happy :)
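To put rough numbers on that idea (the spec wattages are from 802.3af/at; the switch overhead and camera draw are my guesses, not measured figures):

```python
# Back-of-the-envelope budget for a PoE-powered PoE switch: it draws
# 802.3at ("PoE+") power upstream and sources 802.3af to the cameras.
AT_PD_INPUT = 25.5      # watts guaranteed to an 802.3at powered device
SWITCH_OVERHEAD = 3.0   # assumed draw of the switch electronics (a guess)
AF_PORT_WORST = 15.4    # watts an 802.3af PSE port may have to source
CAMERA_TYPICAL = 5.0    # assumed draw of a small fixed camera (a guess)

budget = AT_PD_INPUT - SWITCH_OVERHEAD
print(int(budget // AF_PORT_WORST))   # cameras at full af allocation
print(int(budget // CAMERA_TYPICAL))  # cameras at a realistic draw
```

So worst-case you're only guaranteed one fully-allocated af port, but for small fixed cameras the budget plausibly covers a few, which is exactly the two-camera scenario in question.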

Thanks for the insights...

So when you are running a gig link, do you think, "This link is maxed out. It's got no future"

I don't think that because there is no reason to believe it is true. Gig-E is the fastest link-speed *today*, but that could easily change in the future if there is justification for it.

I'm honestly not even sure what you're asking about anymore. Yes, the common 10/100Mb link uses 2 of the 4 pairs. PoE sometimes uses the "unused" pairs; it really all depends on the powering device. Gig-E uses all 4 pairs. The spec calls for a 4-pair cable.

You rarely max out your CPU at 100%, or use all of the RAM in your PC, or extend your power cable to the maximum limit. There are lots of "unused" resources, why are you hung up on some Cat5 cable?

Gig-E is the fastest link-speed *today*, but that could easily change in the future if there is justification for it.

Point being that the next link speed already exists, 10Gig (and 40 soon), and it likes Cat 7 (or at least 6a) cable. Is that what you have been running? If so, kudos! Otherwise, where is your future expandability exactly?

Yes, PoE PSE midspans/injectors use the unused pairs (for 100Mb). But I was only saying that I thought your remote switch idea was a good one, and adding that having remote power (as you stipulated) was not even necessary, because of PoE-powered PoE switches.

As for not maxing out other resources*, at least you CAN use them whenever you want, even if they are not usually maxed out, so there's nothing lamentable there.

But it's academic at this point since again, I now think your remote switch idea is superior to splitters and all the bs. I hadn't considered it, though I should have. Thanks.

*Not that it matters, but you are constantly using your CPU at 100% for brief intervals. Anytime it's executing instructions and not in a wait state, it's going as fast as it can. It's just that, averaged over the sample period of fractional seconds, there is a lot of downtime waiting for other stuff to happen mixed into the number you see in task manager. It's basically either 100% or 0% at any instant.

Even if it were an accepted practice to split raw cables to cameras, you can forget using PoE to power them, which essentially means running separate power cables. This negates whatever savings there may have been.

Also, troubleshooting would get more confusing.

I bet the OP has pondered why toilet paper is more than single ply. :)

...you can forget using PoE to power them...

Endspans use the data pairs for PoE, Alternative A, as I stated in the OP.

I bet the OP has pondered why toilet paper is more than single ply. :)

You Damn Anti-Intellectual! :)

What, you don't turn it over to use the other side? Waste not, want not...

To me this would be a non-standard installation. I do not think any customer would be OK with this being done. Hitting on another discussion currently going on here... how would you test or certify this cabling? I know I would not allow it in any of our buildings.

Yes, agree, it would be non-standard. And potentially confusing for someone coming in cold to it. But no, I am not suggesting you rewire or wire new like this, despite the title of the discussion.

But are the certification and conformity objections the only real ones? Since standards are continually amended with new procedures and exceptions, which by definition are non-standard before adoption, such a technique could be proposed.

I have mentioned some ways to test/certify that could stand some feedback:

As for the EIA certification of the long cable segment itself, there's no reason it couldn't be certified as long as it was terminated properly at both ends. To test the splitters themselves plus the long run plus panel/plates, you would need two additional splitters at the endpoints that would recombine the two halves back into one plug to connect to the tool and loopback. Of course you would remove the additional splitters in production, but one would expect that would only enhance performance.

As for the ISO/IEC 11801 standard, 2 pair TO configurations are allowed, as well as pair re-assignment, so I believe it could pass, for the proper 100Mb class.

I would say that my primary objection would be conformity. We will on occasion bring in a contractor cold to troubleshoot or add a few new devices. For me, I can think of no situation where this would be acceptable. If we have one cable in a spot, we can get another cable there.

I always tell my contractors that if it was easy I would do it. It's not easy, that's why you're here. :)

Update: There are actually standards guidelines for 'shared sheaths', as it's called, after all. They're contained in a recently issued technical service bulletin, TIA TSB-190, Guidelines on Shared Pathways and Shared Sheaths.

Also, Siemon is already exploiting Cat 7 cable in this manner; from their whitepaper:

Cable sharing is the practice of running more than one application over different pairs of a twisted-pair copper cable. Common legacy examples of cable sharing include running twelve 10 megabits-per-second (Mb/s) channels over one 25-pair cable or using y-adapters (splitters) to break out pairs for creating separate voice and fax lines over a single cable. Although the practice of cable sharing has been accepted by telecommunications professionals for more than two decades and was recognized in TIA and ISO standards as early as 1991, cable sharing did not start gaining popularity until the adoption of fully-shielded cabling systems like category 7A.
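For what it's worth, the legacy example in that quote is easy to sanity-check:

```python
# The whitepaper's legacy example: twelve 10 Mb/s channels over one
# 25-pair cable. 10BASE-T, like 100BASE-TX, uses 2 pairs per channel.
channels = 12
pairs_per_channel = 2
print(channels * pairs_per_channel, "of 25 pairs used")
```

So that scheme fills 24 of the 25 pairs, the same "use all the copper" logic scaled up, just with a cable built and shielded for it.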

Thoughts?

I'd never SPEC an install running two IP cameras over a single cable, just because that's "not right", but I have certainly used the "spare" pair to add a camera later (or a video decoder, in a couple cases) when running new cable was highly impractical or even impossible.

The other benefit to having those extra two pairs free is if you end up with a damaged cable, sometimes having those other pairs available can save your butt.