Bandwidth Reduction Or Storage Savings - Which Is More Important To You?


Good topic. I've added a question.

With new codecs (e.g., going from MPEG-4 to H.264 or H.264 to H.265) the top two potential benefits are bandwidth reduction (reducing stress / load on the network) and storage savings.

I suspect most will favor storage savings as there is clear cost savings there. By contrast, many / most (internal) networks have more than enough bandwidth. Of course, VSaaS proponents, who need (limited) upstream WAN bandwidth are more driven by the bandwidth reduction aspect.
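To make the trade-off concrete, here is a back-of-the-envelope sketch; the 4Mbps H.264 figure and the ~50% H.265 savings are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: how a codec's bitrate reduction cuts both
# bandwidth and storage in lockstep. All figures are assumptions.

def storage_gb_per_day(bitrate_mbps: float) -> float:
    """Continuous recording: stream bitrate (Mbps) -> GB per camera per day."""
    return bitrate_mbps / 8 * 86400 / 1000  # MB/s * seconds/day -> GB

h264_mbps = 4.0              # assumed H.264 stream for a 1080p camera
h265_mbps = h264_mbps * 0.5  # assumed ~50% H.265 savings

print(f"H.264: {h264_mbps} Mbps, {storage_gb_per_day(h264_mbps):.1f} GB/day/camera")
print(f"H.265: {h265_mbps} Mbps, {storage_gb_per_day(h265_mbps):.1f} GB/day/camera")
```

Halving the bitrate halves both numbers at once, which is why the two benefits are so hard to separate in practice.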

John, actually VSaaS proponents are driven by both requirements (assuming on-site recording, and some remote viewing). Retention time is a significant requirement. So is bandwidth limitations of remote viewing. So an optimization to bit rate is of tremendous value for both reasons.

Of course, here in the US the internetworking infrastructure tends to significantly trail storage capabilities along the curve of Moore's Law. This is because networking infrastructure goes through major upgrades on the order of every decade, even when the technology is ~doubling in capability every 18 months, whereas next year 4TB drives will be at cost parity with 3TB drives from a year ago. So they'll simply show up in our recorders.

The point being, both are important, but storage is easier to exploit in real-world deployments than are advances in wide-area networking bit rates.

Any interest in your industry in using any rigs like this one?

Hi Rukmini, I'm not sure where in the architecture you're thinking this kind of storage goes? I am discussing the customer's on-site video storage, which is often modest (single-digit TB) and extremely cost sensitive. Of course, in a VSaaS solution the "cloud" based storage could be built up with pretty much any high-density disk hardware that makes economic sense to the service provider. From that side, one either builds their own cloud storage infrastructure (most often using commodity gear and software stacks) or rents it from the likes of AWS, Rackspace, et al. My point to John was that in VSaaS, storage and bandwidth are two sides of the same coin.

My question wasn't clear, sorry. In that thread which I linked to, we have James Talmage's rhetorical question:

Who has 1,000+ cameras and WANTS to stream them all back to a central place?

Unsurprisingly, except to James perhaps, Carl offered 'Casinos' as a possibility. I offered 'VSaaS' hesitantly, and was then trying to get from you, in a backhanded sort of way, confirmation of my guess. So when a VSaaS provider architects its own infrastructure, does it make sense to build out a small number of killer high-density rigs, or just to go with more of the standard old 2U appliances stacked to the roof?

Rukmini, a lot depends on the VSaaS strategy. There are a couple of models: one streams from cameras to the cloud and records there; the other records locally and provides various interactions through the cloud via web services/web UIs. The former might be a candidate for that offering because it would be essentially a giant VMS implementation with WAN links between the servers and the cameras. But those "constant streaming" model VSaaS implementations have proven to have limited applicability, at least by today's standards of relatively low egress bandwidth and lack of reliability from the customer's site to the cloud in the US.

So the latter model of local recording is typically what I think of as a practical VSaaS architecture. In that case the software stack and servers running cloud-side take on a different form, because they're offering up services that tend to be NOT like a giant VMS implementation. In that case the cloud-side services are more akin to traditional cloud implementations, with highly durable, highly scalable file-based storage, and horizontally scalable API and web servers. When building these types of services you either roll your own via commodity hardware that offers the greatest density of storage at the cheapest price, along with virtualized compute and database servers, or rent services and software/hardware from the likes of AWS, Rackspace, Azure, et al. Since these services are relatively generic, they tend to be less specialized toward video surveillance hardware needs. Keep in mind that the IndigoVision box you're talking about is just a storage controller that fronts a disk array and has a lot of specialized software designed to accommodate video surveillance, something that's not necessary to pay for if you're building out a cloud-side hardware/software stack that is abstracted away from the local recording.

Yes, I think a regional integrator could provide a smaller-scale hosted video service using this type of system. It might be a good way to rent the VMS/server to customers while operating it in your own datacenter/colo. But its applicability would be limited to customers with relatively low individual camera counts that also happen to have sufficient streaming bandwidth, and they'd likely run into issues of scalability unless they switched to a different model.

So the latter model of local recording is typically what I think of as a practical VSaaS architecture....

Yes, I meant the former, à la Dropcam. But I didn't know that the former model had fallen so far out of favor as to be considered impractical. Well, I think there is still hope for the capital-C Cloud, since bandwidth and reliability are only increasing every day, yet the number of pixels in your apartment is not projected to increase... ;).

But one other question about the little-c cloud: these cloud services, are they not still working on streams, or partial streams, in the cloud? Otherwise, what is their value? Management and config only?

Rukmini, the customer is always hungry for more pixels/inch. Thus the "need" for HD and soon 4K resolutions. Internet bandwidths in the US have a lot of catching up to do. Recording directly to the cloud is okay for small camera counts (like 1 or 2). But we're still not there in terms of bandwidth for HD resolutions/framerates and more "professional" camera counts (8+) at a remote site (I'm thinking retail segment here, BTW, not home consumer). Even at one or two cameras, yes, one can compress the image down to fit the changing network conditions, but when you record that compressed image, that's the best you will ever see that picture for that point in time. There is no going back to the full-bitrate stream to try to get a better view of the bad guy's face or resolve a license plate.

When it comes to the "little-c cloud," I wouldn't discount the value of management/configuration when you're dealing with hundreds of sites in your org. There are really very few solutions that gracefully address management use cases when you've got several hundred (remote) DVRs you need to update, manage users on, diagnose/repair, etc. Or worse, update firmware on a thousand cameras at hundreds of remote sites! There are also a lot of new features/capabilities that can be enabled if you pull clips of video or still images from remote sites and use centralized cloud-side processing to do analytics or cache material of interest to the end users. The cloud-connected recorder also offers the opportunity to make the remote client connection trivially easy to set up and maintain, so end users simply need to log in to your "cloud" in order to see their remote sites, rather than needing to do things like figure out the IP number of the DVR they want to connect to. Of course, as bandwidths do become available, one still has the opportunity to eventually stream to cloud-side recording with this architecture as well; the on-site recorder just evolves into a remote connection broker/gateway.

In my experience, most issues come from bandwidth overloads on the server (this generally includes all streams: recording + multiple viewing streams + metadata + PTZ + ...).
But when bandwidth reaches certain levels (especially at night, during PTZ tours, or at night with motion detection not working well with lightning), the NVR or VMS can hit maximum processor load and die. We are talking about 50, 80, 150, 300 Mbit/s of I/O, not 1Gb; these levels are easy to reach.
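A minimal sketch of how those aggregate loads add up; the camera counts, bitrates, and viewer fan-out here are assumed figures:

```python
# Per-camera streams add up quickly on a recorder: recording plus
# live-view fan-out. All inputs are illustrative assumptions.

def server_load_mbps(cameras: int, record_mbps: float,
                     viewers_per_camera: float = 0.0) -> float:
    """Total server I/O in Mbps: recording plus live-view copies."""
    return cameras * record_mbps * (1 + viewers_per_camera)

# 50 cameras at 3 Mbps with one live viewer per camera on average:
print(server_load_mbps(50, 3.0, 1.0))  # 300.0 Mbps, in the range cited above
```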

So: more cameras in less storage capacity, for sure, if you keep the old resolutions...

But as resolutions increase, the savings will be eaten up... and the server will also suffer.

The question is: how much CPU power does a baseline H.265 decoder require compared to a baseline H.264 one? Can we use the same hardware spec, or do we need to increase the server budgets?

"How much CPU power does a baseline H.265 decoder require compared to a baseline H.264 one? Can we use the same hardware spec, or do we need to increase the server budgets?"

I don't think H.265 is far enough along, in terms of production deployment / availability, to make a definitive assessment. The consensus is that it requires more, but how much more remains to be seen.

John, an interesting survey would be to ask about and rank the biggest mistakes and system failures that people face in video at the design level (not the maintenance level).

From bad camera/optics/bandwidth settings, bad network architecture or mistakes at layers 1, 2, 3, or 7, storage performance issues, server issues (SQL, CPU, ...), client display issues, etc.

They are Joined at the Hip. One comes with the other, generally speaking.

They are Joined at the Hip.

Normally true, but the reason for the Poll was because it doesn't have to be that way.

Dr. Rockoff would diagnose this Siamese coupling as a birth defect, and would proceed to surgically remove their connection.

How? By delaying any encoding until the recording unit itself, dedicating a whole piece of copper to transmitting one stream of uncompressed video. Once at the recorder, you are free to encode to 'taste', depending upon storage and viewing requirements, but not upon bandwidth ones.

So a question would be: if you were provided a network of infinite bandwidth, would you then compress less and run at a higher frame rate? If yes, then the netenectomy may provide substantial benefit. Check for malpractice insurance first, though.

I would vote neither. While storage savings and bandwidth reduction can be important for budgeting reasons, picture quality takes preference above all else in my application. Encoding efficiency, as it relates to the other two, would be a secondary criterion.

...picture quality takes preference above all else in my application...

So do you encode everything MJPEG with low compression?

Yeah, sure...

Seriously, MJPEG offers very few advantages and its cost far outweighs them. Plus, our system would have to transcode MJPEG to H.264 anyway, so what's the point?

On a more serious note, there's PQ and there's PQ. Since ~93% of our cameras are analog and our encoders are only capable of 4SIF (704x480) anyway, we have determined there is no benefit to increasing bitrate. But we do run them pretty high - averaging between 2.5 and 3.0Mbps. Testing proved that any increase above 3.0Mbps yielded no further improvement in video quality.

And it's not a resolution issue but a noise issue. Since we watch cameras 24/7/365 and have eliminated our analog matrix, we wanted to minimize operator complaints due to poor-quality images on our Monitor Wall, hence the chosen bitrates. Likewise, bitrates on our IP cameras were chosen to provide a "pleasing" picture. Fixed 720p cameras are displayed at 4.0Mbps. Analog PTZs at 3.5Mbps when not moving and 5.0Mbps under motion. 720p PTZs are 4.0Mbps quiescent and 6.0Mbps under motion (boost mode).

All of the above bitrates were chosen as the minimum necessary to obtain the best possible video quality. All bitrates are CVBR, so some cameras are actually running at far lower than programmed bitrates for a good part of the day.
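As a side note on what those caps mean for storage, a minimal sketch; the 60% average-duty figure is purely an assumption for illustration, since CVBR actuals depend on the scene:

```python
# A CVBR cap is a ceiling: plan storage for the worst case, but expect
# actual usage below it. The 60% average duty is an assumed example.

def daily_gb(cap_mbps: float, avg_fraction: float = 1.0) -> float:
    """GB per camera per day at a given bitrate cap and average duty."""
    return cap_mbps * avg_fraction / 8 * 86400 / 1000

cap = 3.0  # the 3.0 Mbps figure mentioned above
print(f"worst case:  {daily_gb(cap):.2f} GB/day")
print(f"60% average: {daily_gb(cap, 0.6):.2f} GB/day")
```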

Seriously, MJPEG offers very few advantages and its cost far outweighs them.

Agreed. With, IMO, the two primary disadvantages by far being storage and bandwidth. But if you would humor me for a moment, let's imagine a scenario where those two huge disadvantages were neutralized.

How about a 100-camera 2MP system set up mainly for live use, with the requirement being 1 hr retention only; anything not flagged manually as an exception is then dumped. Let's also imagine that we have inherited (yeah, right) a couple of 10GbE switches and 10GbE NICs from a failed project, so bandwidth won't be a problem and storage costs should be low.
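To put rough numbers on that scenario, a quick sketch; the per-camera bitrates for a 2MP stream are assumptions (MJPEG commonly runs several times the H.264 rate at comparable quality):

```python
# The contrived 100-camera, 1-hour-retention scenario above. Bitrates
# are assumed: MJPEG at ~30 Mbps vs H.264 at ~4 Mbps for a 2MP stream.

CAMERAS = 100
RETENTION_HOURS = 1

def totals(per_camera_mbps: float):
    """Returns (aggregate Gbps, TB needed for the retention window)."""
    bandwidth_gbps = CAMERAS * per_camera_mbps / 1000
    storage_tb = (CAMERAS * per_camera_mbps / 8      # aggregate MB/s
                  * 3600 * RETENTION_HOURS / 1e6)    # seconds -> TB
    return bandwidth_gbps, storage_tb

for name, mbps in [("H.264", 4.0), ("MJPEG", 30.0)]:
    bw, st = totals(mbps)
    print(f"{name}: {bw:.2f} Gbps aggregate, {st:.2f} TB for 1h retention")
```

Even at the assumed MJPEG rate, the aggregate fits inside 10GbE and the 1-hour window needs only a couple of TB, so the usual MJPEG penalties really are neutralized in this contrived case.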

Does H.264 still beat MJPEG here? Ok, throw in server side motion detection? Now?

I admit it's a contrived example, but I'm trying to imagine what the forces that brought us to this point of nearly (Mobotix excepted) universal adoption of H.264 will do next. Sure, I know about H.265, but I mean a little longer timeframe. Like in 5 years, let's say.

My opinion is that deployment of H.26x-type compression schemes may actually start to decline in the relatively near future! Sounds nuts, I know, but there are four general trends I have noticed that, if they continue at the same rate for the next 5 years, could make it happen.

Do you agree in general that over the last 10 years (not giving any numbers on purpose):

1) Storage capacity/density has increased tremendously

2) Network bandwidth has increased almost as tremendously

3) True single task CPU processing power has lagged far behind these (repeal of Moore's law)

4) Video surveillance information is and will be processed more (analytics, value-added) and not merely stored and deleted.

Apologies if this comes off as an inquisition, but I'm just curious how you see it...

Who knows? At least in my opinion, I agree that storage costs will become less of a factor. Network? Not sure. 10GbE does take some pressure off (although we are using fiber for our server-to-storage and switch-to-switch interconnects).

I actually think video surveillance is headed toward doing almost everything at the edge. Edge storage, edge analytics, etc. make a lot of sense. Large server rooms would become a thing of the past, and the need for ever-faster networks would be negated.

With most video processed within the cameras, analytics can be performed on the raw uncompressed video and only video meeting selected criteria needs to be transported to long term storage, requiring less bandwidth. That job could also be done over a corporate network at times where bandwidth use is lower.

We are just seeing the infancy of edge processing and storage but I believe that will be the wave of the future. Essentially, the cameras will get smarter while the rest of the system gets dumber.

On that note, I think with the advent of "smart" cameras, we'll see a lot more use of the cloud for storage of the processed output of the cameras themselves.

An interesting aside: I participate in Cox's online customer surveys. The most recent one was asking about my interest in Gigabit connections to the home: would I be interested if it could be delivered at a reasonable cost?

What would happen if that bandwidth was available to business? Even at the present approximately 5:1 ratio of download to upload speeds (I just tested 32.5Mbps download and 6.8Mbps upload and I'm not even on their highest tier), 200Mbps would be more than sufficient to transport the streams from at least 40 cameras at 4Mbps each and if you assume transport of only relevant video, a good deal more.
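A quick check of that arithmetic; the 80% usable-headroom figure is an assumption, not from the survey:

```python
# How many streams fit on a given uplink if some fraction is reserved
# for other traffic. The headroom fraction is an assumed figure.

def cameras_supported(uplink_mbps: float, per_camera_mbps: float,
                      headroom: float = 0.8) -> int:
    """Streams that fit while using only `headroom` of the link."""
    return int(uplink_mbps * headroom // per_camera_mbps)

print(cameras_supported(200, 4.0))       # 40 cameras with 20% reserved
print(cameras_supported(200, 4.0, 1.0))  # 50 if the whole link is usable
```

Transporting only relevant video, as suggested above, would stretch that same uplink across many more cameras.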

Kansas City here I come...

Quick question about MJPEG vs H.264: one of the remaining arguments that you hear occasionally is that H.264 evidence might be ruled inadmissible because, theoretically, due to a network blip or other glitch, the inter-frame compression might leave an object still around for a frame or two after it's really gone. I know that in general, of course, there is no problem with evidence; otherwise we would have found out a long time ago. But have you ever seen this glitch occur even once? Has anyone ever?

I've seen some pretty weird issues with dropped frames, especially I-frames. My favorite one was where a person suddenly appeared in an area, walked a few steps, then disappeared.

In that case, the issue wasn't network-related but was caused by problems with a SCSI cable or device. We were using U160 SCSI, but when there are connection problems, SCSI will throttle back from 160Mbps to 80Mbps, then to 40Mbps, then to ???. Our 32-camera servers were feeding ~60Mbps to each RAID, so when SCSI throttled back to 40Mbps, something had to give.

I haven't seen those issues on any camera since we switched to fiber-based storage transport in 2006 so network has never been a factor.

That said, we have never had our video evidence questioned. We generally haven't even provided "authenticated" video to LE or the courts. Why? We haven't been asked to. In fact, up to this year, we provided all video evidence on Video DVDs.

<Edit> We do retain authenticated clips so if needed, they could be provided. They've just never been needed. The court system is more interested in chain of custody than in the actual validity of the evidence itself. In worst case scenarios, one of our employees will be required to testify to the effect that the video is an accurate representation of the events.

I'm going to take this a step further and apply it to my vertical.

A couple of points:

1. Regulations require specific frame rates and the ability to "see" certain things. Although these regulations vary from jurisdiction to jurisdiction, typically we are required to record specific camera applications in so-called "real time" and other applications at various frame rates. Typically, casinos record cash handling, gaming tables and certain other cameras at the highest frame rate and resolution.
a. We have decided to record all cameras at 30fps at the best resolution they are capable of but I believe we are in the minority.

2. Many jurisdictions require continuous recording of at least certain cameras - disallowing motion-based recording on them.

3. Even in our 24/7 operation, a large percentage of our cameras are viewing and recording nothing of interest.

So although regulations would have to be changed to allow retention of only relevant video, I believe that will happen eventually. This would allow use of cameras with built-in analytics and storage to replace current continuously streaming cameras. The end result would be to basically eliminate the traditional server room and provide a much smaller "core" processing and storage system which could be located within the Control Room. Or even eliminate that hardware and store all relevant data in the cloud.

A good point. CBR and CVBR settings will strongly impact quality (PTZ and nighttime are the best examples). One of my favourite questions is: what would be the best CBR or CVBR settings for a 2MP PTZ speed dome doing manual or automatic tracking in a parking lot at night? (Keeping in mind the output settings are the same day and night...)

The answers are most of the time very funny. But the results are ugly.