Direct Attached vs. Network Based Video Surveillance Storage?

John Honovich
Sep 15, 2014
IPVM

This grew out of the question: What Do You Use For Storage For 100 Cameras?

In general, and specifically for 100 cameras, what would you recommend?

A few thoughts:

Direct attached is typically simpler to set up and lower cost. However, network based scales better, as it does not require a direct physical connection to each VMS server / NVR, which grows in importance the more servers / recorders you have.

Mike Dotson
Sep 16, 2014
Formerly of Seneca • IPVMU Certified

Direct attached storage will have a performance advantage over NAS storage in all cases.

The main reason is that the hardware is local to the machine, which can sustain a higher level of performance.

In my lab testing, I have been able to get as high as 700 Mbits to a local RAID 5 array with a certain VMS. Most other VMSes will max out at approx. 300-400 Mbits.

A NAS with MPIO will max out at approx. 100 Mbits random throughput.

So the question needs to be amended to include a throughput metric, because camera count alone is not enough information.
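
To make that concrete, here is a rough sketch (Python, with purely illustrative per-camera bitrates) of how the same 100 cameras can produce very different storage loads:

```python
# Rough sketch: aggregate write load for 100 cameras at different
# (hypothetical) per-camera bitrates.

CAMERAS = 100

# Example bitrates in Mbit/s -- actual values depend on resolution,
# frame rate, codec, and scene complexity.
profiles = [("720p, low motion", 2.0),
            ("1080p, moderate", 4.0),
            ("1080p, high motion", 8.0)]

for label, mbps in profiles:
    total_mbps = CAMERAS * mbps
    print(f"{label}: {total_mbps:.0f} Mbit/s total "
          f"(~{total_mbps / 8:.0f} MBytes/s to disk)")
```

The same camera count can demand anywhere from ~200 to ~800 Mbit/s, which is exactly the range where the DAS and NAS numbers above diverge.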

Ari Erenthal
Sep 16, 2014
Chesapeake & Midlantic

Is this using eSATA, Thunderbolt, or USB 3.0? I'm assuming the connection type makes a difference.

Carl Lindgren
Sep 16, 2014

Mike, 100Mbps? Over what form of transport?

I agree that DAS is safer and in many cases less expensive than NAS or SAN, but a lot depends on the system goals and how fault tolerant you want it to be. We are using a form of DAS whereby multiple servers share one RAID subsystem, so I assume that would be considered some form of hybrid. The connection method is 8Gb fiber with redundant HBAs, transport, and controllers. The controllers also have multiple fiber ports, each attached to a server.

One storage system (a RAID and a JBOD) connects to 3-4 servers. No switches to contribute points of failure.

Mike Dotson
Sep 16, 2014
Formerly of Seneca • IPVMU Certified

Carl, Ari,

My bad in dropping in the wrong 'B'.

The MPIO is expected to run, at the 'hardware level', at 100 MBytes/s (random) over a dual 1 GbE network link.

Of course, 10 GbE or 12Gb SAS can boost this further, and the bottleneck there becomes the NAS's internal hardware, because the link rate will be a few GBytes/sec.

The DAS rates are 'application level' rates... so I have mixed apples and oranges a bit here. Hardware-level rates are always higher than application-level rates.

A simple ATTO benchmark on DAS with a RAID 5 of 4x 7200 RPM disks on a controller card will show approx. 400 MBytes/sec. I think the benchmark uses sequential data.
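
Since the mix-up above was a bits-versus-bytes slip, here is a quick sanity check of the units (the efficiency factor is a rough assumption):

```python
# Sanity check: why ~100 MBytes/s is plausible over a dual 1 GbE MPIO link.

GBE_LINE_RATE_MBPS = 1000   # nominal 1 GbE line rate, in Mbit/s
LINKS = 2                   # dual links aggregated via MPIO
EFFICIENCY = 0.5            # assumed allowance for protocol overhead + random IO

usable_mbps = GBE_LINE_RATE_MBPS * LINKS * EFFICIENCY
print(f"~{usable_mbps:.0f} Mbit/s usable, i.e. ~{usable_mbps / 8:.0f} MBytes/s")
# ~1000 Mbit/s usable, i.e. ~125 MBytes/s -- the same order as the quoted
# 100 MBytes/s, and 8x what a literal reading of '100 Mbits' would imply.
```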

Carl... Is your system what is called a 'Distributed RAID'?

Carl Lindgren
Sep 16, 2014

Mike, no. It consists of three Dell (MD3260?) 60-bay RAID units, each with a (MD3060?) 60-bay JBOD attached. Each controller has 3-4 GBICs, each RAID has two controllers, and each server has a QLogic 8Gb dual-port fiber HBA.

The RAIDs and JBODs are partitioned into 9+2 RAID groups with a hot spare for each group. So each LUN consists of 11 drives, or 27TB usable, and each feeds one "instance" of the VMS software (3-4 "instances" per server). That's 360 3TB drives in total, yielding ~700TB net storage.

Retention time is managed by the number of cameras recording per instance of the software and their bitrate. Fixed analog cameras run up to 60-odd cameras per "instance", yielding >= 15 days. We have a couple of instances dedicated to 60-day retention, and those only have 13-14 cameras. IP cameras and PTZ cameras fall somewhere in between.
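
As a rough sketch of that relationship (the bitrate and overhead figures below are illustrative assumptions, not our actual numbers):

```python
# Sketch: retention days for one VMS "instance" writing to one 27TB LUN.
# The per-camera bitrate and overhead allowance are illustrative assumptions.

LUN_TB = 27                # usable capacity per LUN, as quoted above
USABLE_FRACTION = 0.9      # assumed filesystem/VMS overhead allowance
BITRATE_MBPS = 2.5         # assumed average per-camera (analog encode) bitrate

def retention_days(cameras: int) -> float:
    usable_bits = LUN_TB * 1e12 * 8 * USABLE_FRACTION
    bits_per_day = cameras * BITRATE_MBPS * 1e6 * 86400
    return usable_bits / bits_per_day

for cams in (60, 14):
    print(f"{cams} cameras per instance -> ~{retention_days(cams):.0f} days")
# 60 cameras -> ~15 days; 14 cameras -> ~64 days, matching the
# 15-day vs. 60-day split described above.
```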

James Shimota
Sep 23, 2014
IPVMU Certified

Wow, that's a ton of storage! I'm starting to work on an event-storage-based model to get away from full-stream 24/7 storage. Do you think event storage may be useful someday in your environment?

Carl Lindgren
Sep 23, 2014

James,

That's not possible, if you're saying what I think you're saying. Federal and state regulations require full-frame-rate storage for a minimum of 7 days for all critical cameras. Internal standards call for 30fps, 24/7, with 15-day retention for most cameras and 60-day retention for others. Times >1,000 cameras.
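
A rough sketch of what that implies in capacity (the per-camera bitrate is an illustrative assumption):

```python
# Sketch: raw capacity implied by 30fps, 24/7, 15-day retention
# across 1,000 cameras. The bitrate is an illustrative assumption.

CAMERAS = 1000
BITRATE_MBPS = 4.0   # assumed average bitrate of a 30fps camera
DAYS = 15

bytes_total = CAMERAS * (BITRATE_MBPS * 1e6 / 8) * 86400 * DAYS
print(f"~{bytes_total / 1e12:.0f} TB")   # ~650 TB
```

That lands in the same neighborhood as the ~700TB net figure quoted earlier in the thread.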

Events wouldn't cut it, because no device or software would be able to figure out what is/isn't an event, and we have to review numerous cameras after the fact.

Evidence clips we create are an entirely different matter. The law says they must be kept for at least 30 days, but we never discard evidence - we have clips stored from 1996 (digitized from videotapes). For those, we have a file server with 2x48TB RAIDs in a mirrored 6+1 configuration.

Undisclosed Manufacturer #1
Sep 16, 2014

I agree with Mr. Dotson that DAS will have a performance advantage over NAS.

One advantage of NAS is that it is visible and manageable on the network. That way you (if the customer allows you remote access) and/or the customer can manage the device like any other network device, receive alerts, etc. With DAS, you are limited to whatever functions/alerts the NVR vendor provides in their product.

Carl Lindgren
Sep 16, 2014

Undisclosed #1,

We have used multiple DAS storage systems over the years, and every one had remote management capability. Our previous Infortrend RAIDs had a program called RAIDWatch that managed all 37 of our RAIDs; our Huawei RAIDs (file server storage) use a program called Oceanspace ISM; and our current Dell storage has two programs, Dell OpenManage and Dell Modular Disk Storage Manager. The former manages all Dell components, including storage, while the latter manages just the storage.

Every storage system we've used or looked at has network management ports specifically for that purpose.

Bryan Berezdivin
Sep 23, 2014

I am knee-deep in storage internals and work for a storage and software company (EMC), so I will attempt to be user-centric and unbiased in this response. EMC has a large investment in labs and people performing testing across many VMSes and related security components, and I would like to comment on the details we are observing.

To summarize the long-winded discussion below: there is a place in surveillance systems for DAS, NAS, iSCSI, flash arrays, and even object storage, each for different functions and implementations. As an example of the myriad options: Axis and Samsung both support NAS on their cameras, Bosch supports iSCSI, some camera companies support object storage, and VMS vendors have similar options.

Network-based storage systems have unique capabilities: abstracting the disk location gives substantial flexibility to the application (VMS, evidence management, case management, analytics, etc.). One VMS provider wrote an article summarizing their specific advantages with network storage: "As Video Expands on the Network, Evaluating Proper Storage Options Is Crucial". The nice thing is that this abstraction lets the provider of the storage subsystem adopt advances in technology while keeping the interface to the application standardized (SMB, NFS, Swift, HDFS, etc.).

For larger systems, typically above 100 cameras, NAS abstractions can substantially simplify the system design, provided the network storage subsystem can handle the bandwidth demands of multiple recording instances. I have seen simple NAS implementations handle even 100 cameras at 1080p@15fps without issues when using some of the more mature VMS software providers. Larger-scale systems may require scale-out technologies, such as those an EMC document outlines here: Scale Out NAS for Video Surveillance. We also see substantial capex and opex savings when moving in this direction, which usually depends on the product portfolio's capabilities (aka best of breed) being used most efficiently via various technologies.
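
As a back-of-the-envelope version of that bandwidth check (both figures below are illustrative assumptions, not measurements):

```python
# Sketch: can one NAS head absorb 100 x 1080p@15fps recording streams?

CAMERAS = 100
BITRATE_MBPS = 3.0           # assumed 1080p@15fps H.264 average bitrate
NAS_WRITE_MBYTES_S = 200     # assumed sustained write rate of the NAS head

demand = CAMERAS * BITRATE_MBPS / 8   # recording load in MBytes/s
print(f"demand ~{demand:.0f} MBytes/s, "
      f"headroom {NAS_WRITE_MBYTES_S / demand:.1f}x")
# Leave generous headroom for playback, RAID rebuilds, and latency spikes.
```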

For smaller systems, I see much of the value of abstraction in giving a customer/integrator the freedom to plug and play multiple components of a system, for example, recording direct to NAS from the cameras. For the majority of customers I have seen in the SMB and even retail space, DAS is most common and will remain that way unless backend analytics is relevant and leveraging sensor data is part of the approach. Then I see more of an edge-to-core approach being needed, where data has to be moved to the core for analysis (imagery, metadata, 3rd-party data, etc.).

We see the pure bandwidth capability of a video surveillance system using network-based storage, compared to block storage, vary as a function of the VMS implementation and the ability of the per-camera recording/playback process to accommodate higher-latency IO operations. This is very important: the same vendor that can handle 700 Mbps on a DAS-based implementation without a hypervisor can handle only 120 Mbps when using NAS and a hypervisor-based implementation.

The industry seems to be shifting towards what has enabled cost savings in many datacenters: server consolidation (from 1 VMS per 1RU server to 6 VMS instances per 1RU), scale-out systems (reducing the need for small LUNs, defragmentation, and data migrations), and even hybrid cloud-based architectures. This is forcing many VMS providers to examine their software architectures to ensure they can handle higher-latency IO operations.

More specifically on latency: block-based IO to a local storage target (SCSI through the motherboard, or a SAS, FC, or FCoE connection) will complete a WRITE() command in <<1 ms (maybe 0.1 ms for local and 0.3 ms for SAN).

With ANY network storage system, the network itself contributes ~1 ms RTT alone; add the multiple higher-level protocol interactions needed to accomplish the same WRITE(), and the application's average experienced latency rises to >1 ms (an average of 2-5 ms is good). The other aspect is how well the VMS's IO implementation deals with latency variation (standard deviation): perhaps an average of 5 ms with outlier maximum latencies of 50 ms for a very small number of operations. The latter often has much to do with operations like MKDIR() and other file system attribute operations. Cloud-based storage and object interfaces (Swift, for instance) require building applications that can handle even higher latency (20-50 ms average RTT without a CDN) and more variation, due to the use of the public internet for transport.
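
To see why both the average and the variability matter, here is a small throughput model (a sketch via Little's law; the IO size, queue depth, and latencies are illustrative assumptions):

```python
# Sketch: effective per-stream write throughput vs. IO latency,
# via Little's law: throughput = outstanding_ios * io_size / latency.

IO_SIZE_MB = 0.25   # assumed 256 KB VMS write size

def throughput_mb_s(latency_ms: float, outstanding_ios: int) -> float:
    return outstanding_ios * IO_SIZE_MB / (latency_ms / 1000)

for name, latency_ms in [("local DAS", 0.1), ("SAN", 0.3),
                         ("NAS", 3.0), ("cloud/object", 30.0)]:
    sync_t = throughput_mb_s(latency_ms, 1)    # one IO in flight at a time
    deep_t = throughput_mb_s(latency_ms, 32)   # 32 IOs in flight
    print(f"{name:12s} 1-deep: {sync_t:7.1f} MB/s   32-deep: {deep_t:8.1f} MB/s")
```

A recorder that issues one synchronous write at a time collapses as latency grows, while one that keeps many IOs in flight can hide most of it - which is the architectural difference between VMS implementations described above.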

http://www.emc.com/collateral/white-papers/h12546-wp-video-surveillance.pdf

Ari Erenthal
Sep 23, 2014
Chesapeake & Midlantic

Good stuff, Bryan. Thanks.

Undisclosed Manufacturer #2
Sep 25, 2014

A lot of people tend to use iSCSI rather than AoE for storage in IT generally. But for storing masses of data and constantly writing, given that you can get NICs designed for AoE without the IP overhead, and thus faster throughput and, I imagine, lower latency too, why are more people not using AoE in CCTV?

You tend to read about it in petabyte systems, but folks with lots of storage and many cameras, and thus many VMS systems, would surely benefit too?

Another question, if I may, and I hope it's not a stupid one.
I always read about how it's dangerous to have two devices sharing the same block storage over iSCSI. But if one device is only ever writing and the other only ever reading, where is the danger?
Isn't there a bottleneck in having cameras send video to the VMS, which stores it on an iSCSI device, and then the same VMS reads it back off again for playback requests from multiple clients?

Wouldn't it be more efficient to have the camera write directly to iSCSI partitions and multicast a live stream to all who want one? Then the VMS, or even the client, can read from the same partitions to get the recorded footage.
But one client (the camera) is writing, and the other (the VMS) is reading. Neither does both.

Is there a fundamental problem I'm missing? No one seems to do this yet, and it seems to this layman that it would end some bottleneck problems at the VMS, potentially pushing them back up the network due to the many iSCSI streams and multicasts.

Obviously a catalogue/database of what footage is where would be needed by the VMS/clients, but that's all down to how the data is stored, and it's not that different from asking for "cam 1" and having it mapped to IP 192.168.10.34.

At least sharing a SCSI HD between my Amiga and PC at the same time worked back in the day :)

Carl Lindgren
Sep 25, 2014

Undisclosed,

A couple of answers:

Two simultaneous streams are beyond the capability of some cameras, at least if you want both to be 30fps. That is a limitation we've run into with Axis cameras, among others. The problem becomes even more acute if you want ONVIF streams, since many cameras we've tested are unable to provide multicast over ONVIF.

As far as AoE goes, I think that transport method is neither well-known nor well-supported. It reminds me of InfiniBand - a technology often used in supercomputers but not typically in lower-end applications.

Undisclosed Manufacturer #2
Sep 25, 2014

Sure. Some cameras cannot manage it. It depends on how the outgoing stream is handled: if the record stream is similar to the live stream, or tied to it, then it's just replication. Or the interface may not be able to handle it either. Maybe in the future...

AoE is missing a nice, friendly front end. It's well adopted for massive storage systems, but I'm only aware of one CCTV company using it in cameras - no doubt for reasons similar to the limitation you describe in your Axis example. AoE's lower protocol overhead requires fewer system resources and less bandwidth than iSCSI or, say, SMB recording, which many vendors support.

No doubt, though, this works very well for a one-stop shop supplying both the cameras and the VMS system. Hands up, who likes lock-in these days?

James Shimota
Sep 23, 2014
IPVMU Certified

That was the single best thing I've read in two weeks. Many thanks, Bryan. Seeing solutions appear for latency issues on NAS and cloud storage is very exciting.

Bryan Berezdivin
Sep 25, 2014

Sharing any block (SCSI) target is difficult. I am sure there are ways of sharing a single LUN between two servers, but I have not seen it in production. Most systems that allow this do so by front-ending the SCSI address with servers that process the requests and effectively provide a virtual LUN - VMFS from VMware, for instance, which manages which blocks are allocated to which user.

Undisclosed
Sep 28, 2014

If it's a NAS, you have security and network issues; it has to be secured. No, I don't want a super-secret third-tier switch buried between my 1U green server and my 2U green drive array. Man up, buy a real switch, and configure it for proper management. Don't bleed the NAS traffic onto the corporate network unless that's genuinely the local style. Make sure the devices involved have reasonable iSCSI support (do you really expect cameras to have rock-solid iSCSI support?). And you probably shouldn't use SMB2 (Microsoft) NAS, as it's likely hard to secure.

Done decently, a NAS would be fine. The conservative answer is direct attached storage, as it may be easier to deploy stably.
