Member Discussion

NAS Device For NVR Backup Storage

With so many options for NAS devices (Network Attached Storage), how do you decide which brand/model to use? We are using a few different NVRs that support a NAS storage option. I would think we need an enterprise NAS since it will be constantly writing data. Then can you have a second NAS - same brand/model - to back up the first NAS?

How many cameras? How much max throughput across all cameras?

Those are 2 key questions for deciding.
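Those two numbers drive the sizing. A rough sketch of the arithmetic (the camera count, per-camera bitrate, and retention below are made-up example figures, not the OP's):

```python
# Back-of-the-envelope NVR/NAS sizing. All inputs are hypothetical examples;
# plug in your own camera count, bitrate, and retention target.
def sizing(cameras, mbps_per_camera, retention_days):
    total_mbps = cameras * mbps_per_camera           # aggregate sustained write rate
    bytes_per_day = total_mbps / 8 * 1e6 * 86400     # Mb/s -> bytes written per day
    total_tb = bytes_per_day * retention_days / 1e12 # storage needed for retention
    return total_mbps, total_tb

mbps, tb = sizing(cameras=32, mbps_per_camera=4, retention_days=30)
print(f"{mbps} Mb/s sustained, ~{tb:.1f} TB for 30 days")
```

Even a modest 32-camera site at 4 Mb/s per stream needs a sustained 128 Mb/s write rate around the clock, which is why the "constantly writing" concern in the original question matters for drive selection.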

How big are your NVRs (storage-wise)?

What do you mean by "back up"? Do you mean if the NVR crashes you want to be able to recreate its state from the NAS storage? Or are you trying to extend the NVR's retention time?

When you say the NVR 'supports' a NAS storage option, that makes me think they may provide additional storage via an attached NAS to increase retention time, which I would consider different than "backup." You'll need to clarify that.

What does your NVR vendor say about backup and NAS storage?

Why NAS storage? How about a clustered NVR/DVR, which offers the benefits of both?

We've been using NAS devices for dedicated surveillance storage for several sites for several years now. We started with Enhance Tech iSCSI RAIDs, went with a few QNAP units to save cost, went back to Enhance, then Promise, and have recently tried a Synology on a site (cost concerns, again).

On the whole, all brands have been fairly solid performers. Each has experienced the odd issue requiring tech support, and that's where they really start to differentiate. Promise support has been okay, Synology was responsive but ultimately not that helpful, QNAP has been dreadful almost every time I've needed them. The best of the bunch thus far has been Enhance.

Even a SOHO-grade four-bay QNAP running RAID 5 has handled a 32-channel hybrid DVR nicely - a couple have been in service for a good 4-5 years now. One Promise has two 32-channel hybrid DVRs on it (these have four iSCSI ports, so each DVR plugs directly into its own port) and has also handled things nicely.

One thing I've noticed is that the higher-end the system, the less capable it seems to be: Enhance and Promise, both dedicated iSCSI storage systems, have no function to expand space by swapping in larger drives, or even to extend a logical drive once created to use more available space on the array. QNAP and Synology, on the other hand, can not only do it, but have step-by-step processes to make it easy. Granted, they're time-consuming - swap drive, allow array to rebuild, swap next drive... lather, rinse, repeat - but they DO do it.

The Synology also has a feature they call Synology Hybrid RAID, which seems to be an offshoot of RAID5/6 that allows the use of different-sized drives.

Isn't iSCSI a SAN, not a NAS?

Both Storage Area Networks (SANs) and Network Attached Storage (NAS) provide networked storage solutions. A NAS is a single storage device that operates on data files, while a SAN is a local network of multiple devices that operate on disk blocks.

iSCSI is just a network transport method, along with SMB, CIFS, etc. Either NAS or SAN could use it, but SAN normally uses Fibre Channel (although the Promise array gives the option to configure as NAS, SAN, or DAS - Direct Attached Storage).

In this case, the way we're using it is as Network Attached Storage.

iSCSI is a SAN technology, not NAS.

SAN protocols (iSCSI, FCP) are fundamentally different than NAS protocols (NFS, CIFS, SMB).

The difference is simple and important: NAS works at the file level, SAN works at the disk block level.

With a SAN, a host operating system manages all the underlying mapping of disk space allocations that make up a file. Just the same as DAS does. SAN therefore requires a host to make sense of the storage.

With a NAS, the NAS itself manages the storage and presents its assets to the network as file shares which can be accessed without having any knowledge of the way the file is actually stored. It requires no external host.

SANs allow the host OS to use the physical drive information to retrieve and store data more efficiently than NAS. With a NAS, the OS can only request logical file segments without regard to their implementation. This double translation of logical/physical mapping is one of the reasons for the reduced performance seen in NASes.

NASes are easier to set up, and work with almost all OSes. Truly plug and play. They are best when sharing files with many hosts.

SANs have to be thoughtfully considered, depending on what hosts and what OSes are to be used. They are best when sharing with a limited number of hosts needing high throughput.

As you might imagine, backup and archive strategies are also impacted by this choice.

Now, to be sure, some NASes can connect to a SAN for additional storage. Still, they only present the file-level abstraction to their clients.

Also, some hybrid boxes, i.e. NAS/SAN boxes, can present both NAS drives and SAN targets. This does not mean that the iSCSI portion is a NAS, though, any more than an IP/analog camera is an NTSC network camera.

The thing is, if the protocol between the host and the storage is iSCSI or FCP, then the caveats for SANs apply. If the protocol is NFS, CIFS or SMB, then the NAS considerations apply.
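The file-level vs. block-level distinction above can be sketched in a few lines. This is conceptual only, not a working storage client; the mount point and device path are hypothetical stand-ins for an SMB/NFS share and an iSCSI LUN:

```python
# NAS (file level): the client names a file; the NAS itself maps that file
# to disk blocks. The client needs no knowledge of the on-disk layout.
def nas_read(path):
    with open(path, "rb") as f:          # e.g. a share mounted at /mnt/nas
        return f.read()

# SAN (block level): the client addresses raw blocks on the LUN; the host's
# own filesystem is what decides which blocks make up which file.
def san_read(device, block, count, block_size=512):
    with open(device, "rb") as f:        # e.g. an iSCSI LUN at /dev/sdb
        f.seek(block * block_size)
        return f.read(count * block_size)
```

The asymmetry is the whole point of the thread: `san_read` is useless without a host filesystem to interpret the blocks, while `nas_read` works from any client that can reach the share.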

Fine. In the interest of pedantry, I shall rephrase.

"We've been using SAN devices for dedicated surveillance storage for several sites for several years now."

Happy now?

The Tao of Backup, rule 1: spread your backups far and wide. Backing up, or mirroring, your backup has significant risks. These hybrid mirror systems don't know the difference between good video and bad video. We consider the online storage of the VMS as the first location; RAID 6 is our minimum, and we prefer RAID 10. We only use enterprise drives: rated 1 error in 10^16 bits with RAID 6, and we're comfortable with 1 in 10^15 on RAID 10. Never use 1-in-10^14 drives - never.
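The bit-error-rate rule of thumb can be checked with a little arithmetic: the odds of finishing a rebuild without hitting an unrecoverable read error (URE) fall off fast as the rated error rate rises. A rough Poisson-approximation sketch (the 7-drive, 8 TB array is a made-up example):

```python
import math

# Probability of reading every bit needed for a RAID rebuild without a URE.
# `ber` is the drive's rated error probability per bit (e.g. 1e-14 means
# "1 unrecoverable error per 10^14 bits read"). Array size is hypothetical.
def rebuild_success(drives_read, drive_tb, ber):
    bits = drives_read * drive_tb * 1e12 * 8   # total bits read during rebuild
    return math.exp(-bits * ber)               # Poisson approximation

for exp10 in (14, 15, 16):
    p = rebuild_success(drives_read=7, drive_tb=8, ber=10.0 ** -exp10)
    print(f"1 URE per 10^{exp10} bits: {p:.0%} clean-rebuild odds")
```

For this example array, 1-in-10^14 drives give only about a 1% chance of a clean full-array read, 1-in-10^15 about 64%, and 1-in-10^16 over 95% - which is the arithmetic behind "never use 10^14 drives."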

If you are selling a brand, then put Lenovo in the mix. If you're selling your services, then consider selling a Windows server as backup storage. Build your own with an OEM platform like Supermicro or Intel; Seneca is a good source. This becomes your second storage spot, shared via SMB/CIFS or iSCSI. If the client wants a backup of the backup, consider any number of ways to schedule mirroring of that data store. DFS is a great way to handle that, and you can throttle, schedule, and tune the speed.

Assuming your VMS can write files that stand alone, once the files are written to your Windows server, you have the option of selling the client online backup storage - the backup of the backup - from any number of MSP storage providers.

My understanding is that NAS was developed for network storage consolidation. That is, rather than having hundreds or thousands of network clients storing data on underutilized local hard drives, it is more economical to centralize this. So an "enterprise" class NAS would be one that can support a large number of clients storing and accessing files simultaneously. Very different from using it for a single-purpose application with a single client (the VMS server). So why is NAS a good choice for video surveillance rather than attached storage? Why would you want to stream all that data across a network? I am probably missing something here that a more tech-savvy member can inform me on.

When we started doing it, it was a matter of cost. The only real options for adding an external RAID array were network or Fibre Channel. USB 2.0 isn't robust enough, and Fibre Channel would require an expensive add-in card. So, we popped another GbE NIC in the DVR, and direct-patched it to one of the iSCSI ports on an Enhance RAID.

As it happens, iSCSI on GbE handles the traffic easily, and is solid and reliable... so why not?

In smaller installations, we started using small PoE switches with eight 10/100 PoE ports and two GbE/GBIC combo ports - plugging the DVR and RAID into the GbE ports has worked fine as well. Newer NVRs we're getting have dual NICs, so we've gone back to direct-patching the iSCSI ports.

So my question would be, why NOT networked storage?

It's been my opinion that you get a far more reliable, high-performance system if you stick with local storage for as long as you can. Compare the overall reliability and technology involved in high-end local storage - LSI controllers, enterprise drives, cache batteries, RAID 10/60 - versus what is needed for equivalent performance/reliability in a NAS/iSCSI environment. Adding the network layer to storage can be expensive. Network storage shared among traditional virtual servers for non-stop environments is the sweet spot, and my guess is few if any VMSes could truly fail over or survive using vMotion. Adding space, even for my favorite VMS, requires stopping a service. It's been my experience that even the best switches - Juniper, Cisco - require far more critical security and performance updates that bring the system down than RAID/host bus controllers do.

One major advantage to having separate storage is that it stays on site if you have to replace or repair the DVR. I can pop in a loaner while sending the DVR out for repair, or just swap in a whole new DVR, and just attach it to the RAID and have all the old video right there. When the loaner is recording to that same storage, it means the video from it stays on site when I put the repaired system back in as well.

The DVRs still have a TiB or two internally as a "backup" record destination should the RAID have problems or need maintenance.