What Storage Would You Recommend For A 300 HD Camera System?


We have been tasked with finding a NAS for a system with 300 HD cameras, all running at 1080p. The NAS will be very big: we will start with 148 TB, and the storage may increase to 1~2 petabytes.

We have looked into the usual suspects (EMC2 Isilon X-400 with 148 TB to start, NetApp 5600, etc.) as well as lesser-known but rather interesting models such as the Panasas ActivStore 18. The goal is to get up to a year of storage...

So far, the prices we have seen are, to say the least, brutal. We do understand that the larger the storage array, the more problems come up: rebuild time is a serious issue, for example, and providing redundancy when you are dealing with hundreds of drives is not trivial. Still, we do not want to scare the customer with a case of the sauce costing much more than the fish.

We would like to hear what people in the collective have faced when it comes to storage at that level. We believe that deep storage is often overlooked by VMS manufacturers and integrators alike. There seem to be few options that let you just save the recordings on something for future reference/viewing. With IP it simply isn't that easy: I don't know of any system that would allow you to simply remove the disks and watch them on something else, aside from Veracity's Coldstore system, which is painfully limited to a handful of VMS manufacturers (maybe 5, if that) and even fewer versions for a given manufacturer...

Your comments, recommendations, sharing of experience and even speculation are welcome.


In terms of brutal pricing, Coldstore is the most common low-cost alternative, but you are already aware of its pluses and minuses.

Can you do anything on the camera side? Are the cameras already picked? You might save yourself a lot by choosing cameras that consume less bandwidth.

What is the storage duration? A month? 6 months? A year? If longer, are you considering different VMS features that might help? For example, multi-stream recording: store full res/fps for the first week/month, then half res for the remainder.

I speculate the cloud storage cost of one petabyte per year, including storage and importation costs, is around $20,000, using something like Amazon Glacier with Snowball. 3-5 hour retrieval time.

I would flesh it out more, but don't know how quickly the access is needed to archival data nor if $20,000 yr is out of the question.
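To make that speculation concrete, here is a tiny parametric calculator. The per-GB-month rate and the import fee below are placeholders, not real AWS prices; plug in current Glacier and Snowball pricing before quoting anything.

```python
# Rough yearly-cost calculator for cold cloud storage.
# All prices here are ASSUMED placeholders -- check current
# AWS Glacier / Snowball pricing before relying on the numbers.

def yearly_cold_storage_cost(tb_stored, price_per_gb_month, import_cost=0.0):
    """Estimated yearly cost in dollars for keeping `tb_stored` terabytes
    in cold storage, plus one-time import (e.g. Snowball job) fees."""
    gb = tb_stored * 1000  # decimal TB -> GB
    return gb * price_per_gb_month * 12 + import_cost

# Example: 1 PB at a hypothetical $0.0015/GB-month plus $2,000 in
# import jobs lands near the $20k/yr figure speculated above.
estimate = yearly_cold_storage_cost(1000, 0.0015, import_cost=2000)
```

The point of parameterizing it is that Glacier-class pricing changes often, and the storage fee dwarfs the import fee at petabyte scale.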

That and a dedicated GigE uplink...

That and a dedicated GigE uplink...

Nope. I said Snowball.

By recording at 1080p, do you mean casino mode or just normal compression?

We have a site that's approaching 100 cams, all running at 3MP (higher than 1080p), and when you limit the bandwidth to 4096 Kbps (we tend not to notice a difference vs. running higher bitrates) we find they do around 40 GB a day per cam on average (we have a fair amount of motion, but nothing like an intersection). I assume your customer wants rack mount in the following.
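That 40 GB/day figure is easy to sanity-check from the bitrate cap alone:

```python
# Convert a camera's bitrate cap into daily storage consumption.
# Assumes the stream actually runs at the capped bitrate around
# the clock (a worst-case / busy-scene assumption).

def gb_per_day(bitrate_kbps):
    """Daily storage per camera in decimal GB at a constant bitrate."""
    bytes_per_sec = bitrate_kbps * 1000 / 8
    return bytes_per_sec * 86400 / 1e9

daily = gb_per_day(4096)  # a 4096 Kbps cap works out to ~44 GB/day
```

So the observed ~40 GB/day average is consistent with a 4096 Kbps cap that isn't pegged at maximum all day.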

You more or less have two options: you can either spend big bucks on a single big-name solution, or go the way we did and just run multiple servers (we split the VMSes over multiple virtual machines, then over multiple physical servers) plus multiple Synology NAS devices.

Your garden-variety rack mount unit will hold 12 drives, and each of those main units will handle 2 expansion units of another 12 drives each. Fill them with 8 TB drives and you are talking 288 TB of unformatted storage. Split it up into 4 x 8-drive arrays in RAID 5, with four hot-spare slots, and you have four ~50 TB arrays. If you assign a virtual server to each array, with short-term storage on each virtual machine (we use WD Raptors), then you'd have around 40 days for 20 cams per virtual machine...
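The capacity math above can be sketched in a few lines. This computes raw decimal-TB capacity; formatted capacity comes out a few percent lower, which is roughly where the "~50 TB per array" figure lands.

```python
# Usable capacity across identical RAID 5 arrays (one parity drive each).
# Raw decimal TB; formatted capacity will be a few percent lower.

def raid5_usable_tb(arrays, drives_per_array, drive_tb):
    """Total usable TB: each RAID 5 array loses one drive to parity."""
    return arrays * (drives_per_array - 1) * drive_tb

# 4 arrays x 8 drives of 8 TB (plus 4 hot spares outside the arrays)
usable = raid5_usable_tb(4, 8, 8)  # 224 TB raw usable from 36 bays
```

Note the hot spares and parity together eat 8 of the 36 bays, which is why 288 TB raw becomes roughly 210 TB of formatted long-term storage.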

Expand/play with the numbers as required. Be warned: no matter your solution, unless you go with something crazy expensive you are going to experience the occasional array failure... I've seen a single drive going nuts take out a whole array a few times, and I've seen bad controllers take out whole arrays, etc.

Deep storage is not particularly useful unless you know exactly how far back you need to search, and it is a VERY expensive fish to chase.

Numbers breakdown:

Synology + 2 x external expansion units: $6.4k

8 TB disks at $345 ea: $12.5k

So for around 210 TB of long-term storage you are looking at $20k AUD, and that's not bad at all compared to the big guys.

Server-wise, we can get used DL380 Gen8s with a pair of 8-core processors for $4k; throw in a pair of boot SSDs ($300 ea for Intel S3610s) and a set of local storage drives at $400 each, and you're talking $7k for a server ready to hold between 40 and 60 cams.

you guys in the states should be able to smash that pricing.

Overview-wise, I don't like going with big single boxes: the price does not scale with the performance, and if something falls over, EVERYTHING is down vs. just a small pack of cams.

So for around 210 TB of long-term storage...

How long is long?

That's up to you to calculate based on the number of cams, the size of the array, etc. In Milestone I just gradually increase the number of days to store the footage until the array is averaging 80% full (go above this and the array slows down big time; technically performance drops above 50%, but it's not good value to stop there).

Hi Michael

I really like that solution, though I do have some concerns about the reliability of such a system. Cost-wise it is great and allows us to make some money as integrators. The issue becomes more complex as one approaches reliability: that many disks increases the probability of failure of a given array, and rebuild time for RAID arrays rears its ugly head when dealing with such large HDD space.

We can't do used, being integrators and all. We will look into Supermicro; we find them as good as Dell or the other big names, as well as much less expensive.

We also like the Amazon Snowball solution for longer-term storage in the cloud. We are not clear how long it will take to "upload" 50 TB into the Snowball while continuing normal recording operations on the main array... It could take days... Then there is the issue of very slow data recovery. We need to compare this to what tape would bring to the mix...
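For a rough feel of whether it "could take days": a Snowball-style appliance is loaded over the local network, so copy time is mostly a function of the link speed and how efficiently you can drive it. A quick sketch (the 80% efficiency derating is an assumption for protocol and disk overhead):

```python
# Estimate wall-clock hours to copy `tb` terabytes over a local link.
# `efficiency` is an ASSUMED derating for protocol/disk overhead.

def transfer_hours(tb, link_gbps, efficiency=0.8):
    """Hours to move `tb` decimal TB over a `link_gbps` gigabit/s link."""
    bits = tb * 1e12 * 8
    return bits / (link_gbps * 1e9 * efficiency) / 3600

ten_gige = transfer_hours(50, 10)  # 50 TB over 10 GbE: about 14 hours
one_gige = transfer_hours(50, 1)   # over plain GigE it really is days
```

So on 10 GbE the 50 TB copy is an overnight job, but on a single GigE link it stretches to nearly a week, which matters if the recording network is already busy.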

Keep them coming people

Thank you so very much


We can't do used, being integrators and all.

Well, I'd agree that buying new is the way, except that the new prices don't really justify the cost; but hey, it's your ball game :) I figure as long as you are straight up with the customer you should not get shot, right? I'd be inclined to explain to them that they can have a hot spare sitting in their rack for the cost of buying new (maybe Supermicro will come to the party, though? We are an exclusively HP house when it comes to servers).

very slow data recovery..

Yeah, and this is a big doozy for huge systems: how long it takes to get the various bits of an event together. Maybe on a 300-cam system they might have someone somewhat dedicated to the cams, but at all the sites I've done, their cam person is just someone who wears another "hat" on top of their existing job at the company.

probably a bit outside my area of expertise sorry to say :)

Michael, thanks for this info, very informative, but I'm a little confused.

You describe a 3-device Synology NAS setup (1 main + 2 expansions) with 210 TB broken into 4 x 8-drive arrays (about $20k for this setup). You then describe used HP servers with WD Raptors for local storage of 20-40 cameras, and running multiple VMs for the VMS (about $7k per server).

How many physical servers are you running, and what drive size/configuration is on the servers? If you're running 4 servers, then why would you need virtual machines?

Also, are you dual recording on the server and the NAS (one for short term and one for long term)?

At 40 GB per day per camera and 100 cameras, you're using 4 TB per day. 80% of 210 TB would be 168 TB, which would give you about 42 days of recording. Are these assumptions correct?
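That retention arithmetic, written out as a quick check:

```python
# Days of retention when filling an array only to a target fraction
# (80% here, per the "don't fill past 80%" advice earlier in the thread).

def retention_days(total_tb, fill_fraction, cameras, gb_per_cam_day):
    """Days of footage kept given total capacity and daily ingest."""
    usable_gb = total_tb * 1000 * fill_fraction
    daily_gb = cameras * gb_per_cam_day
    return usable_gb / daily_gb

days = retention_days(210, 0.8, 100, 40)  # 168 TB / 4 TB per day = 42 days
```

The same function also answers the original poster's sizing question: holding a year of footage at that ingest rate needs roughly 1.5 PB at 80% fill, which is why the petabyte-class quotes appear.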

I know you have a logical answer, I'm just not following it.

How many physical servers are you running

My way of running at this point is to allocate 6 physical cores per virtual machine, which on the old Intel DL380 Gen6s is enough power to look after 20 cams.

So I basically split each physical server into two virtual machines (each virtual machine also has its own pair of mirrored drives for our VMS to dump to prior to archiving).

If/when we move to, for example, machines that have a pair of 10-core processors, I'll probably split each machine into three virtual machines, each guest VMS with seven allocated cores (but I'll drop their total allowed CPU usage to 80% so all of them maxing out won't grind the machine to a halt).

So in the case of 80 cams, we'd be running 2 x DL380 Gen6s; past 80 we'd throw in another server, until we go past 120 cams.
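That sizing rule (two 20-cam VMs per box, so roughly 40 cams per physical server) reduces to a one-line ceiling division:

```python
import math

# Physical servers needed under the ~40-cams-per-server rule described
# above (2 VMs x 20 cams each). The 40/server figure is this poster's
# Gen6 sizing; newer CPUs would raise it.

def servers_needed(cameras, cams_per_server=40):
    """Smallest server count covering `cameras` at the given density."""
    return math.ceil(cameras / cams_per_server)

# 80 cams fit on 2 servers; 81-120 need 3; the 300-cam system needs 8.
```

Scaled to the original 300-camera question, that's 8 physical servers (plus the suggested cold spare), which is still well under one big-name head unit in cost.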

drive size/configuration is on the servers.

The servers run a pair of 200 GB SSDs mirrored for the host OS boot and the guest operating systems; the servers also have a pair of 1 TB Raptors mirrored as each VMS's dump drive.

Also, are you dual recording on the server and the NAS (one for short term and one for long term)?

No. Milestone exports footage to the NAS, so footage that is roughly a few hours old will be transferred from the servers to the NAS, and it does this pretty much constantly, even though we set it to trigger an archive several times a day.

Are these assumptions correct?

Yep, you are right on the money. While in theory you can fill your storage 100%, the reality is that bad things happen when you do :) Using the NAS units is the cheapest way I've found of doing it while still having a NAS that can send you emails if things happen. There are some people doing some far-out stuff with Storage Spaces, but I've yet to see it in use long enough to have too much trust.

Hi Integrator 1,

We did a system with 110 cameras recently and we ended up doing something similar to Michael:

- 3x Lenovo Thinkserver RD640's
- 1x SSD per server for boot
- 7x 4TB drives in RAID6 per enclosure

The new RD650 (one of them, at least) has 12 hot-swap bays, giving you 36 TB per server in RAID6 on 4 TB drives with one boot SSD.

I see that WD has released 6 TB Se and Re drives now, which would give you 54 TB in RAID6 per server with the new RD650. This setup would give you the initial storage capacity needed in 3 servers. You could then investigate a DAS if you need additional storage on each server.

Overall, the cost should be around the same as Michael's system and the performance should be similar. I personally hesitate to purchase used or off-lease servers; that's really the only difference in implementation.

Yeah, that sounds about right. WD has a horrid RMA procedure here in AU, as well as pricing that's out of the ballpark, so we just use the cheapest Seagates, which have proven pretty solid.

I understand the server issue; I just explain to the guys I'm working with why, and they usually seem OK. You guys over in the US would get some nice pricing on new stuff: the price for a pair of 8-cores and 64 GB of RAM on a brand-new server is over $10k for me here, so $4k for a used config is pretty easy to go with (throw a spare in, and you can remotely restore a VMS to the spare box if it falls over).

Microsoft changing to per-core licensing with Server 2016 is looking painful when Intel is talking about 24+ core CPUs being available :|