What Is The Most Common Size Hard Drive You Use In New Surveillance Systems?

Poll / vote now:

We are right now in the process of deploying our first 6TB Enterprise Drives.

What brand of 6TB Enterprise drives are you using?

We deploy mostly Seagate Enterprise drives but we have used WD RE drives in the past.


In RAID, I assume? The thing about ever-larger drives in RAID arrays is that you have to be careful how many bits are in each RAID group. 1TB is 8x10^12 bits (1x10^12 bytes), so 6TB is 4.8x10^13 bits, and a 10x6TB drive array holds 4.8x10^14 bits.

That's getting scarily close to, or exceeding, the typical claimed drive bit error rate of 1 in 10^14 to 1 in 10^15. Essentially, larger RAID groups of larger drives almost guarantee read failures. Granted, smaller RAID groups, better drives like SAS (with claimed error rates as good as 1 in 10^17) and newer storage technologies help alleviate the potential problem, but the first option (smaller RAID groups) is expensive and wasteful and the third option is not "tried and true" for surveillance storage as of yet.
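The arithmetic above is easy to sketch. A minimal back-of-envelope calculation, using the 6TB drive size and the 1-in-10^14 error rate from the posts above; the assumption that bit errors are independent and uniformly distributed is mine, and real drives fail in messier ways:

```python
# Rough URE (unrecoverable read error) math for a single-parity rebuild.
# Assumes independent, uniformly distributed bit errors -- a simplification,
# but enough to show the scale of the problem.

def p_read_error(bits_read: float, ure_rate: float = 1e-14) -> float:
    """Probability of at least one URE while reading bits_read bits."""
    return 1.0 - (1.0 - ure_rate) ** bits_read

TB_BITS = 8e12  # 1 TB = 10^12 bytes = 8 x 10^12 bits

# Rebuilding one failed drive in a 10 x 6TB single-parity group means
# reading the 9 surviving drives end to end.
surviving_bits = 9 * 6 * TB_BITS  # 4.32 x 10^14 bits

print(f"P(URE during rebuild): {p_read_error(surviving_bits):.2f}")
```

With those numbers the rebuild is more likely than not to hit at least one unreadable sector, which is exactly why double parity (RAID 6) or a lower URE rate matters at these capacities.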

I specified no larger than 3TB drives and no larger than 10+2 RAID6 storage in our recent system replacement and am happy with the results: only one drive has failed in the nearly a year since the system was deployed.


I get what you're saying, but we don't deploy RAID anymore. Your arrays are quite large, which makes rebuilds extremely difficult to accomplish with larger drives. When we deployed RAID 5 we never went past 3+1 or 4+1. If you want a better option, I would suggest looking at RAID 10: it has much better redundancy than RAID 5 or 6, rebuilds are quicker, and there is less chance of a rebuild failure.
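The rebuild-exposure difference is easy to quantify: a RAID 5 rebuild has to read every surviving drive in the group, while a RAID 10 rebuild only copies the failed drive's mirror partner. A quick sketch using the 4+1 group size from this post and the 1-in-10^14 error rate mentioned earlier in the thread; the 4TB drive size and the independent-error assumption are illustrative:

```python
# Back-of-envelope rebuild exposure: RAID 5 reads all survivors in the
# group, RAID 10 reads only the failed drive's mirror partner.
# URE rate of 1e-14 per bit is the consumer-class figure; errors are
# assumed independent, which is a simplification.

def p_read_error(bits_read: float, ure_rate: float = 1e-14) -> float:
    """Probability of at least one URE while reading bits_read bits."""
    return 1.0 - (1.0 - ure_rate) ** bits_read

TB_BITS = 8e12  # 1 TB = 8 x 10^12 bits
drive_tb = 4    # illustrative 4TB drives

raid5_bits = 4 * drive_tb * TB_BITS   # 4+1 group: read the 4 survivors
raid10_bits = drive_tb * TB_BITS      # mirror pair: read one partner

print(f"RAID 5 (4+1) rebuild:  P(URE) = {p_read_error(raid5_bits):.2f}")
print(f"RAID 10 pair rebuild:  P(URE) = {p_read_error(raid10_bits):.2f}")
```

Under these assumptions the RAID 10 rebuild reads a quarter of the data and carries a correspondingly lower chance of tripping over an unreadable sector mid-rebuild.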

We have some older systems with RAID 5 and 4TB drives in them and have had no issues rebuilding the arrays.

So far in the last year we have had one hard drive failure out of 92 hard drives and 192TB of total storage. I find it hard to justify all the extra cost for RAID with such low failure rates. If we had a major incident we would pull the footage immediately anyway.

RAID10 was not a cost-effective option for our 700TB system. There are newer technologies I've been keeping an eye on but like I said, I'm not certain they are ready for our write-intensive use. Check out some of the white papers available on "Distributed RAID".

Basically, up-and-coming storage technology owes a nod to Pivot3, although from what I've heard, their basic idea is/was sound but their implementation left something to be desired.

We typically have 4/5/6 TB drives in use these days. There is still a cost premium on the 5 and 6 TB drives, but that is coming down quickly. Right now 4TB is the sweet spot on a cost basis, so most of our standard builds start with that drive, and we bump it higher when doing so avoids the need for extra systems.

We use HP disk shelves, and 2TB drives are the cost/size sweet spot for RAID right now.

With 35 votes so far, 2TB has the most votes, but 3TB and 4TB combined have almost 50% of responses.

With 57 votes now, 3TB and 4TB have inched up and now are a majority of votes. So 2TB to 3TB seems the most common overall.

Wonder what kind of crimp, if any, this is putting on drive and storage array manufacturer sales, with drive capacities getting so much bigger and fewer drives being needed. We've been seeing a slight downward trend in the number of drives, and in the number and size of arrays, needed for proposals. With no one really clamoring for 4K anytime soon, it seems like that trend will continue.