Member Discussion

Jail Needs ~3PB Of Surveillance Storage - What To Do?

How do I achieve one year plus a day of storage for 80 x 3.0 megapixel cameras? At 20fps, my calculator shows about 3400 TB. If this is what is demanded, how do I do it? And should I then use RAID?

This is for a detention center wanting high resolution video and a year of retention in case of lawsuits.

Yes, you are looking at storage requirements in the range of multiple PBs (3000TB = 3PB, etc.). How much exactly depends on recording type (continuous vs motion only) and the specific scene complexity, camera type, etc. The ~3PB number assumes continuous recording.
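For anyone who wants to reproduce the math, here is a quick back-of-envelope sketch. The ~10.7 Mbit/s per-camera bitrate is an assumption implied by the original ~3400 TB figure, not a measured value - measure your actual cameras.

```python
# Rough storage estimate for continuous recording.
BITRATE_MBPS = 10.7       # per camera, Mbit/s (assumed, not measured)
CAMERAS = 80
RETENTION_DAYS = 366      # one year plus a day

seconds = RETENTION_DAYS * 24 * 3600
total_bits = BITRATE_MBPS * 1e6 * CAMERAS * seconds
total_tb = total_bits / 8 / 1e12   # decimal terabytes

print(f"{total_tb:,.0f} TB (~{total_tb/1000:.1f} PB)")   # roughly 3,400 TB
```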

Have the cameras been picked already? If not, I urge you to carefully test and measure the bandwidth consumption of each camera. Even for the same resolution / frame rate, bandwidth can vary significantly (see: IP Camera Bandwidth / Storage Shootout). Given the size of your storage needs, picking a better camera could cut storage costs meaningfully (tens of thousands of dollars).

Will the jail consider / accept motion boost recording or reducing resolution / frame rate over time? For example, for the first month, record at 3MP/20fps but after the first month, drop it down to 3MP/5fps or 1.3MP/10fps, etc. Many VMSes support multi-stream recording which could deliver significant overall storage savings.

Beyond that, you are looking on the order of ~1000 hard drives (depending on hard drive size, redundancy implementation, etc.). That's going to be a physically big system.

With such systems, network based storage is most common but since you only have 80 cameras, direct attached might be an option as well (see: Direct Attached Vs Network Based Video Surveillance Storage?). Finally, there's a lower cost surveillance storage alternative that might make sense here - Veracity Coldstore Overview.

There are many different ways to approach this. I am sure there will be a number of good suggestions in the comments.

When I read this question, the first thing that popped into my mind was Veracity. The next was IndigoVision for the low bandwidth, like Carl's system, and then the question of outputs/clients, since most housing facilities of this type require many people to view sections of cameras. This will likely impact your switch selection, as multicast will likely be a costly part for it to work well, requiring managed Layer 3 capabilities. That brings you back to the VMS / camera and storage selection. Although most entities won't for many reasons, you could ask if they will spread the storage purchase over multiple budget cycles. You don't have to buy all 12 months in advance, and 4TB drives are just getting popular. It was easier with tape decks and matrixes, except for the quality of the final product.

Go with the best cameras. Don't worry about bandwidth yet. Storage is cheap and getting cheaper.

Many of the 'best' top performing cameras are also strong in bandwidth reduction, typically because the advances / processing / cost required for strong image quality also can be used to reduce bandwidth.

Veracity Coldstore is likely to be your best option in terms of both reliability and price.

There's certainly consensus that Coldstore is competitively low cost. The reliability aspect is more debatable as its reliability comes from turning disks off / writing to one pair at a time (or doubling the number of disks). This thread is not the appropriate place to debate this in depth but I did want to provide that context. For those who want to know more and discuss Coldstore, go to our Veracity Coldstore Overview.

Again, the new technology and software in this area is Spectra Logic. It's very well accepted by IBM, EMC, etc. and the majority of the big data center players, so you will have the IT guys already trained, and it's easy.

To be clear, what you are recommending is tape archive, which is offline. In this case, given that they want a year of storage and most of it will rarely, if ever, be looked at, I think it's worth considering. For example, see our post on Long Term Storage / Digital Tape.


As the manufacturer of COLDSTORE, can I correct a technical point? Our reliability does not come from just switching disks off. It comes from eliminating or mitigating the three disk killers: temperature, vibration and wear. COLDSTORE's Sequential Filing System eliminates vibration almost completely and reduces disk operating temperatures. This is proven in many large multi-petabyte installations.

Also highly relevant to this discussion is the fact that COLDSTORE is ideally suited to very large drives, as it never requires rebuilding disks. We have 6TB disks now, with 8TB and 10TB disks promised from disk manufacturers. These will be unsuited to RAID5 and RAID6 applications, as the rebuild times will be so long as to be impractical.

Such large disks will make 3PB requirements such as this very cost effective.

"It comes from eliminating or mitigating the three disk killers: temperature, vibration and wear."

And that comes from switching disks off.

Update - To clarify, they write to one pair at a time and then switch the disk off.

The bigger point is that if that drive fails, for whatever reason, however unlikely you say it is, there's no redundancy. And many users are just not comfortable with that.

If you would like to discuss this more, you can comment further at: Veracity Coldstore Overview and readers can follow it there. This will not become a specific vendor thread.

The audacity to expect Veracity to function in that capacity. I find their concept at least more palatable than TimeSight's resampling as the solution to longer storage. I just happened to agree with John... I'll never let it happen again! :)

Hi, we have implemented a few multi-petabyte systems. You should analyze IOPS performance. I understand that the main function is archiving, so the requirements should be achievable by almost all storage systems. You can go with a DAS or SAN architecture depending on your server distribution. Solutions like Dell EqualLogic, HP 3PAR, etc. are some options, or you can go with entry-level options like UltraStor or Supermicro. I recommend you use SATA drives. RAID 5 or 6 affects performance, but the impact shouldn't be substantial for this application.

One thing to consider is that they don't need it all up front if they do decide to use that much storage. If the first full write of the disks won't happen for a complete year, then you have the duration of that year to expand. I've found that drive prices drop so much, so often, that adding storage as you need it at quarterly or half-year intervals can actually save tens of thousands of dollars on multi-petabyte arrays.
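As a purely hypothetical illustration of that phased approach: buy a quarter of the drives every 3 months instead of all up front. The starting price and the ~5% per-quarter price decline below are assumed figures for the sketch, not market data.

```python
# Hypothetical phased-purchase comparison (all prices assumed).
total_tb = 3400
price_per_tb = 100.0           # assumed starting price, $/TB
decline_per_quarter = 0.05     # assumed quarterly $/TB decline

upfront = total_tb * price_per_tb
phased = sum((total_tb / 4) * price_per_tb * (1 - decline_per_quarter) ** q
             for q in range(4))

print(f"up front: ${upfront:,.0f}  phased: ${phased:,.0f}  "
      f"saved: ${upfront - phased:,.0f}")
```

Under these made-up numbers, the phased buy saves roughly $25K; the real savings depend entirely on actual price movement.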

Veracity Coldstore is actually a good option provided the VMS chosen is well integrated, and therein lies the rub: only a few have a deep integration with their Sequential Filing System.

As far as just ease of configuration and use, I've found the Petarack from Aberdeen to be one of the easiest to set up and configure for large storage capacity jobs. Bear in mind that one petabyte of storage is roughly a third of a million dollars. So this customer would be looking at a million dollar storage array to achieve what they're looking for.

We have installed our share of multi PB CCTV systems, and here are my two cents.

My recommendation is to go with an iSCSI SAN with dual controllers. If you go with DAS, you risk losing access to the data, at least temporarily, if the connected server goes down.

80 cameras is not a lot of load on the server side or the network (even if you need to dual stream), and I agree selecting the right camera and calculating the storage requirement based on the right bitrate is critical.

The fact that you need the video available for a full year will be the main consideration.


Redundancy at every level of the storage system:

· Network ports: at least dual per controller – teamed, each connected to a different switch (note: same thing on the server side).

· Power supplies & Fans – dual hot swappable.

· Controllers (RAID cards): dual active/active or active/passive, hot-swappable with battery backup for the cache.

· JBOD connectors: dual & hot-swappable

· HDD: RAID 6 with a local hot spare for every 48 HDDs is the best value/performance combination – note: with so many HDDs running, you will have regular visits to swap dead ones, at least in the first few months and then again after the second year.

RAW VS Usable:

Whatever value you come up with for the usable storage, add at least 10% extra free space to avoid running the HDDs at full load.

For the capacity calculation, the simplest formula for Raw to Usable conversion we use is:

(X [# of HDDs in a RAID group] − 2) × HDD density × 0.9, plus 1 hot spare

Ex: In the case of 24-HDD enclosures, using two enclosures for one RAID group:

· Raw: 48 × 4TB = 192TB

· Usable: (47 HDD − 2 HDD) × 4TB × 0.9 = 162TB usable, with 1 HDD (4TB) hot spare

· Out of the 162TB Usable, allocate only 146TB for use - leaving 10% spare

o So 192TB RAW will allow you 146TB usable
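The raw-to-usable steps above can be sketched as a small helper. This follows the formula in the comment exactly (2 parity drives per RAID 6 group, one hot spare, the 0.9 factor, then a further 10% free-space reservation); it is a sanity check on that arithmetic, not a general-purpose capacity planner.

```python
# Raw-to-usable sketch following the formula above.
def usable_tb(total_drives, drive_tb=4, hot_spares=1, parity=2):
    """Return (usable TB, allocatable TB) for one RAID 6 group."""
    in_group = total_drives - hot_spares          # hot spare sits outside the group
    usable = (in_group - parity) * drive_tb * 0.9 # formula's 0.9 factor
    allocatable = usable * 0.9                    # keep a further 10% free
    return usable, allocatable

raw = 48 * 4                              # two 24-bay enclosures, 4TB drives
usable, alloc = usable_tb(48)
print(raw, round(usable), round(alloc))   # → 192 162 146
```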

Raid Controllers & JBODs:

The cost per TB of the storage system goes down as you increase the number of JBODs per controller. Hence a powerful RAID controller is worth the investment.

Some RAID controllers will allow you to connect up to 256 HDDs (with a second controller in active-passive or active-active mode), which would allow for up to 10 JBODs per controller (if you're using 24-HDD enclosures).


4TB/6Gbps HDDs are now the norm for large storage systems (best price/value). Note: 6TB and 12Gbps drives are coming out (confirm that your storage system is compatible).

The HDDs have to be enterprise-level 7,200 RPM or AV drives (better priced). We have seen that SATA drives can do the job just fine; no need for SAS (especially since you have only 80 streams of input).

External factors:

Finally, I can't emphasize enough how important it is to keep the temperature below 19 degrees Celsius, to have a stable power source (preferably two independent sources), and to monitor the room environment closely.

The amount of heat from such a storage system is not small. Keep a backup AC unit; the slightest interruption in the AC will spike the temperature due to the number of drives, and in no time the system will become an HDD popcorn machine.

Provide a different power source to each of the power supplies in the storage system (preferably through a smart UPS). Any unplanned power interruption could easily damage your RAID setup and corrupt the data – back to square one.

Install an environmental monitor (temperature, humidity and power) in the room and have it connected to the guards on duty to alert them immediately to changes in temperature, power interruptions or humidity.

If planned properly and commissioned with attention to detail, it should run like clockwork for years.

I am amazed at the intelligent responses. So glad to have found IPVM. The vast wealth of knowledge you guys share is overwhelming. Thank you for the info and for being kind enough to share! Now to get started on this mountain.

Nodal detailed why large storage projects are not just a bigger version of a regular job. I worked with an integrator on a large IP conversion from PC-style encoders that wrote to jukeboxes to embedded encoders that stored on servers. It took a few years to convince the client to change, a few MONTHS to properly design, and just a couple of weeks to implement. It's been years, and I hear they are now upgrading storage. All the power load, heat, spacing and electrical requirements were planned before the install, and things went well. If they had shown up with gear and installed where they were told, it would have been a disaster. They say measure twice and cut once; that goes for each aspect of the planning. By the way, most of the power and much of the heat issue is Veracity's marketing message. Whether it works or not I can't say; that's up to them and their customers. You could use a small IBM Tivoli system where they write first to fast-spinning disk, copy to slow-spinning disk, and off-load to tape. It depends on how much time you have to recover video from incidents at different periods. I would think, it being IBM, it would cost a pretty penny.

You might want to check into this: 180TB of storage in a 4U rack for about $12,000.

3 PB for a little over $200,000.00

Storage Pod 4.0: Direct Wire Drives

I would recommend manufacturers that live and breathe storage. I purchase my equipment through a local reseller here in Dallas. I've been very happy with the performance, and the price is better than what I find online.

The manufacturers I use are:

Assuming you're using a Windows machine for the NVR:

I would recommend going with Fiber Channel storage (SAN). I'm a big fan of block-level access to storage pools. iSCSI is good, but trickier to set up; plus, I've had issues with some iSCSI implementations and software in the past. Fiber Channel has always been straightforward and rock solid.

You're probably not going to run into any IOPS issues. Each storage bank holds 16 drives, 14 usable (the way I set it up). 7,200 RPM drives get about 80 IOPS on the low side, so you'll have about 1k IOPS to work with per 16 drives in RAID 6.
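That IOPS figure checks out arithmetically; a minimal sketch, using the 80-IOPS-per-drive low-side figure quoted above (an assumption, not a benchmark):

```python
# Back-of-envelope IOPS per 16-drive RAID 6 group.
IOPS_PER_DRIVE = 80   # low-side estimate for a 7,200 RPM drive (assumed)
USABLE_DRIVES = 14    # 16 drives minus 2 for RAID 6 parity

group_iops = IOPS_PER_DRIVE * USABLE_DRIVES
print(group_iops)     # → 1120
```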

You will run into a bandwidth issue. Assuming you're dealing with H.264-compressed video (if it's MJPEG you have a much bigger issue), 80 streams of video, and an average bitrate for a 3-megapixel camera at 20fps of about 11 MB/sec, you will need a bandwidth of 880 MB/sec. Depending on what technology you are using (NAS, SAS, SAN), you might have as much as 10% overhead, so a safe guess is you'll need a total bandwidth of 980 MB/sec.

If you're using Fiber Channel, the max write and read speed is about 80% of the channel bandwidth. On your 8Gb Fiber Channel, that would be 640 MB/sec, so you need 2 x 8Gb Fiber Channel connections to provide enough bandwidth.
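The link count above follows directly from those two figures. A quick sketch using the numbers as quoted in this comment (980 MB/s required, 640 MB/s usable per 8Gb link - the commenter's assumptions, not measured values):

```python
import math

# Fiber Channel link count under the quoted assumptions.
required_mb_s = 980    # total write bandwidth needed (quoted above)
per_link_mb_s = 640    # ~80% of an 8Gb FC channel (quoted above)

links = math.ceil(required_mb_s / per_link_mb_s)
print(links)           # → 2
```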

If you are trying to connect everything to one giant system, you'll need dual 8Gb Fiber Channel, which also means you'll need an OS that understands multi-path I/O (Windows Server 2008 R2, Windows Server 2012, or Windows 8.1). I think the largest storage pool you can set up without buying insanely priced hardware is about 1.4 PB. There are other issues as well; the best thing is to split the system into smaller chunks.

If you divide the video between 5 NVRs, this is very doable.

Here is how I set up video production companies. The main reason I recommend SAN is that the RAID arrays are very easy to set up and maintain, and swapping out dead hard drives is quick and painless.

Sample setup.

1 x QLogic Fiber Channel Switch (QLogic is awesome) – $5,000

5 x Single-Channel Fiber Channel Card – $900 each

5 x Tiger Store install for shared storage access and storage pools – $995 each

15 x SAN Controller with 64TB (56TB in RAID 6), can add 3 JBODs – $15,000 each

45 x JBOD Expansion, 64TB (56TB in RAID 6) – $7,000 each

15 x APC Smart-UPS battery backup – $1,500 each

The total bill would be about 580K for hardware alone.
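The quoted line items do sum to roughly that figure (prices as quoted in this comment, not current list prices):

```python
# Sanity check of the sample bill of materials above.
bom = [
    ("FC switch",       1,  5_000),
    ("FC HBA",          5,    900),
    ("Tiger Store",     5,    995),
    ("SAN controller", 15, 15_000),
    ("JBOD expansion", 45,  7_000),
    ("UPS",            15,  1_500),
]
total = sum(qty * price for _, qty, price in bom)
print(f"${total:,}")   # → $576,975, i.e. ~580K
```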

Each NVR gets connected to 3 SAN controllers, and each SAN controller is connected to 3 JBODs.

I make every 16 disks into a RAID 6 group; I think that if you expand a RAID group beyond 16 disks, you are pushing your luck. I set up email alerts on the SAN so I'm notified if any drives go down or on any errors.

Each SAN controller and JBOD takes up 3U of space. Assuming you're using 48U cabinets, you'll need 4 cabinets to hold all this.

I like Tiger Store because you can easily set bandwidth limits on each storage pool and set hard caps on file space allocation. It also makes tape backup easy by allowing shared drive access. If the SAN were for database applications, I would take a snapshot of the SAN and then give the tape drive exclusive access to that snapshot for backup. With large amounts of video, you're not allowed that luxury; shared block-level storage is really the easiest way to make a tape backup of video files.

I like the Backblaze platform a lot, but there is a big issue with getting qualified techs to service it. Swapping out bad drives on a SAN is extremely easy and painless.

You could also offload old footage to LTO-6 tape. I recommend Tandberg – good tape drives and autoloaders.

The NEO 400s can store 120TB of uncompressed data; depending on which backup program you use, that might come down to 100TB. About 14K.

Tivoli software (there is better backup software out there) is about 15K after setup (based on a previous quote).

I use BRU backup for the Video Production Houses.

Each LTO-6 tape costs about $70 and holds 2.5 TB of uncompressed data.

If you back up all the data using LTO-6, you could store 3PB of data for about 85K per year.
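The tape math is easy to verify from the figures quoted above (2.5 TB native per LTO-6 tape at ~$70 each):

```python
import math

# Tape count and cost for 3 PB on LTO-6 (figures from the comment above).
data_tb = 3000      # 3 PB in decimal TB
tape_tb = 2.5       # LTO-6 native capacity
tape_cost = 70      # quoted $/tape

tapes = math.ceil(data_tb / tape_tb)
print(tapes, f"${tapes * tape_cost:,}")   # → 1200 $84,000
```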

You could cut the cost of the storage significantly.

Keep 6 months of video on hand and back up the rest to LTO-6 tape. That divides the cost of the SAN in half.


280K for hardware.

+ 25K for an LTO-6 autoloader with customized backup software and a controller PC

+ 100K a year for new LTO-6 tapes (or you can re-use existing tapes)


I didn't spec any systems with a higher density than 16 drives per 3U; a lot of those require low-profile hard drives or cost 2x to 3x more.

Only use Hitachi or Western Digital Hard Drives.

Backblaze has a great article on drive reliability.

I have found that the majority of clients who request this amount of video storage have no clue as to what it might cost. Once they understand the costs involved, both for initial installation and for ongoing maintenance, their requirements become much more modest (realistic).

Michael, I second that.

Undisclosed A, what does the jail expect to pay just for this storage? They should reasonably expect more than a million dollars. If they choke on that, good to know up front.

We've worked on some prison projects where we specified Pivot3, and I'm aware of several correctional facilities in Mexico, the US and Canada where they're using them as well. Different VMSes, but the hardware platform of choice for large mission-critical, high-security systems seems to be Pivot3. We've actually consulted on a couple of projects where competitive server/storage systems were scrapped after months of problems, Pivot3 was installed, and the systems ran trouble-free afterwards.

No one in this thread can reasonably answer your question without a significant amount of additional information. I looked at a system once with 150 cameras that would require a literal conference-room sized room full of storage arrays. I've also looked at 150 camera systems that require 8RU of storage. There are too many factors to consider based on the provided information. Movement, image complexity, storage type, encryption, transmission protocol, night versus day hours based on geography, and so on.

Minimally, to provide any usable information, we would need to know existing capabilities/systems, and at least some basic information about where the cameras are and what happens there during what hours. That would be a good start, but still leave many variables open.

There's also a tape drive consideration, secondary server consideration, removable storage consideration, and so on. You can "retain data" for a year without it sitting on active servers, and that data can be later retrieved.

There are many ways to skin a cat, but if all you say is, "I need a cat skinned," that cat skinning can be significantly less effective and more expensive than if those making decisions are provided with as much information as possible.

PS: using static (i.e., not part of the active system) storage for anything over a month or so will probably cut your costs at least in half. Just a ballpark; more or less depending on the many other pieces of information that those answering this question don't have.

Nick, is the information presented so far really not "usable"? I can't recall a thread on any technical forum with so much insight.

Skinning a 3-ton cat narrows the options in and of itself.

A VMS with good local archiving to properly formed folder and file clip names, plus paying AWS for Glacier storage, amounts to $1,000 per month, with no investment in hardware, internal IT expertise or maintenance. Build or buy a cheap NAS with AWS integration, push that video (encrypted) to web storage, and invest in faster connectivity. This plan will allow you to change VMS, cameras, etc. over time and build a flushing policy.

Andrew, if you pay 1 cent per GB (current pricing - $0.01), that's $10 per TB and $10,000 per PB, correct?

Assuming 3PB, that's $30,000 per month, yes/no?
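Working it through at the quoted per-GB rate (storage only, ignoring retrieval and egress fees):

```python
# Cloud-archive cost check at the Glacier pricing quoted above.
price_per_gb_month = 0.01   # $/GB-month, as quoted
data_pb = 3

gb = data_pb * 1_000_000    # decimal GB per PB
monthly = gb * price_per_gb_month
print(f"${monthly:,.0f}/month")   # → $30,000/month
```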

A reverse calculation of the storage suggests 10.7Mbit/s per camera, which seems high for H.264 in what is likely a reasonably lit location. H.264 does not provide a visibly better image at the lowest compression (100% quality) compared to about 70% quality, and I would expect a bandwidth per camera closer to 6Mbit/s or less. That is a significant reduction in the total estimate.
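That reverse calculation is just the quoted 3400 TB divided back out over 80 cameras and a year of seconds:

```python
# Reverse-engineer the per-camera bitrate from the quoted total.
total_tb = 3400
cameras = 80
seconds_per_year = 365 * 24 * 3600

bits = total_tb * 1e12 * 8
mbit_s = bits / cameras / seconds_per_year / 1e6
print(f"{mbit_s:.1f} Mbit/s per camera")   # ~10.7-10.8 depending on rounding
```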

If you take the 80 cameras and try to push them to the web, you will need a continuous, reliable uplink of at least 100MB/s (800Mb/s), which will be expensive and in some locations not possible. It may be less if the above estimate is correct.

My suggestion would be to design for either a week or a month of RAID storage and then offline the data weekly or monthly, as there does not appear to be a requirement for immediate access to old activity. The archive could be done with an archiver like the Veracity Coldstore, just pulling the disks on a regular basis, or with a tape archiver, streaming the data off for longer-term storage. The Veracity solution would allow more rapid recovery but will require you to buy and store a pile of hard disks; the tape solution should store more per tape. Either way, the savings in equipment room space, capital equipment, and especially power and cooling would be significant.


Call it a government oversight and lose all the emails......

Personally, as a small business, if I were to have a client with such a high-profile application, I'd be talking to my insurance company about my liability insurance. During my 35 years in the I.T. support business, I've seen storage failures with many of the solutions above, and being shoe-horned into a lawsuit would be inevitable. The minimum rule of thumb in data storage is to spread your backups far and wide, keeping the running data plus two backups. This discussion does raise the issue of having the proper agreements in place.

Firstly, for prisons/jails I'd be very surprised if the recording is anything but continuous. On most of the prison projects I've known, you have to "prove nothing happened" just as much as prove something did happen, so frame rate boosting is questionable in the "life and death" situations this type of environment represents.

It's worth noting that in Qatar there are already many installations in all of the banks there, where the government has mandated that all recording be at 3MP, 20fps, continuous, with 120 days' retention. This has been in place for a couple of years now, with some of the larger branches having 100+ cameras.

I would recommend reaching out to companies like March Networks, Milestone and Genetec to understand how they have met these requirements when working with storage providers like EMC, Dell and IBM on these Qatar banking/MOI projects. They are all using online live storage rather than any form of tape storage, because of the requirement to produce evidence urgently in a critical situation regardless of the age of the event (the same could be said of a jail/prison). This is coupled with the challenge that some VMS vendors cannot support the delay of tape-archive retrieval.

The other thing to note from testing in Qatar is that there is a massive difference in bandwidth performance from camera to camera at the same video quality, regardless of the supposed "commonality" in compression methods.

Notes: RAID was always used. The connection method to the RAID storage depended on the VMS provider's ability to manage its storage efficiently, so always refer to the VMS provider's proven/suggested storage connectivity – DAS vs NAS vs iSCSI vs SAN.

"DAS vs NAS vs iSCSI vs SAN" – iSCSI is not a form of storage; it's a connectivity method applicable to SAN.

The answer to the question of whether the usual suspects (March Networks, Milestone and Genetec) support any of the standard forms of storage is yes. The same is true of almost all VMSes that claim enterprise functionality.

As for nearline or even offline storage, why discard it?

  • It could be backed up via the VMS or the storage, and it could be retrieved without the VMS. Again, most of the usual suspects watermark their video for forensic purposes and provide a standalone player.
  • It could offer a cost reduction without any noticeable impact on functionality.

o The only cost: retrieval of an event older than 30 days (e.g., if video is moved to offline storage after 30 days) goes from seconds to minutes.

As for storage brands, I would recommend expanding your scope to better price/value brands.

In the interest of full disclosure, I lead the Isilon video surveillance practice for EMC and have experience working for hardware (Axis) and VMS software (ipConfigure) vendors in this space over the past decade. You should seriously consider EMC Isilon for this application for a number of reasons: (1) EMC Isilon is the industry leader for NAS, with greater than 80% utilization, meaning you are actually using more of what you paid for; typically when using RAID you have to over-provision storage capacity by 30-35%. Isilon uses a protection scheme (Reed-Solomon erasure coding) at the file level to protect data across disks, so it is not dependent on RAID controllers. (2) Isilon has been tested and validated with leading VMS partners (Genetec, Milestone, Verint, Aimetis) to reduce risk for the end user/integrator; we publish best practices and configuration tech notes to get the highest performance with these vendors. (3) Isilon allows you to pay as you grow. Adding more storage is as easy as pressing a button, and 60 seconds later the node has been added to the cluster; no time-consuming mapping of LUNs/volumes to cameras is required. (4) Isilon is a distributed architecture insofar as performance increases as you add more nodes. CPU, memory, storage and network I/O are pooled, shared resources; unlike traditional storage, where your first day is your best day. (5) Other features include load balancing, WORM protection, self-encrypted drives, and much more. Check out: