Using 4TB Hard Drives In Surveillance?

JH
John Honovich
Mar 09, 2013
IPVM

There are a growing number of 4TB drives available (e.g., here's what's available at Newegg). I am curious though if anyone is using them yet in surveillance applications. It seems that the two big concerns here are price (disproportionately more expensive than 3TB) and that most of them seem to be consumer class drives.

While we are on this topic, are you using mostly 2TBs or 3TBs now for new surveillance deployments?

Carl Lindgren
Mar 09, 2013

I believe all vendors are proposing the use of 4TB drives in our system, although installation won't start before the summer. One concern I would have is RAID rebuild times. The two RAID systems we purchased in November of 2011 are populated with 2TB SATA drives. They take at least 40 hours to rebuild a drive, during which the RAID is running in degraded mode. We've recently tested a few systems that have used 3TB SATA drives and, although they are not nearly as slow, averaging a bit over a day and a half, they still would take 1/3 more time if populated with 4TB drives. I believe 6Gb/s could potentially halve rebuild times but that would also depend on other factors like bus speed.

There are at least two thoughts for Enterprise storage: highest capacity per chassis saving footprint, power and cooling versus fastest and most reliable, which takes more footprint, power and cooling but provides higher reliability and shorter rebuild times. The former would use large capacity SATA drives while the latter would use SAS or SATA III 10k drives.

One issue with 10k drives is capacity. The largest available at this time appears to be the Hitachi Ultrastar C10K1200, at 1.2TB capacity; Seagate and WD typically max out at 900GB. There are also relatively few RAID manufacturers who take advantage of the smaller form factor of 2.5" drives - many use adaptors that hold the 2.5" drives in 3.5" sleds, saving no footprint.

Then you have the SAS vs. SATA argument. Which are longer lasting or more reliable?

SE
Seth Everson
Mar 10, 2013

> Then you have the SAS vs. SATA argument. Which are longer lasting or more reliable?

By default, you can say SAS is more reliable. The reason is that when you say 'SAS', you also are saying 'Medium to Large enterprise, critical workloads, starting price $10,000'. SAS is only found in higher end solutions, and thus only enterprise drives, with their more reliable architecture, are put into SAS chassis.

SATA is actually competitive with SAS in terms of the protocol and architecture. It's a point-to-point bus where each controller has a dedicated 1.5G/3G/6G path to each device, just like SAS. In fact, almost all SAS controllers are capable of negotiating down to SATA with a device; hence the chassis that integrate both.

However, if the solution is designed properly and uses the right components, you can find a SATA system that will meet or beat the expectations of a SAS solution. Personally, this is the direction I take: SATA-based systems can scale as well as SAS, so you can create designs with enough redundancy, paired with the right software, to achieve (or exceed) the five nines we're all hoping for.

JH
John Honovich
Mar 09, 2013
IPVM

Carl, thanks for the feedback. Not to hijack my own thread :) but you bring up the point about long RAID rebuild times. We've been looking at EMC's Isilon. They are claiming dramatically reduced rebuild times. My understanding is that this is because they use a separate Infiniband switch/network for rebuilds. Have you looked at that? Any thoughts - pro or con?

Carl Lindgren
Mar 09, 2013

Darned expensive, but we have been talking to vendors about that option, along with DDN, which employs a similar architecture and also offers very dense chassis that can hold up to 240TB in 4RU (SFA7700) or 180TB in 4RU (S2A6620) and can intermingle SSD, SAS and SATA drives in the same chassis. DDN also uses InfiniBand, and their systems can perform a few other tricks, like performing only partial rebuilds if a drive "kicks out" due to errors but comes back online after recovering.

Unfortunately, our test of a DDN system last July or August with Genetec did not go well. The system was very slow - the VMS took nearly 1/2 hour to save a 1-hour clip and had other issues, including slow stream access (system would sit there with the rotating clock and "buffering" displayed on-screen). A second Genetec demo with more traditional Fujitsu storage cut the "buffering" issue and clip save times by approximately 2/3 (<10 min for a 1-hour clip).

Speculation is that DDN didn't have the storage set up properly for surveillance applications and they supposedly demonstrated excellent performance in a subsequent demonstration at another property. DDN has never explained the reason for the problems we encountered, even to Genetec.

[IPVM Editor Note: Read IPVM's Overview of DDN's storage.]

JH
John Honovich
Mar 09, 2013
IPVM

Carl, great feedback and, just remember, any problems with a vendor's system are your fault ;)

Paul Grefenstette
Mar 09, 2013

We just ordered 3TB Seagate SV35 drives to test, but for the past few years we have been on 2TB SATA drives from Seagate and WD in RAID 5 setups. We have some larger installations on SAS, but I think SATA would have been fine as well.

Carl Lindgren
Mar 09, 2013

Yeah, I've tried to obtain an explanation for DDN's poor showing but they seem to have clammed up. That will probably be the final straw in our minds. If they had been more forthcoming we would likely have given them some consideration during server/storage and system architecture discussions with the remaining integrators but at this point, we have little confidence in their systems, which leaves EMC.

I'm still considering talking to their sales guy again. It's possible they believe that their only chance with us was via Genetec, but that isn't true. Both IndigoVision and Geutebruck (the remaining VMSes) are fairly open architecture, and IndigoVision claims to have certified both manufacturers.

I love options and hate lack of choices.

Carl Lindgren
Mar 09, 2013

Paul,

You're still deploying RAID5 for VMS? I wrote an article here once upon a time discussing the relative merits of RAID6 over RAID5 for streaming video storage. My premise was (and still is) that for many reasons, especially the likelihood of reported "simultaneous" multiple drive failures causing data loss, RAID5 is far more dangerous for streaming video storage than for "normal" data storage.

The essence of my argument is that video storage reads data only occasionally while writing continuously. That defeats the normal error detection and recovery methods used by drive manufacturers (e.g., SMART), and it can leave "bad" drives in service until another drive's failure initiates a rebuild. The system then attempts to read parity data from that first, latent bad drive, fails it as well, and with two failed drives a RAID5 set loses data. Since it would be extremely rare for two faulty drives to remain in service simultaneously and a third to fail on top of them, RAID6 systems are orders of magnitude more fault-tolerant.
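
To put rough numbers on that risk (the figures below are illustrative assumptions, not measurements from our system): a RAID5 rebuild must read every surviving drive end to end, so the chance of tripping over an unrecoverable read error (URE) grows with the bits read. A minimal sketch, assuming the commonly quoted 1-error-per-10^14-bits URE spec for consumer SATA drives and an 8-drive array:

```python
import math

def p_ure_during_rebuild(surviving_drives, drive_tb, ure_per_bit=1e-14):
    """Probability of hitting at least one unrecoverable read error while
    reading every surviving drive in full (Poisson approximation)."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # decimal TB -> bits
    return 1.0 - math.exp(-bits_read * ure_per_bit)

# 8-bay RAID5: one drive fails, seven must be read in full to rebuild.
print(f"8x4TB RAID5 rebuild: {p_ure_during_rebuild(7, 4):.0%} chance of a URE")
print(f"8x2TB RAID5 rebuild: {p_ure_during_rebuild(7, 2):.0%} chance of a URE")
```

RAID6 survives that exact case because the second parity block can cover a read error hit during a single-drive rebuild, which is where the "orders of magnitude" difference shows up in practice.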

The proof of the pudding is that we lost a number of RAID sets (and associated data) between 2003 and 2006 when our system used RAID5. In 2006, we replaced our servers and storage and used RAID6. We have never lost data due to multiple drive failures on our current system in 7-1/2 years.

HB
Harold Baumgarten
Mar 09, 2013

I'm still using 1TB drives with RAID6 for my critical installs; fast rebuild times and increased protection are our main concerns, even more than system cost or power consumption/footprint/cooling. In any case it depends on the application.

Carl Lindgren
Mar 09, 2013

Hi Harold,

Have you looked at other systems like EMC and DDN that are more dense and still have short rebuild times and/or other fault tolerance features?

HB
Harold Baumgarten
Mar 09, 2013

Hi Carl. Looked, yes; used, no. We have had good results with Pivot3s, running virtualized Archivers on top of the shared storage pool.

KS
KV Swami
Mar 09, 2013

I have been using Hitachi's Deskstar 4TB 7K4000 SATA III Drives with Veracity's COLDSTORE Storage Solution since last August. I bought 17 drives for $250 each as there was a deal at that time. I bought two extra drives just in case but have not had to use them yet. The drives have been working fine. The advantage of the COLDSTORE solution is that only two drives are on at once.

HB
Harold Baumgarten
Mar 09, 2013

I'm interested in this unit's performance; can you please comment on your playback seek times? Also, are you using the redundant recording mode, i.e. dual-disk recording? Storage capacity is cut in half, but it's more secure.

KS
KV Swami
Mar 10, 2013

Hello Harold,

I am using it in COLDSTORE mode, where capacity is "N-1", i.e. 60 TB is 56 usable TB.

In regards to seek times, I am using Genetec as the VMS, and from a spinning drive the seek time is as fast as Genetec can deliver. When using a powerful client workstation, recorded video pulls up within 5 seconds from a spinning drive. For a drive that is off, I will have to get back to you after checking the exact seek time later this week when I am back in America.

Additionally, I have never suffered a RAID failure with COLDSTORE.

HB
Harold Baumgarten
Mar 18, 2013

Hi KV, have you had a chance to check out the seek times for the off drives yet?

Please advise whenever possible.

Thanks in advance

Harold

KS
KV Swami
Mar 22, 2013

Hi Harold,

Sorry for the delay. I pulled up a month-old video archive file in a Security Desk Archive task, and it took 48 seconds to start playing the video archive from a disk that was off.

Hope this helps.

Thanks,

KV Swami

HB
Harold Baumgarten
Mar 23, 2013

Hi KV

Thank you kindly for the info.

48 seconds is quite a long time to be waiting for a playback but then again it depends on the critical nature of the protected assets.

I suppose that after a month has passed after an incident this delay might be acceptable, considering the savings in disk cost/power consumption,etc.

You did mention before that playback from a spinning drive was within 5 seconds from the request which is quite good and suitable even for critical applications.

Seems to me you are satisfied with the performance of your system.

Again, thank you for your cooperation.

Regards

H

KS
KV Swami
Mar 26, 2013

Hi Harold,

I performed another test. As you may know, you can also access the archived video files on the COLDSTORE by opening Windows Explorer and entering the network path of the COLDSTORE (ex: 192.XXX.XXX.XXX\coldstore); within a fraction of a second the VideoArchives folder is there, containing subfolders for each camera that archives to that specific COLDSTORE. I opened the folder of one of the cameras, in which there are folders by date of recording. I opened the Feb 23, 2013 archive folder (which contains video on a disk that is powered down) for that camera and double-clicked on an archived video file. In 33 seconds it was playing in my Genetec Security Desk client. I have noticed that the bulk of the delay is caused by the VMS.

In regards to having to wait 30 or 50 seconds for archived video that is older than one day, I believe it depends on the way the end users operate. We try to make our systems as pre-emptive as possible by using the built-in intelligence on the cameras and the features within the VMS to alert us when there are breaches based on the parameters we have set up. This allows the designated investigators to make decisions accordingly and review archived/recorded video as needed. With the 4TB drives and about 25 AXIS cameras recording at either 1080p or 3MP resolution at 15fps and highest quality, I am able to pull up video in Security Center from spinning drives up to 2.5 to 3 days back. That being said, it does not take a powered-down disk drive 30-50 seconds to spin up; it takes between 8 and 20 seconds.

As I mentioned previously, we make sure that we get alerted in real time to any situations we feel are critical, and that allows us to view, seek, or pull up archived video from spinning drives. If someone requests archived video because of an incident that came to light after the fact, whether a week or 20 days later, and they want to investigate, then in that case I am willing to wait to pull up archived video, considering the many other benefits the COLDSTORE offers us, which have been mentioned by others in this post and in the LinkedIn discussion associated with IPVM's original review of the COLDSTORE.

One of the primary reasons I opted to switch to the COLDSTORE solution was a RAID failure and loss of data. We have so far deployed 6 COLDSTORE units across three of our facilities and are extremely satisfied with the results, reliability and affordability, considering we are a non-profit organization. At two of our facilities the COLDSTOREs have been running for over a year now.

I would like to add that Veracity's customer support and technical support have been phenomenal. I have had to reach out to support mostly because of problems I created on my own in trying to change configurations, etc., but every time Veracity has responded positively and remotely repaired my mistakes. The setup with Genetec's Security Center is very simple. After I watched Veracity do it once, I have installed/configured/integrated 4 COLDSTORE units myself at two of our facilities.

I hope this helps.

Thanks,
KV Swami

HB
Harold Baumgarten
Mar 26, 2013

Excellent information KV, very helpful.

Thank you very much for your time.

Regards

Harold

JH
John Honovich
Mar 09, 2013
IPVM

Great comments, all. Interesting approach, Harold!

Btw, here's Carl's post on RAID6 vs RAID5.

HB
Harold Baumgarten
Mar 09, 2013

Thanks, John, for sharing Carl's post; quite old and still valid.

Thanks Carl, great post!

This shows again that actual user experience is what should influence new purchase decisions, not just manufacturer hype.

Carl Lindgren
Mar 09, 2013

Thanks all.

Storage failures are among the most common problems with video recording. From plain vanilla DVRs to complex server/storage architectures, storage remains a major point of failure. Even RAID6 can't overcome other types of failures, like controller and transport system failures.

Another goal will be redundant or failure-resistant data paths and storage controllers. After hard drive failures, the next most common and crucial storage hardware failure mode has been RAID controllers. While auto-failover can keep the VMS/NVR system operating, data is unavailable on failed storage systems until repairs have been effected. That typically entails replacing the failed controller and importing NVRAM data from a saved backup location (and on some systems, remapping LUNs).

On systems that are not managed continuously, the data on a failed RAID can be unavailable until the problem is identified and repaired - sometimes taking days, even when cold spare parts are available.

There are also controller failure modes that can cause a system to lose data even with RAID6. At least twice, we've encountered controller failures where the controller falsely reported simultaneous or sequential failure of many drives in a system. Although the drives weren't actually failed, the controller thought they were and marked each drive "bad". When we replaced the controller, the system still reported the RAID set failed so we had to build the RAID set from scratch and all data on that partition was lost.

HB
Harold Baumgarten
Mar 09, 2013

You are absolutely right, Carl; thus each box in an appropriate SAN has at least dual NICs for iSCSI to connect to dual core switches for redundancy, in the same way that the servers are connected to the same core switches with dual NICs. This is on a separate VLAN from the camera traffic. You should be okay with such a configuration.

DM
Duncan Miller
Mar 09, 2013

We have been using 3TB Seagate Constellation ES.3 drives for the past year with no issues. We have just placed our first order of 4TB Seagate Constellation ES.3 drives and will be testing them.

Curious if anyone has been using RAID10?

JH
John Honovich
Mar 09, 2013
IPVM

Duncan, when moving from 3TB to 4TB, what is the main motivation? Does the savings in drive bays offset the increase in per TB cost?

I am sure some surveillance systems use RAID10 though I suspect it's relatively rare.

DM
Duncan Miller
Mar 09, 2013

The motivation for using 4TB drives is to leverage the existing equipment we already have in place. We can expand the storage capacity of systems without having to purchase more head-end equipment.

The majority of our systems are remote sites with 1U or 2U servers dedicated to camera recording, using direct-attached storage. Sites range from 8 to 150 cameras each, totaling well over 2,500 cameras city-wide. Most of our sites run no RAID, but critical sites and sites with large camera counts run RAID5.

JH
John Honovich
Mar 09, 2013
IPVM

Duncan, thanks, makes sense. When you do the swap-out, what will you do with the 3TB drive being removed? I assume it will still have recorded video from the prior few weeks. Do you hold on to it just in case and load it up if they need to retrieve video from that period?

DM
Duncan Miller
Mar 09, 2013

After we swap out the hard drives, we label them with the site name, the date they were removed, and a destruction date. We then hold the drives at a secure location for a 21-day period, after which the drives are shredded. Our Privacy Impact Assessment (PIA) states that we shall maintain video surveillance footage for a minimum 21-day period, which is where the 21-day time frame comes from. We also make sure before swapping out any drives that no investigations are currently being performed.

CM
Corey McCormick
Mar 10, 2013

Just because a rebuild fails while a RAID system is recovering, all is not necessarily lost. Usually it is a single bad sector, or a few, stopping the process.

While I do think that Coldstore is an interesting approach to data storage, I don't think that 56TB usable out of 60TB is a realistic expectation for 100% data reliability. If any drive loses a sector that is not on the drive currently being written, there will be data loss, just like losing two drives in a RAID5 or three in a RAID6. I have had that happen a couple of times over the years, and the bad sectors on the drives were simply marked bad and the rebuild finished. You do not need to lose the entire volume because bad sectors cropped up. It is just offline until a human makes the call to continue the rebuild, knowing there were one or more sectors with read errors. Yes, in the multi-TB RAID volume a single sector or even a few sectors were corrupted, but what harm did that cause? Maybe a single file was corrupted, or even a directory entry... But much of a RAID volume is space that will never be read again before being overwritten, or was empty space that had never been used. Not ideal, but not catastrophic.

Catastrophic is when the fire suppression system trips and vents FM-200 gas into the data center because of a slipping belt on a cooling system (smoke but no fire). The drastic pressure/density/temperature changes crashed a room full of different SAN designs and lost racks full of data (dozens of multi-TB arrays) when dozens of 10K and 15K spinning drives crashed their poor floating heads. This is the one case where Coldstore would protect most of the data, and little else in the same data center would be as lucky... On the other hand, Coldstore would not work for the other 99% of cases due to the limited IOPS available for normal data center storage.

Carl Lindgren
Mar 10, 2013

Corey,

I've never seen a RAID system that will just mark bad sectors containing parity data and continue with a rebuild, ignoring the lost data, nor have I seen a system that offers the option to ignore bad data and continue the rebuild. Every time I've seen a RAID system fail to read the parity data during a rebuild, it also fails the offending drive. What systems do that?

James Talmage
Mar 10, 2013
IPVMU Certified

One problem with the increasing drive size is that it will potentially push you towards trying to use more cameras per drive. As we've discussed recently in the hijacked "RAM vs CPU" thread, there is a limit to how many IOPS a given drive can do, and it isn't increasing at the same rate drive size is. These drives will certainly let you store video longer, but they may not let you record more cameras.

SE
Seth Everson
Mar 10, 2013

James: Agreed. I would only recommend using 4TB drives for the use case of 'additional storage time', not for more simultaneous cameras. 7200 RPM drives typically push about 75-100 IOPS, no matter what the size. You will have to expand the storage solution to support the IOPS when new cameras are introduced.
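
To make that concrete with a back-of-envelope sketch (every number below is an assumption for illustration, not a measured figure; real results depend on the VMS write pattern, chunk size and RAID layout):

```python
def max_cameras(drives, iops_per_drive=80, raid_write_penalty=6,
                camera_mbps=4.0, write_chunk_kb=512):
    """Rough camera ceiling for an array, ignoring playback/read load.
    raid_write_penalty ~6 is the usual rule of thumb for RAID6 random writes;
    write_chunk_kb assumes the VMS coalesces each stream into large writes."""
    array_write_iops = drives * iops_per_drive / raid_write_penalty
    iops_per_camera = (camera_mbps * 1_000_000 / 8) / (write_chunk_kb * 1024)
    return int(array_write_iops / iops_per_camera)

# Swapping 2TB drives for 4TB drives changes none of these inputs, so the
# bigger drives buy retention time rather than extra cameras.
print(max_cameras(drives=12))
```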

I would recommend everyone on this thread get a hold of Google's "Failure Trends in a Large Disk Drive Population". Google deals with data access patterns that are very much like a large-scale surveillance system, perhaps with reads quite a bit higher. This PDF is pretty eye-opening, especially since their results find that SMART isn't as good as we wish it was.

This lends credence to the RAID6 configuration that Carl advocates. You will see failures in hard disks, so prepare and plan for them. I personally prefer a more RAID60-like approach, since I've been delivering solutions with a significantly lower $/TB ratio that way. Google started the 'designed to fail' architecture model, and that's the best way to design any critical system: plan for failure and implement the necessary safeguards, including high levels of redundancy for storage.

JH
John Honovich
Mar 10, 2013
IPVM

Corey, James, Seth, great comments. For background on Seth's comment about RAID60 approach, see these two articles: RAID50 overview and RAID60 pros and cons.

JD
Jeff Denworth
Mar 10, 2013

Carl, John - thanks very much for bringing this to our attention. I did some checking with the account team on this POC and learned that this evaluation looks to be the product of some hurried installation and some simple miscommunication:

  • Installation: the project timeline was challenged for installation resources at a time of rapid growth within DDN. This was compounded by the fact that it was our first test with Genetec's Security Center archiver. We've gotten feedback on subsequent installations that the combination of the new Genetec archiver and our SFA products delivers "excellent performance", and we can connect you with Genetec SEs to talk about some of these deployments.
  • Communication: following the POC, we understood that the window to discuss our results and next steps was closed. Since we were working through an integrator, we looked to them to keep us in the loop while the opportunity to talk was open. Turns out we were wrong.

By #s of deployments - Genetec is our #1 video management technology partner in the US. We have since deployed many times with their new archiver and all signs point to this example being more of an installation misfire than a technical incongruity. Carl - this could certainly have been handled better on our side, and we'll work to correct that with you going forward.

A little commentary on 4TB drives, rebuilds and false starts, from the storage guy.

  • the bit density improvement in 4TB drives (from 3TB or 2TB) means each rotation passes over more data
  • be careful not to apply 2TB thinking here: a near-line 4TB drive can stream data (in a perfect world) at rates of 170-180 MB/s
  • assuming 50% rebuild performance overhead, this results in a 12 hr full drive rebuild (efficient algorithms help here; see the short calculation after this list)
  • while rebuild windows do continue to grow, there are two additional areas to focus on
    • performance degradation: most systems aren't equipped with the processing and internal bandwidth to manage the additional data recreation without taking away from front-side application performance
    • false positives: many 7200 RPM drives often act failed, but are in actuality just stalled. While it doesn't make sense to power cycle entire arrays, there are smart approaches to resolving this problem.
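
A quick, hedged reproduction of the rebuild arithmetic above (the drive rate and overhead are left as parameters, since both vary by model and by how busy the array is; the values shown are just the example figures from the bullets):

```python
def rebuild_hours(capacity_tb, streaming_mb_s, efficiency):
    """Hours to write one replacement drive end to end at a fraction of its
    streaming rate (a conventional rebuild is limited by that single drive)."""
    seconds = (capacity_tb * 1e6) / (streaming_mb_s * efficiency)  # TB -> MB
    return seconds / 3600

print(f"{rebuild_hours(4, 175, 0.5):.1f} h")    # ~12.7 h for a near-line 4TB drive
print(f"{rebuild_hours(4, 175, 0.25):.1f} h")   # heavier front-side load, ~25 h
```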

Our SFA platforms hold the designation of being the fastest storage platforms in the industry, but we succeed through providing better operational value to data-intensive customers, with key technology to better protect data and SLAs, including: no impact drive rebuilds (through smart processing and over-provisioned internal bandwidth), autonomous drive recovery (individual drive power cycling and journaled drive rebuilds to shake out false positives and recover drives within minutes, avoiding 80% of rebuild events).

Going forward - we're working on declustered parity protection technology - this has first landed in our WOS cloud storage product - where we go beyond the above methods of data protection to handle drive rebuilds even smarter. With WOS: we only rebuild data (not unwritten blocks on drives), and we distribute the rebuild effort across all of the drives in the system - so we rebuild as fast as we read from parity drives, not the speed at which you write to any one replacement drive. This can compress rebuild times by as much as an additional 80%.

Hope that helps.

Jeff Denworth - VP, Marketing | DDN

Carl Lindgren
Mar 10, 2013

Jeff,

I acknowledge your explanation of the events. I have been in contact with both the Integrator that brought your product in and a Genetec rep. The Integrator has offered no explanation, though they are still in the "mix" for one of our final system choices. I do understand that subsequent trials with Genetec went well, and that is the main reason I would even consider DDN, though I would couch that with the caveat that we will also be talking to EMC about their Isilon storage system.

Although Genetec is no longer in the running, I understand your product(s) have been certified by at least one of the remaining manufacturers: IndigoVision. Is that true? The other remaining contender is Geutebruck. A final purchase decision will be made towards the end of May so storage vendors still have the opportunity to work with the remaining Integrators.

Please contact me and we can discuss the possibilities. I've sent you a LinkedIn invitation.

JD
Jeff Denworth
Mar 10, 2013

Thanks Carl. DDN has experience with a broad collection of VMS platforms – IndigoVision and Geutebruck included. Thanks for connecting. We'll reach out this week to discuss your project further.

CM
Corey McCormick
Mar 10, 2013

Carl,

Nearly all of them can (at least the RAID controllers know how). What was LSI/Engenio (now NetApp, which builds systems for several OEMs: Dell, IBM, etc.) is the easiest one I know of. It is just a command-line sequence to ask it why the rebuild stopped, and if it is a read failure on a block on drive XX, you just mark it bad and continue. EMC, IBM, HP, Dell, etc. can all do it... You just have to ask the tech support team leads who know this stuff.

It is only the cheapest RAID, with no tech support and no command-line API or diagnostics, that suffers from the whole-volume failure issue due to bad blocks... If the big guys' tech support says no, then get a different escalation engineer on the phone. If someone only spent $500 on the storage controllers, though, expect to be disappointed.

Having said that, don't plan on this as your normal M.O., but if you have done all you can and the unfortunate happens, one reason you didn't put the cheapest storage system you could in place is exactly for this scenario.

Jeffrey Hinckley
Mar 21, 2013

This has truly been an informative discussion (the main reason I subscribe to IPVM). I think it started with "do you use 4 TB drives".

Just today I had a RAID5 cause a major system problem. A drive with bad read sectors caused the RAID card (LSI) to throw major resources at it, cutting down I/O operations. The back buffer for the video then blew through system memory, leading to major problems. I wish the system had just pushed the drive offline.

I really like your post and input Carl (I remember that previous post) and will walk away with that thought (RAID6) for future systems.

Thanks.

JH
John Honovich
Mar 23, 2013
IPVM

KV, that is very interesting. I believe Veracity told us that the delay was only up to 20 seconds in our original ColdStore review.

JH
John Honovich
Mar 23, 2013
IPVM

Harold, why do you say "I suppose that after a month has passed after an incident this delay might be acceptable, considering the savings in disk cost/power consumption,etc."

It's not after a month, right, unless you are using some other storage device for the first month? If you are only using Coldstore, it's after the first hour or day, when that drive fills up and recording moves to the next one, no?

HB
Harold Baumgarten
Mar 23, 2013

Hi John

I'm only referring to KV's mention that he pulled a month-old file, but of course you are right in stating that if he had pulled even a more recent file from any "off" drive he would have incurred the 48-second delay, no argument there.

Returning to a critical facility, where "active monitoring has to take place", the time to fill up a 4TB disk before moving to the next one will depend on the number of cameras, the active recording hours and their individual frame rates, among other factors. That could range from less than a day to several days; let's say one camera recording 24/7 at 15fps takes up around 10 GB per day (from one of our own setups).

In that case, 100 cameras would fill that disk in around 4 days, time enough to review an important incident at the fast 5-second playback seek time; but if the incident was missed, well... back to waiting 48 seconds.

BH
Bohan Huang
Sep 02, 2013

If you use a VMS that supports per camera storage directories like Milestone you can set things up so that each Coldstore unit can store a month on the two hot HDDs.

JH
John Honovich
Mar 23, 2013
IPVM

Harold, that's what makes me nervous about this setup. It's not just a single camera but dozens (or hundreds) going to that Coldstore appliance, so a single hard drive is going to easily fill up in less than a day. This means that any time I need to check video from yesterday or last Wednesday, etc., I am going to have to wait a minute? Yikes.

HB
Harold Baumgarten
Mar 23, 2013

John, you can leave the idle disks powered up and not have to wait. Sure, you lose some of the power savings that the normal Coldstore mode provides, but then again you can still save money on the disks, as they do not have to be the costly enterprise-class type. Yes, they would be powered up and spinning but not writing, so the recording motor/arm/head is not working as hard (if at all), plus you are avoiding the power up/down cycle.

Add to that the redundant (dual disk) recording mode, and with 4TB disks you would have a 28TB storage solution with fast playback and none of the degraded-mode issues/risks during rebuilds that you get with RAID 5 or 6.

As far as camera numbers, I would never put all my eggs in one basket, so I'd split the total between several archivers (depending on system size), each with its own storage unit(s), and implement failover capabilities for the short duration required until a service tech fixes any troubles.

JH
John Honovich
Mar 23, 2013
IPVM

Harold, when you keep the drives powered up and spinning (but not writing), how does this impact the probability of a drive dying? I thought one of the key benefits of Coldstore was that keeping the drives 'cold' radically reduced the probability of failure, allowing you not to implement any storage redundancy.

HB
Harold Baumgarten
Mar 23, 2013

John

I don't agree with "keeping the drives 'cold' radically reduced the probability of failure, allowing you not to implement any storage redundancy".

Frequent spin-up/spin-down cycles add wear and tear to the drive through the heating and cooling process, and after all, HDDs are mechanical.

The most stressful points of an HDD's life are those spin-up and spin-down cycles, but you must also take into account that the electronics can fail too; whether repeatedly turning an HDD (or any electronic/electrical device) off and on shortens its life has been a flamewar forever.

I prefer to leave things on so I know they are working and ready, plus I like to have redundancy. Otherwise, how can I be sure it will even start up when needed?

Just my thoughts. For more information on disk drives and reliability, Google did a great study a while back.

JH
John Honovich
Mar 23, 2013
IPVM

Harold, if you are using Coldstore and you don't think keeping the drives 'cold' reduces/eliminates the need for redundancy, then what are you doing? Running it in mirror pair mode?

HB
Harold Baumgarten
Mar 23, 2013

That's what I have been writing about in my previous posts, although I wasn't calling it mirror pair mode but rather redundant (dual disk) recording mode.

Scott Sereboff
Sep 01, 2013
ACK Data, Inc.

Hello all, excellent discussion.

So as regards John's comments, and Harold's, and Carl's (!!!), the amount of time it takes to fill a single 4TB hard drive will depend on the throughput of the cameras. We all agree on that. Now, COLDSTORE has a throughput limitation of 320 Mb/s, but that is a total comprising both writing and reading. When building a system to work with Genetec (as with the system we have at AT&T Stadium here in Dallas), we usually match the Genetec server specifications, which means you have about 200 or so Mb/s of traffic being written to any individual COLDSTORE.

To a great degree the math is on our side here. For example, 50 cameras at 3.00 Mb/s running 24 hours per day, 7 days per week generate 150 Mb/s and will fill up 53.5TB in 30 days. That works out to 1.78TB per day, or one pair of 4TB drives every 2.24 days. In that case, on Monday, Tuesday and the first quarter of Wednesday the data being written sits on a spinning drive, and from that point on you would be pulling that data from a drive that has powered off. If the 8 to 20 seconds it can take to power that drive back on, coupled with the seek time of the VMS, is too long for you, then we would agree that COLDSTORE is not the right system to fit your needs.
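
For anyone who wants to rerun that arithmetic with their own camera counts and bit rates, here is a minimal sketch (decimal units throughout; the raw stream math comes out a little under the figures above, so treat all of these as rough estimates):

```python
def coldstore_fill(cameras, mbps_per_camera, drive_tb=4.0):
    """Days to fill one drive, given an aggregate camera write load."""
    total_mbps = cameras * mbps_per_camera
    tb_per_day = total_mbps / 8 * 86_400 / 1e6   # Mb/s -> MB/s -> TB/day
    days_per_drive = drive_tb / tb_per_day       # the overlapping mirror steps forward one drive at a time
    return total_mbps, tb_per_day, days_per_drive

# 50 cameras at 3 Mb/s, 24/7 -> roughly 150 Mb/s, ~1.6 TB/day, ~2.5 days per 4TB drive
print(coldstore_fill(50, 3.0))
```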

However, if the benefits we do bring to the table (10% of the power of the "average" RAID system, the ability to use any drive you wish, greatly reduced initial and operating costs, etc.) outweigh the above, then we are the right system to use.

The comment about drives being "on" then "off" over and over is not quite right either. The average COLDSTORE drive is turned off about 88% of the time and you can see that via the math above. Over the 30 days above, drives 1 and 2 are on for 2.2 days then off for 27.8. So, drives 1 and 2 are turned on at the beginning, run for 2.2 days, then off for 27.8, then on again, etc. 12 times per year- which of course does not account for the read cycles. Of course, for the average user, data is retrieved less than 5% of the time…so where is the “on and off” to which Harold refers? For the non-average user I could see more duty cycles, but for the average user that 5% figure holds true and it means that any given drive in a COLDSTORE has a far, far greater chance of being used for writing and then turned off until the next write cycle than constantly used for read and write. The most stressful part of a disk’s life is NOT being turned on and off; it’s constantly reading and writing that creates the wear and tear, fragmentation, etc. that kills the drive and requires folks to use RAID to protect data against disk failure. Do we not often turn off and on devices with hard drives such as our computers, our cable TV boxes, etc. with far more frequency?

If an end user did find themselves in a situation where constant read was a requirement, and they were concerned about the on off issue, then COLDSTORE allows you to pull the drive and read all of the data independently of the COLDSTORE. This is accomplished via DISKPLAY. So, you can take the original drive data, copy it, and then use the copy while preserving the original.

John, you are both right and wrong on data redundancy. We provide full redundancy during the write cycle with the mirrored write to overlapping disks. It is true that only one copy of the data is left behind once you step from Disks 1 and 2 to Disks 2 and 3; however, COLDSTORE cannot suffer a "RAID failure" that costs all of the data across all drives, and because it is likely that data on any given drive is never going to be retrieved, for most users of COLDSTORE that one copy is more than sufficient. In the event that an end user wants more than one copy of drive data, it is more cost-effective to use two COLDSTOREs, each writing the same data, than it is to use any flavor of RAID system… for most! Carl and some folks are perfectly capable of designing a RAID system that has the same general per-TB cost as COLDSTORE (although the ease-of-use and power-savings benefits would be hard if not impossible to match), but Carl and folks like Carl are not the average. We look to provide storage to customers who have long (30+ day) archive times and a large camera count (large being somewhat subjective).

I’ll close with this- COLDSTORE is certainly a radical departure from the “storage normal” and not for everyone. We do have limitations- we do not do general data, and we have integrations to a limited number of VMS partners (fortunately we cover some biggies with Genetec and with Milestone via Arcus and it is important to note that COLDSTORE was certified by Genetec for use with Security Center) and for some, the comfort of RAID is just not something we can overcome! We do have a whole lot of very satisfied customers around North America and the world, and for them COLDSTORE is the perfect solution for their needs.

HB
Harold Baumgarten
Sep 01, 2013

Hi Scott, I was referring specifically to a very critical and particular deployment for a financial institution, where turning the HDDs on and off might affect the durability of non-enterprise-class disks, so I suggested that they use Mirror Pair Mode and keep the disks on.

This customer is not the "average user" and has several departments running continuous audits on the recorded material, spanning at least 30 days back, so I'm sure you will understand my recommendation to their IT department for this Coldstore configuration.

Turns out we are still participating in the bid thanks to that, whereas RAID bids have taken a step back.

Scott Sereboff
Sep 01, 2013
ACK Data, Inc.

I did leave this out! Yes Harold- you are right and I owe you a big thank you for continuing to carry the COLDSTORE banner in this particular situation.

Leaving the HDDs on within a COLDSTORE does keep them spinning, which would increase the overall vibration across all the disks. However, I doubt it would be significant, in that 13 of the 15 drives are not writing; they are just on in the event that data is needed from any one of them. We did create the "global search" mode just for this, so that the end user can issue one command to COLDSTORE that causes all the drives to spin up and be available during a search with no delays.

You are also correct that the use of the Full Mirrored Pair mode leaves two copies of the data but also uses 50% of the available capacity. I tend to recommend that people use two COLDSTOREs instead of two drives on one COLDSTORE; bandwidth issues aside, it seems to make more sense to have the two COLDSTOREs in different physical locations (safer redundancy) than to have two drives being used on one COLDSTORE.

AM
Alastair McLeod
Sep 02, 2013

Could I just add a couple of technical points to this useful discussion?

1) Even with all the drives on in a COLDSTORE, the vibration from writing disks is kept to an absolute minimum because of the Sequential Filing System used. This almost eliminates vibration, whether the drives are reading or writing.

2) The COLDSTORE API does include a feature whereby a command can be issued from the VMS to spin up all the drives. This is to circumvent the wait time if you know you are going to do a wide-ranging search or archive. Thus you would issue the command, and after the first period of delay (20-30 seconds) all the drives would be spinning and all searches would be immediate (any remaining latency is simply due to the VMS, and is normally tiny).

I believe we are going to see 5TB and 6TB drives available next year. COLDSTORE will be able to use these immediately and without issue. COLDSTORE never has to do rebuilds, so as far as it is concerned, the bigger the drives the better.

Scott Sereboff
Sep 02, 2013
ACK Data, Inc.

Bohan, without going into a ton of detail, COLDSTORE does not support per-camera storage directories. COLDSTORE writes everything sequentially to the hard drive, and I mean physically sequential data on the hard drive. Because of the way we write data (SFS, or sequential filing system) and the way the data from individual cameras is interleaved together, we cannot have one camera stored for 10 days, one stored for 14 days, etc., unless you store the cameras to different physical COLDSTOREs. Given the price point of COLDSTORE, this is not a bad option, and in fact we do have end users who store some cameras to one COLDSTORE with 30 days of retention and other cameras to a COLDSTORE with 90 days of retention.

In addition, COLDSTORE is really a device that eliminates the need for the use of motion detection. I would be interested to hear the thoughts of those people involved in this discussion, but it seems to me that the use of motion detection is designed to "stretch out" the available storage capacity to achieve longer retention times. Due to the expense of most storage systems, end users purchase a volume of storage that they can afford- say, 30TB- and then try to make it last as long as they can by the use of motion detection (and other tricks but let's stay on motion for a moment). There are problems with motion- sometimes it records the slightest bit of motion, sometimes it can miss things that the user needs to have recorded- but the real problem is that you cannot prove something did not happen without evidence.

Take a classic slip and fall, for example. If I come to court and claim that I slipped and fell on Tuesday July 8th, 2013 in your store due to your negligence, and you cannot prove that I was not there, did not fall, etc. then you have a very good chance of losing the case and paying the claim. On the other hand, if you can pull video and prove that I was not even in the store...no claim.

While I completely accept the fact that few if any end users will store video in lockstep with statutes of limitations, the point is still the same: you cannot prove something did not happen without evidence that shows it did not happen. Our end users who use COLDSTORE can achieve the retention times they need without requiring record-on-motion. What they do use, and take full advantage of, is the ability of the modern VMS to record everything at a low FPS and then increase that FPS on motion to whatever they consider of value, e.g. 15 FPS or the like. COLDSTORE is perfectly happy with this (after all, the time frame of the recording stays the same; it's just the size that diminishes), and the end user knows that they have 100% of the video for the retention time they have chosen.

Carl Lindgren
Sep 02, 2013

Scott,

It sounds like COLDSTORE would not support pre- and post-motion recording either, which is how we (and others) get around the potential of missing things leading up to and following events.

Scott Sereboff
Sep 02, 2013
ACK Data, Inc.

Carl-

Actually, COLDSTORE does support pre- and post-motion recording.

Carl Lindgren
Sep 02, 2013

So COLDSTORE can record a circular buffer?

Scott Sereboff
Sep 02, 2013
ACK Data, Inc.

Carl-

As I understand them, circular buffers are handled by the VMS and are overwritten within the VMS, which means they are handled outside of COLDSTORE's purview. Correct me if I am wrong, but doesn't the video come into system RAM and stay buffered there until it is needed or overwritten? COLDSTORE records what it is handed by the VMS, so I do not quite see where the circular buffer issue would have any effect on COLDSTORE.

Carl Lindgren
Sep 02, 2013

Scott,

Some don't. Our current system (Honeywell Enterprise) uses Ringbuffer recording, where everything is recorded on circular buffers that can be set to record any stream from 1 second to years. Each stream can also record to multiple circular buffers simultaneously. I believe Pelco Endura, Geutebruck and a few others use similar systems.

Scott Sereboff
Sep 02, 2013
ACK Data, Inc.

Carl-

I think this is getting down to "splitting hairs" territory! Strictly speaking, you are describing a system that allows file retention times to be set per camera, which COLDSTORE cannot do and no one really should have to do. That is not pre- and post-alarm recording, really... In any case, video surveillance systems should really record all the cameras, all the time, at the best quality and highest frame rate that can be afforded. Anything less than that is a compromise; OK, fine, people have to make that compromise because of what they think they know about storage systems. COLDSTORE changes that value proposition. The whole idea behind COLDSTORE is to make high-capacity storage systems that are reliable, affordable and easy to install and maintain.

As I have said, Carl, you are a part of the "storage 1%" and have a ton of knowledge and experience as to RAID and storage systems in general. You can delve way down into this, but again, COLDSTORE is designed to fit what we feel is a critical need for the average user- long retention times delivered via an inexpensive and easy to manage system.

So I think we have deviated somewhat from the 4TB disk discussion and I know how John disapproves of topics moving too far from the original! I'll stop now (Carl you can message me privately) and thanks to all.
