Hitachi AMS2100 is scalable up to 235TB.
Ashley, COLDSTORE (my company's product) does not work with all VMSes on the market. We have a tight integration with Genetec; it can be used with Milestone Corporate via COLDSTORE Arcus and Interconnect (or via COLDSTORE Arcus alone), with Video Insight, and with several VMSes from the UK and one from Taiwan.
Incidentally, you can buy COLDSTORE from Anixter in Australia.
Hope that helps!
Ashley, Coldstore requires custom integration with VMSes. Check if your preferred one is supported. Their product page does not appear to have updated information on VMS integration.
Thanks for that. However, one of the distributors on that list is the one who told me, "nah, can't get it"...
Does COLDSTORE work with all VMSes?
Actually, you can purchase COLDSTORE units in Australia through Anixter or Sielox. The Veracity website also lists distributors for Australia at the following link: Veracity - How to Buy - Australia and New Zealand
There are also distributors in America that offer worldwide shipping:
Veracity - How to Buy - Worldwide
If you are interested, I would encourage you to reach out to Veracity for more information, and I am sure you would not have to pay double what we pay here.
Hope this helps.
Hi KV Swami,
Unfortunately I'm in Australia, and we can't get the COLDSTORE here, but if we could, it would probably cost double what you pay.
Coldstore is being used by major NFL franchises, including the Dallas Cowboys, with well over 50 units installed (that's 3+ petabytes). Coldstore was also the storage choice for a multi-petabyte correctional facility installation in the Western US, and has achieved a much higher level of commercial acceptance and recognition than you might realize. There are other large installs as well.
I'm interested to see if an unRAID solution may be of use for this. Has anyone had any experience?
We have been using the Veracity Coldstore Solution with Genetec VMS for over 2 years now at four of our facilities. It has worked out great for us. There is a post on the Veracity COLDSTORE Solution at the following link: Veracity Coldstore Overview
On top of the undisturbed performance, the Veracity support team is very proactive in helping to get you set up and running. Depending on which drives you choose (2TB, 3TB or 4TB), you can get up to 60TB of storage in one COLDSTORE unit. They are extremely energy efficient: power consumption with a single PSU is 40 to 62 watts, which is almost 90% less than that of a RAID system if I am not mistaken. With a dual PSU, power consumption is about 66 watts. We did a simple study to see how much we would save on electricity annually when we switched from an HP MSA60 RAID solution to the COLDSTORE. Here is a snapshot of the calculations:
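Roughly, that comparison can be sketched like this. The wattages come from the figures above (62 W worst-case single-PSU COLDSTORE versus a RAID chassis at roughly ten times that); the $0.12/kWh electricity rate is an assumed figure, so substitute your local tariff:

```python
# Back-of-envelope annual electricity cost, using the wattages quoted above.
# The $/kWh rate is an assumption for illustration, not a quoted figure.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.12  # assumed electricity rate

def annual_cost(watts, rate=RATE_USD_PER_KWH):
    """Annual electricity cost in USD for a device drawing `watts` 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * rate

coldstore = annual_cost(62)   # single-PSU COLDSTORE, worst case per the post
raid = annual_cost(620)       # RAID chassis at roughly 10x, per the "90% less" claim
print(round(coldstore, 2), round(raid, 2), round(raid - coldstore, 2))
```

Even at a modest rate, the gap adds up quickly across multiple chassis running year-round.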
We've had tremendous experience with Pivot3. The system's multiple levels of redundancy (HD arrays, iSCSI, virtualized server and network) are key selling points when we're working on critical high-security applications. The system is virtually bottleneck-free if you design your arrays correctly; Pivot3 will help with this. Because of the virtualization, rack space and power usage are minimized and the systems are truly optimized for surveillance video. We've worked on projects where Pivot3 was brought in to replace systems from one of the top NAS/SAN manufacturers (think Albert Einstein) after months of unsuccessful attempts to get the system running error-free. After an install/setup time a fraction of that required for the original solution, the Pivot3 system was up and running with zero failures. For large projects we won't use anything else.
Just a very small and narrow addition to Carl's expert advice. RAID controllers will run hotter for continuous video recording than for IT database work. In our IT systems integration work we have used Adaptec and Areca. When we first deployed our new IP camera setup around our facility, I used a 16-bay storage chassis we had been using for data analytics development and filled it with ten 4TB Western Digital Black drives in RAID 6.
As a test I ran 24 Hikvision 3-megapixel cameras at 20 FPS, highest quality, with Milestone software set to continuous record. The controller overheated in short order and I needed to add additional cooling. Now, this was not production fully redundant hardware, but for mission-critical installations, if you are paranoid, you will probably want to monitor/log controller temperature. There are not-infrequent comments on enterprise IT message boards about RAID controller overheating.
If you are of the mindset of build-it-support-it, you have a million options. When we do this, we've been 100% 3ware and LSI on controllers for as long as I can remember, circa 1986. We've always used top-tier enterprise drives, and we've always used chassis/mainbox OEM servers from Intel.
However, if you want to have all of these things integrated, tested, and supported, I can't offer a higher recommendation than talking to Seneca Data. Jerry in their Atlanta office is a great resource and will earn your business.
Their OEM/white box storage line is XVault, and they have some of the largest cluster computer systems in the country under their belt.
480TB is deliverable with an out-of-the-box install, and they can adapt to your preferences and create a custom SKU for your applications going forward.
We are installing Pivot3 vSTACK Watch at all of the malls in the UAE and they are pretty good; the oldest setup is 15 months old with not a single failure.
We have installed Pivot3 in one of the stadium projects in the UAE, which amounts to 684 TB using vSTACK Watch 36TB appliances.
Rasilient is also a preferred storage option here.
Carl, that is true. Most use similar guts; as long as it's quality guts and not some no-name or low-end stuff, you're right. Then it becomes a matter of how well the software interface is written so that it's easy to use and understand, and what kind of support and options you get through the particular vendor you are dealing with.
We actually had a couple of AC&NC Jetstor RAIDs in the first iteration of our system from 2004-2006. We liked them: much faster rebuild times compared to the Arena Maxtronic RAIDs that made up the majority of our storage. At the time, they used Areca controllers. But that was a long time ago and the AC&NC RAIDs we used were SCSI/PATA.
There are/were a number of companies who sell/sold the same basic product under their own brand, including Partners Data Systems in La Mesa, CA. Their SurfRAID product was exactly the same as AC&NC's JetStor product, although it appears Partners has changed vendors (one of their newest products sure looks like an Infortrend 24-bay RAID).
There are a number of companies selling their own branded storage. In actuality, the primary key to a good product is the guts, including the controller. The rest is just a cabinet. There aren't that many controller manufacturers in the world so you'll see a lot of similarities when you compare one "manufacturer's" product to another. Among the more well known controller manufacturers are Arena Maxtronic, Areca, Infortrend, LSI Logic, Adaptec and Promise Technology.
RAID controllers are basically just computers - typically Intel-based, with different "tweaks" in their firmware to differentiate them from the competition.
Like Carl, I have used the Dell PowerVault with great success, though we used iSCSI instead of fiber channel. We actually virtualized a system using the Dell PowerVault and Dell servers, and it was working great last I heard.
We've been using JetStor and they seem pretty reliable, and support has been pretty good. They understand video surveillance, unlike many sales reps from IT brand names like HP, Dell or IBM. Not that those products can't be used and optimized for video surveillance; Carl is just lucky enough to know what he needs on a very in-depth technical level, so he's not so reliant on sales reps who may have limited knowledge of surveillance needs. Sales reps who don't know better try to sell you on things like data de-duplication and replication, which don't apply and don't work for surveillance recording needs.
I wouldn't recommend running a single RAID6 RAID group on a 16-bay chassis, and definitely not on a 24-bay chassis, using even 3TB drives, let alone 4TB drives. Even with one hot spare per chassis, that would be 13 to 21 data drives. Although that is less than 10^15 bits, there is still increased danger of UREs causing data loss during a rebuild, and rebuild times themselves would be horrendous. I've seen 10+2 RAID groups of 2TB drives take over 48 hours to rebuild. Doing the math, 21+2 using 4TB drives could take up to four times that, or well over a week. That dramatically increases the danger of data loss.
As far as bandwidth, that is difficult to tell and a question to ask of each manufacturer. Modern SAS drives are faster than SATA, and SSDs can be faster still, but drives are only one aspect of total throughput. Data transport speed also depends on the source (server/HBA), pipeline speed (iSCSI, fiber, SAS, InfiniBand, etc.), RAID level and controller overhead. Always make certain the manufacturer allows for worst-case scenarios - for instance, camera bandwidth spikes, multiple users reading data from the same system, and system rebuilds and controller housekeeping occurring simultaneously - and allow sufficient buffer to keep from overloading the system.
Edit: Also bear in mind that the larger the RAID group, the more data will be lost from simultaneous failures. If an 8+2 group fails with 2TB drives, you would lose ~16TB of data. If a 21+2 group fails with 4TB drives, you would lose 84TB of data. The smaller the RAID groups, the less likely a failure will cause data loss in the first place, and the less you would lose in a catastrophic failure.
I've seen weird controller failures where all data in a RAID group was lost, even though no drives failed. During one failure on an Infortrend system, the controller successively failed every drive in one RAID group, then failed completely. After replacing the defective controller, we were still unable to recover the lost data because the controller had marked the drives bad in the on-disk copy of NVRAM.
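To put rough numbers on the group-size tradeoff above, here is a quick sketch. The drive sizes and the 48-hour rebuild baseline for 2TB drives come from the observations in this thread; the linear capacity scaling is a simplifying assumption (real rebuilds also slow down with group width, load and drive health):

```python
# Rough sketch of how RAID group size affects data at risk and rebuild time.
# Figures are from the discussion above; the scaling model is an assumption.

def data_lost_tb(data_drives, drive_tb):
    """Usable data lost if an entire RAID group fails."""
    return data_drives * drive_tb

def est_rebuild_hours(drive_tb, baseline_hours=48, baseline_tb=2):
    """Naive estimate: rebuild time scales with drive capacity.
    The 48h baseline for 2TB drives is the observed figure above."""
    return baseline_hours * drive_tb / baseline_tb

print(data_lost_tb(8, 2))    # 8+2 group of 2TB drives -> 16 TB at risk
print(data_lost_tb(21, 4))   # 21+2 group of 4TB drives -> 84 TB at risk
print(est_rebuild_hours(4))  # 4TB drives -> ~96 h minimum, before extra load
```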
Hey, what about using 4TB enterprise SATA or SAS drives in 16- or 24-bay devices? Is it recommended if we are making a single RAID 6 array?
How much recording bandwidth is the maximum recommended for such a single storage unit, considering it's recording in real time, deleting old video as it fills up, and also handling occasional retrieval of old video by clients?
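The sizing advice in this thread boils down to budgeting worst-case throughput: steady writes plus playback reads plus rebuild load, with margin on top. Here is an illustrative sketch; every parameter below is an assumption to replace with your own figures, not a vendor specification:

```python
# Hypothetical throughput budget for a single storage unit.
# All parameters are illustrative assumptions, not vendor figures.

def required_throughput_mbps(cameras, mbps_per_camera, spike_factor=1.5,
                             playback_streams=4, rebuild_overhead=0.3,
                             margin=0.2):
    """Worst-case sustained throughput (Mbit/s) the array must handle."""
    write = cameras * mbps_per_camera * spike_factor  # writes incl. bitrate spikes
    read = playback_streams * mbps_per_camera         # concurrent client playback
    base = (write + read) * (1 + rebuild_overhead)    # degraded-mode rebuild load
    return base * (1 + margin)                        # safety margin on top

# e.g. 100 cameras averaging 4 Mbit/s each
print(round(required_throughput_mbps(100, 4)))
```

The point is less the exact numbers than the shape of the calculation: size for the worst case, not the average.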
Wow Carl!! You are truly an exceptional resource, your information is excellent!
We too have had good luck with the Dell PowerVault, as well as with the Nexsan product line. As with Carl, be sure to evaluate the level of protection you are looking to achieve with redundant controllers and HBAs, etc. Ask the difficult questions when evaluating potential vendors. Always ask, "What is your product's single point of failure?" There are always risks/trade-offs to consider. Knowing what they are, and what you are willing to pay to mitigate the risks, is so very important when evaluating storage.
We are currently using Dell PowerVault MD3260 series 60-bay chassis populated with 3TB SATA drives. The arrays are each DAS to multiple servers via fiber channel with redundant controllers and HBAs. Each system provides 180TB of raw storage, which is divided up into four 10+2 and one 8+2 RAID6 slices with two global hot spares per chassis.
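As a sanity check on that layout, the bay and capacity arithmetic works out as follows (all figures are the ones quoted above):

```python
# Sanity check on the 60-bay PowerVault layout described above:
# four 10+2 groups, one 8+2 group, two global hot spares, 3TB drives.

groups = [(10, 2)] * 4 + [(8, 2)]  # (data, parity) per RAID6 group
hot_spares = 2
drive_tb = 3

drives_used = sum(d + p for d, p in groups) + hot_spares
raw_tb = 60 * drive_tb                          # all 60 bays populated
usable_tb = sum(d for d, _ in groups) * drive_tb

print(drives_used)  # 60 -> fills the chassis exactly
print(raw_tb)       # 180 TB raw
print(usable_tb)    # 144 TB usable, before filesystem overhead
```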
We've also used Huawei OceanSpace 24-drive fiber/SATA systems (one each 12-bay RAID and 12-bay JBOD). Populated with 3TB drives, that would provide 72TB raw in 4RU. Finally, we previously used Infortrend 24-bay fiber/SATA RAIDs. Although we had them populated with "puny" 500GB drives, and controllers and HBAs were not redundant (purchased in 2006), the newer versions would also provide 72TB raw in a single 4RU chassis.
There are a number of storage manufacturers with the capability to provide high-quality, high-capacity video storage. We also looked at DDN (DataDirect Networks), Nexsan, EMC and others. The key is to determine the level of redundancy you require and make certain the manufacturer understands video surveillance recording. Lots of manufacturers have experience with video storage for media delivery (i.e., broadcast, editing, internet media, etc.), but those applications are write-occasionally/read-often, versus the read-occasionally/write-often pattern of video surveillance. That affects system design and setup. Improper design and/or setup can cause data overload (dropped frames or slow responsiveness) and even cause systems to kick out perfectly good drives.
There are some new storage technologies coming out that are supposed to handle disk and even chassis failures better than current RAID systems. One of the biggest issues with traditional RAID is that as hard drives get larger, disk failures have a greater effect and a greater likelihood of causing data loss. That's because disk capacities are fast approaching the URE (unrecoverable read error) rating interval. 1TB is nearly 10 to the 13th power (10^13) bits. Since drive UREs are typically rated at 1 in 10^15 bits, and some pundits argue that figure is inflated, the likelihood of encountering simultaneous sector read errors in a RAID group increases.
Typical recommendations for RAID6 range from 8+2 to 10+2 (8-10 data drives plus 2 parity drives) per RAID6 group. However, with 3TB drives, that makes a RAID group 2x10^14 to 2.6x10^14 bits - nearly a typical enterprise drive URE rating. That is why I never recommend RAID5 for critical large-scale video recording. The larger the drives, the more likely a rebuild due to one drive failing will encounter a URE when trying to reconstruct parity. RAID6 can handle two simultaneous drive problems, but RAID5 cannot.
The other issue with ever-larger hard disks in traditional RAID is rebuild times. It's not uncommon to see disk rebuilds take 2-3 days with 3TB drives. During that time, I/O is reduced due to the additional data load of the rebuild and the danger of encountering multiple failures is increased because the system is running in "degraded" mode. A system should be designed to handle all of the data throughput it's likely to encounter (total write plus read) while simultaneously rebuilding a failed drive and still have a bit of extra I/O margin.
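The URE math above can be made concrete. Assuming the typical 1-in-10^15-bits enterprise rating and that a rebuild must read every surviving data drive in the group, the chance of hitting at least one URE works out roughly like this:

```python
import math

def p_ure_during_rebuild(data_drives, drive_tb, ure_rate=1e-15):
    """Probability of at least one unrecoverable read error while reading
    every surviving data drive to rebuild a failed one.
    ure_rate is the typical enterprise rating of 1 error per 10^15 bits."""
    bits_read = data_drives * drive_tb * 1e12 * 8  # TB -> bits
    # 1 - (1 - p)^n, computed via log1p/exp for numerical stability at tiny p
    return 1 - math.exp(bits_read * math.log1p(-ure_rate))

# A 10+2 RAID6 group of 3TB drives: roughly a 1-in-5 chance per rebuild
print(round(p_ure_during_rebuild(10, 3), 3))  # ~0.213
```

This is the simplest independent-error model, so treat it as an order-of-magnitude illustration; it still shows why large groups of large drives make single-parity RAID5 risky.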