Subscriber Discussion

What Is The Maximum Number Of Drives You Would Use In A RAID 5 With Regards To Rebuild Times?

UI
Undisclosed Integrator #1
Dec 09, 2015

I have had some experience with 4TB drives taking 4-5 days to rebuild when used in a RAID 5. What is the maximum number of drives and size of drive you would use in a RAID 5 setup?

Would a RAID 5 system with 28TB of storage using 4TB enterprise-class drives be acceptable? Or would you stick to 3TB drives? I know there are 8TB+ drives available, but they are surely not realistic for use in RAID 5 environments.
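
[Note: For scale, a minimal sketch of what the array in question looks like, assuming 28TB usable built from 4TB drives (7 data + 1 parity); a rebuild has to read every surviving drive in full while writing the replacement.]

```python
# Rough sizing for the array in question (illustrative assumptions, not measurements).
DRIVE_TB = 4
USABLE_TB = 28

data_drives = USABLE_TB // DRIVE_TB                      # 7 drives hold data
total_drives = data_drives + 1                           # +1 parity drive -> 8 drives
read_during_rebuild_tb = DRIVE_TB * (total_drives - 1)   # every surviving drive is read in full

print(f"{total_drives} drives total; a rebuild reads ~{read_during_rebuild_tb}TB "
      f"and writes {DRIVE_TB}TB to the replacement drive")
```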

[Note: Related - Video Surveillance Storage Redundancy Statistics]

JN
Jose Noy
Dec 09, 2015

We use 4TB drives all the time in RAID 5. A 28TB array is fine, IMO.

UM
Undisclosed Manufacturer #2
Dec 09, 2015

What controller card are you using?

UM
Undisclosed Manufacturer #3
Dec 09, 2015

Rebuild times can vary a lot depending on the RAID controller you are using. 4-5 days is a very long time for a small/medium RAID array (under 10 drives); I would look at changing RAID cards. 4TB drives are fine for RAID arrays, but I would definitely stick only with enterprise-class drives when using more than 5 drives in an array (such as the WD RE4, which has been excellent for us over the last 5 years).

(2)
U
Undisclosed #4
Dec 09, 2015
IPVMU Certified

What is the maximum number of drives and size of drive you would use in a RAID 5 setup?

What's the reason for the RAID?

Performance, availability, or integrity?

LB
Lee Brown
Dec 10, 2015

While I agree with the previous posts, I have a hard time construing any surveillance storage array that takes more than 24 hours to rebuild as possessing the virtues of performance, availability, or integrity.

Redundancy (the R in RAID) is often cited as the primary reason for RAID. Availability, though, is the primary expectation one has when making storage redundant.

Long rebuild times are not deeply concerning if enough fail-over storage has been integrated at the VMS and/or platform level. Losing data during a rebuild is a bit more troubling.

IMO the proliferation of 12MP cameras, 8TB disks, 10Gb SANs and beyond will make things difficult for integrators not experienced in working with distributed file systems, copy-on-write file systems, or some of the newly emerging alternatives to traditional RAID storage.

AS
Ashley Schofield
Dec 10, 2015

After losing a 21TB RAID 5 array during a rebuild, I wouldn't use RAID 5 again for anything over 10TB.

RAID 6 or RAID 10 for anything big now...

(4)
MI
Matt Ion
Dec 12, 2015

+1 to this. When we started using RAID storage for a customer, we used RAID 5 largely because of cost. After the second time an entire array was lost because one drive glitched during a rebuild, we went to RAID 6 for the smaller four-bay systems, and RAID 6 plus a spare for anything larger. With sites at varying distances, we found it wise to include a hot spare drive as well: if a drive fails, the system automatically rebuilds the array onto that spare, and the failed drive can be replaced when it's more convenient.

(1)
(1)
UI
Undisclosed Integrator #1
Dec 10, 2015

The RAID controller cards being used are the LSI 9271-4i/8i and the LSI 9260-4i/8i. The 9271 has 1GB of memory vs. the 9260's 512MB. What brands of RAID controller cards are members using?

UM
Undisclosed Manufacturer #3
Dec 10, 2015

We use Adaptec RAID controllers. They are expensive, but the quality and speed are excellent.

Rebuild time on a 40TB array (10 x 4TB) is about 20 hours in RAID 5, and a few hours more in RAID 6.
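
[Note: A quick back-of-the-envelope check of what that figure implies, assuming 4TB = 4 x 10^12 bytes; the rebuild rewrites one full drive regardless of array width.]

```python
# Implied per-drive rebuild rate for the figures quoted above (assumed values).
drive_bytes = 4e12        # one 4TB drive is rewritten during the rebuild
rebuild_hours = 20

rate_mb_s = drive_bytes / (rebuild_hours * 3600) / 1e6
print(f"~{rate_mb_s:.0f} MB/s sustained to the replacement drive")  # ~56 MB/s
```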

UM
Undisclosed Manufacturer #2
Dec 10, 2015

LSI makes great controllers!

We also have very good results with Areca, especially their cost-to-performance ratio.

(1)
EP
Eddie Perry
Dec 10, 2015

You really don't want to use RAID 5 if you can avoid it at all. RAID 5 is notorious for failed rebuilds as storage pools get larger.

As far as times go, about a week sounds right for a storage pool that size.

(1)
CW
Craig Wilson
Dec 10, 2015

I had a manufacturer tell me "Friends don't let friends use RAID 5..."

I personally will use RAID 5 if the array is small enough, ~12TB or less (4 drives).

RAID 6 or RAID 1+0 (10) is a far better approach. Global hot spares are a must if using RAID 5, and use a quality RAID controller, none of the software RAID crap... do it right so you can sleep well at night!

MG
Michael Goodwin
Dec 11, 2015

If you have the option and the power, RAID 6 is my preference. We're now moving off 4TB drives at $185 each and on to 8TB drives at $345 each.

Rebuild time does not matter much as long as your arrays have hot spares configured; if one of your drives is going to die hard enough, it will knock the whole array offline anyway.

We tend to go with arrays no larger than 12 drives; my preference is usually 8.

UI
Undisclosed Integrator #1
Dec 11, 2015

Has anyone actually done any testing and got data from it? I have typically always used RAID 5 setups with a hot spare. I think using a hot spare is common sense: you always want the array to rebuild straight away instead of waiting for a new drive to be shipped and then scheduling a site visit, etc.

Yes, RAID 6 gives you fault tolerance for one extra drive failure, but you also lose write performance, which has to be considered. RAID 10 is simply too expensive for a system with larger storage needs.
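
[Note: A minimal sketch of the small-random-write penalty being weighed here, using the textbook I/O counts per write (RAID 5: 4, RAID 6: 6, RAID 10: 2). The drive count and per-drive IOPS are illustrative assumptions, and surveillance recording is largely sequential, so controller caching blunts this in practice.]

```python
# Rough random-write IOPS per RAID level, assuming 8 drives at ~150 IOPS each
# (illustrative numbers, not measurements).
drives, iops_per_drive = 8, 150
raw_iops = drives * iops_per_drive

write_penalty = {"RAID 5": 4, "RAID 6": 6, "RAID 10": 2}
for level, penalty in write_penalty.items():
    print(f"{level}: ~{raw_iops // penalty} random write IOPS")
```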

(1)
MG
Michael Goodwin
Dec 12, 2015

Lots and lots of playing around with transfer speeds with Milestone and Synology NASes...

Controllers on reasonably new NAS devices tend to give very little write speed loss. I found, however, that as the number of drives grew, the overall write speed dropped as concurrent I/Os increased, so more than 8 or so drives (unless only a small number of cameras is being archived to it) is not really worth doing...

With regard to reliability, unless you are spending massive amounts (and I mean like 10x the $-per-TB price) you are going to lose the occasional array. As long as you have hot spares you're making the best effort, and as long as you compartmentalize the arrays so you are not losing too much footage, I consider it a fair trade-off.

I tend to split my arrays to 20 cameras per array. Over sites that have hundreds of cameras, losing the long-term footage for 20 cameras at the rate of roughly one array every three years is reasonably good. I also tend to split the archive footage for each area being covered over multiple storage arrays, so it would be hard to have no footage of a given area.

MG
Michael Goodwin
Dec 12, 2015

On the Synologys we find the rebuild times are sub-24 hours. It sounds like your controller or disks are slow - that, or is it a massive single array?

U
Undisclosed #5
Dec 12, 2015

"On the Synology's we find the rebuild times are sub 24 hours"

How many drives?

4TB or 8TB ?

Thanks

MG
Michael Goodwin
Dec 13, 2015

I'll be on one of the sites this week; I will do a test for you and get back to you with exact numbers.

MG
Michael Goodwin
Dec 14, 2015

Test completed.

It's an 8-drive RAID 5 made from 4TB Seagate Desktop drives (we use them everywhere) on a Synology RS3412xs NAS in the production environment. During the test the full system was running as normal; I just pulled a drive to force it to rebuild to a hot spare.

Entered degraded mode / started rebuild at 07:51:56

Completed rebuild at 23:24:47

So what's that, about 15 and a half hours?

I'll be onsite at one of our servers that's running a 12-drive RAID 6 of 8TB drives, and I can do the same test there if you like.

(3)
MI
Matt Ion
Dec 14, 2015

I'd be interested to see the time difference if you took the RAID offline so the rebuild was its ONLY task. Don't know how possible that is for a production setup...

MG
Michael Goodwin
Dec 14, 2015

If others want to comment, I'd happily be proven wrong. While this array is in production, the peak streaming rate for a 4TB drive is something like 180MB/s, with an average of 146MB/s, so averaging that over 4TB gives a perfect-world scenario of writing the whole disk in around 8 hours (seems low - anyone want to check my maths?).
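
[Note: Checking that math, assuming 4TB = 4 x 10^12 bytes and the 146MB/s sustained rate quoted above.]

```python
# Perfect-world time to write one full 4TB drive at its average sequential rate.
drive_bytes = 4e12
avg_write_mb_s = 146          # sustained figure quoted for this drive

hours = drive_bytes / (avg_write_mb_s * 1e6) / 3600
print(f"~{hours:.1f} hours")  # ~7.6 hours - so "around 8 hours" checks out
```

That is a best-case floor; the roughly 15.5-hour rebuild observed above under production load is about double it, which is consistent with recording traffic competing for the same spindles.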

These are the drives we use: http://www.storagereview.com/seagate_desktop_hdd15_review_st4000dm000

http://www.tweaktown.com/reviews/5556/seagate-desktop-hdd-15-st4000dm000-4000gb-hdd-review/index4.html

Anyway, we do have some of these drives in pure mirrored arrays, and even then it still takes something like 13 hours to rebuild (different, shittier NAS).

Here's why I don't bother buying the expensive drives anymore:

https://www.backblaze.com/blog/best-hard-drive/

AT
Andrew Thomas
Dec 12, 2015

RAID 6 keeps the array available immediately after a drive failure thanks to its second parity, so the rebuild can be treated as a low-priority task and the video system can maintain its heavy load; enabling a fast-rebuild option may impact the online processes. The old rule of thumb for best RAID 5 performance was a minimum of 5 disks, and beyond that N+1 for RAID 5, which would imply N+2 for RAID 6. However, that was pure SCSI, long before the drives we have today with internal virtual blocking and more, so the rule may not apply - but use RAID 6 at a minimum.

We use RAID 10 whenever possible, use at least 2 buses on the controller, always match up the RAID 1 pairs one-to-one between the two channels, and only use enterprise drives.

Historically, enterprise drives have roughly a 1 in 10^16 chance of an unrecoverable read error per bit read. Video and NAS drives are at 1 in 10^15, and desktop drives at 1 in 10^14. These numbers are the reason RAID 5 rebuilds can fail.
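
[Note: A minimal sketch of why those error rates matter for RAID 5, assuming an 8 x 4TB array whose rebuild must read all 7 surviving drives without a single unrecoverable read error (spec-sheet rates and a Poisson approximation; real drives and controllers can behave better or worse).]

```python
import math

# Probability of hitting at least one unrecoverable read error (URE) while a
# RAID 5 rebuild reads 7 surviving 4TB drives end to end.
bits_read = 7 * 4e12 * 8                      # ~2.2e14 bits

for label, ure_bits in [("desktop (1 per 10^14 bits)", 1e14),
                        ("NAS/video (1 per 10^15 bits)", 1e15),
                        ("enterprise (1 per 10^16 bits)", 1e16)]:
    p_fail = 1 - math.exp(-bits_read / ure_bits)
    print(f"{label}: ~{p_fail:.0%} chance of a URE during the rebuild")
```

With single parity, a URE on any surviving drive during the rebuild is the classic way an otherwise healthy RAID 5 array is lost; RAID 6's second parity is what absorbs that hit.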

(1)
MG
Michael Goodwin
Dec 13, 2015

We use RAID 10 whenever possible, use at least 2 buses on the controller, always match up the RAID 1 pairs one-to-one between the two channels, and only use enterprise drives.

Yeah, enterprise stuff is awesome, but how much does a 4TB enterprise SAS drive set you back? Last I looked, I could get 3 x 8TB desktop drives for the price of a single 4TB enterprise SAS drive.

How much is it costing per camera per day of storage? It must be massive.

I've spoken to people in the industry who have seen even the most amazing resiliency fall over. That, and the Google drive-reliability results, were really interesting.

Historically, enterprise drives have roughly a 1 in 10^16 chance of an unrecoverable read error per bit read. Video and NAS drives are at 1 in 10^15, and desktop drives at 1 in 10^14. These numbers are the reason RAID 5 rebuilds can fail.

I don't think you can get large drives (4TB+) with an error rate better than 1 in 10^15 at the moment?

I wonder how long it'll take for SSDs to get past the $-per-TB rate of rotational drives - that'll be good. I do wonder whether you get the same total data written per TB on an SSD vs. a rotational drive, though...

We played around with an SSD dump drive for a VMS recently and it got killed quite fast, as opposed to the Raptors, which just seem to keep on kicking forever.

MI
Matt Ion
Dec 12, 2015

Something to keep in mind (I don't think I saw it mentioned yet) is that rebuild will take a lot longer if you're still using the volume while it's rebuilding. If you have some form of failover storage, try forcing the system to use that, and you should find the rebuild happens a lot faster.

(1)
U
Undisclosed #4
Dec 12, 2015
IPVMU Certified

If you have some form of failover storage, try forcing the system to use that, and you should find the rebuild happens a lot faster.

I agree that the rebuild happens a lot faster when it runs alone. But what do you do after the rebuild completes to reintegrate the days' worth of new data from the failover device back into the main array?

MI
Matt Ion
Dec 12, 2015

Depends on how the failover works. With the Vigil systems, when we do use a RAID, we use it as the main storage, with the NVR's internal drive(s) for failover. Anything recorded to the failover drives gets indexed and is searchable normally, right alongside the other data. So we just need to disconnect the RAID's iSCSI port during the rebuild (or disconnect via the iSCSI initiator), and the NVR automatically uses the internal drives until it's back online. The user never knows the difference (other than the "you're now recording to backup drives" popup).

Edit: besides, if you're taking the array offline to rebuild, it shouldn't take "days" - the whole point is to reduce that time.

AT
Andrew Thomas
Dec 12, 2015

And definitely schedule weekly / monthly controller verify tests...

MG
Michael Goodwin
Dec 14, 2015

If you can, you should do two sets of tests: 1) extended SMART tests (essentially checking the drives for bad sectors) and 2) array parity checks.

Some NASes will let you schedule both and send you notifications of issues.
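
[Note: On a plain Linux software-RAID (mdadm) box, those two checks can be scripted directly. A minimal sketch, assuming smartmontools is installed, the drives appear as /dev/sd?, and the array is /dev/md0 - hardware controllers and NAS units have their own equivalents.]

```python
import glob
import subprocess

# 1) Kick off an extended (long) SMART self-test on every SATA/SAS drive.
for dev in sorted(glob.glob("/dev/sd?")):
    subprocess.run(["smartctl", "-t", "long", dev], check=False)

# 2) Start an md parity/consistency check on the array (requires root).
with open("/sys/block/md0/md/sync_action", "w") as f:
    f.write("check")
```

Run something like this from a monthly cron job, then follow up by reading the SMART self-test results and /sys/block/md0/md/mismatch_cnt to approximate the notifications a NAS would send.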
