We use 4TB drives all the time in RAID 5. A 28TB array is fine IMO.
What controller card are you using?
Rebuild times can vary a lot depending on the RAID controller you are using. 4-5 days is a very long time for a small/medium RAID array (under 10 drives). I would look at changing RAID cards. 4TB drives are fine for RAID arrays - I would definitely stick with Enterprise-class drives when using more than 5 drives in an array (such as the WD RE4, which has been excellent for us over the last 5 years).
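As a rough sanity check, a rebuild is bounded by how fast the replacement drive can be written (plus controller overhead and any competing recording load). Here's a back-of-envelope sketch; the rebuild rate is an assumption you'd have to measure on your own controller and drives:

```python
# Rough rebuild-time estimate: the controller must read the surviving drives
# and write the full capacity of the replacement drive.
# The rebuild rate below is an assumption - measure your own hardware.

drive_tb = 4                 # size of the failed drive (TB)
rebuild_mb_per_s = 80        # assumed effective rebuild rate under recording load (MB/s)

seconds = (drive_tb * 1e12) / (rebuild_mb_per_s * 1e6)
print(f"~{seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")
# ~14 hours at 80 MB/s; a 4-5 day rebuild implies well under 15 MB/s effective,
# which points at the controller, the disks, or heavy concurrent load.
```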
What is the maximum number of drives, and size of drive, you would use in a RAID 5 setup?
What's the reason for the RAID?
Performance, Availability, or Integrity?
While I agree with the previous posts,
I have a hard time construing any surveillance storage array that takes more than 24 hours to rebuild as possessing the virtues of performance, availability or integrity.
Redundancy (the R in RAID) is often cited as the primary reason for RAID. Availability, though, is the primary expectation one has for making storage redundant.
Long rebuild times are not deeply concerning if enough fail-over storage has been integrated at the VMS and/or platform level. Losing data during a rebuild is a bit more troubling.
IMO the proliferation of 12MP cameras, 8TB disks, 10Gb SANs and beyond will make things difficult for integrators not experienced in working with distributed file systems, copy-on-write file systems, or some of the newly emerged alternatives to traditional RAID storage.
After losing a 21TB RAID 5 array during a rebuild, I wouldn't use RAID 5 again for anything over 10TB.
RAID 6 or RAID 10 for anything big now...
The RAID controller cards being used would be the LSI 9271-4i/8i and the LSI 9260-4i/8i. The 9271 has 1GB of memory vs. the 9260, which has 512MB. What brands of RAID controller cards are members using?
You really don't want to use RAID 5 if you can avoid it at all. RAID 5 is notorious for failing rebuilds, and the risk grows with the size of the storage pool.
As far as times go, about a week sounds right for that size storage pool.
I had a manufacturer tell me "Friends don't let friends use RAID 5..."
I personally will use RAID 5 if the array is small enough, ~12TB or less (4 drives).
RAID 6 or RAID 1+0 (10) is a far better approach. Global hot spares are a must if using RAID 5, and use a quality RAID controller, none of the software-RAID crap... do it right so you can sleep well at night!
If you have the option and the power, RAID 6 is my preference. We're now moving off the 4TB drives at $185 each and on to 8TB drives at $345 each.
Rebuild time does not matter much; your arrays should have hot spares configured. And if one of your drives is going to die hard enough, it'll knock the whole array offline anyway.
We tend to go with arrays no larger than 12 drives; my preference is usually 8.
Has anyone actually got data from testing they've done? I have typically always used RAID 5 setups with a hot spare. Using a hot spare is common sense - you always want the array to start rebuilding straight away instead of waiting for a new drive to be shipped and then scheduling a site visit, etc.
Yes, RAID 6 gives you fault tolerance for one extra drive, but you also lose out on write performance, which has to be considered. RAID 10 is simply too expensive for a system with larger storage needs.
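To put rough numbers on that trade-off, here's a quick comparison sketch for an example 8 x 4TB array; the write-penalty figures are the usual textbook I/O multipliers for parity RAID, not measurements from any particular controller:

```python
# Usable capacity and nominal write penalty for an example 8 x 4TB array.
# Write penalty = back-end I/Os per front-end random write (textbook values);
# real-world impact depends heavily on controller cache and workload.

drives, size_tb = 8, 4

layouts = {
    "RAID 5":  {"usable": (drives - 1) * size_tb,   "write_penalty": 4},  # read+write data and parity
    "RAID 6":  {"usable": (drives - 2) * size_tb,   "write_penalty": 6},  # two parity blocks to update
    "RAID 10": {"usable": (drives // 2) * size_tb,  "write_penalty": 2},  # mirrored write
}

for name, v in layouts.items():
    print(f"{name:7s} usable: {v['usable']:2d} TB  write penalty: {v['write_penalty']}x")
# RAID 5: 28 TB, RAID 6: 24 TB, RAID 10: 16 TB usable from the same 8 drives.
```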
On the Synology units we find the rebuild times are sub-24 hours. It sounds like your controller or disks are slow - that, or is it a massive single array?
RAID 6 keeps the array usable immediately after a drive failure thanks to its second level of redundancy, so the rebuild can run as a low-priority task while the video system maintains its heavy load. Enabling a fast-rebuild option may impact the online processes. The old rule of thumb for best RAID 5 performance was a minimum of 5 disks, and above that N+1 for RAID 5; RAID 6 would imply N+2. However, that was pure SCSI, long before the drives we have today with internal virtual blocking and more, so that rule may not apply. But RAID 6 at minimum.
We use RAID 10 whenever possible, use at least two buses on the controller, always match up the RAID 1 pairs on a 1-to-1 basis between the two channels, and only use enterprise drives.
Historically, enterprise drives are rated for roughly 1 unrecoverable read error per 10^16 bits read. Video and NAS drives are around 1 in 10^15, and desktop drives 1 in 10^14. These numbers are the reason RAID 5 rebuilds can fail.
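You can turn those spec-sheet rates into a rough rebuild-failure estimate. The sketch below assumes every surviving drive must be read end-to-end and that unrecoverable read errors are independent per bit, which oversimplifies real drive behavior, but it shows why large RAID 5 arrays on 10^14-class disks are risky:

```python
import math

# Rough probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID 5 array. Assumes the remaining (n-1)
# drives are read in full and UREs are independent per bit - a simplification.

def rebuild_ure_risk(drives, drive_tb, ure_rate):
    bits_read = (drives - 1) * drive_tb * 1e12 * 8
    # P(at least one URE) = 1 - (1 - 1/rate)^bits; log1p/expm1 keep the math stable
    return -math.expm1(bits_read * math.log1p(-1.0 / ure_rate))

for label, rate in [("desktop 1e14", 1e14), ("NAS 1e15", 1e15), ("enterprise 1e16", 1e16)]:
    risk = rebuild_ure_risk(drives=8, drive_tb=4, ure_rate=rate)
    print(f"8 x 4TB RAID 5, {label}: ~{risk:.0%} chance of a URE during rebuild")
# Roughly 89% / 20% / 2% - which is why drive class matters so much for RAID 5.
```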
Something to keep in mind (I don't think I saw it mentioned yet) is that a rebuild will take a lot longer if you're still using the volume while it's rebuilding. If you have some form of failover storage, try forcing the system to use that, and you should find the rebuild happens a lot faster.
And definitely schedule weekly/monthly controller verify tests...