Hi!
I was asked to test our current lineup of drives (WD Purple 1TB-6TB) to see how many cameras a single drive can handle.
I then decided to add the 10TB and 12TB models as well, since they have 256MB of cache, to see if that makes a difference.
My methodology is as follows:
I connect 32x 4MP or 8MP cameras to a server (Xeon E5-1650 v4 with Samsung 970 EVO SSDs in RAID to avoid bottlenecking) and set them all up identically.
The first test would be "32x 4MP 30FPS H.264", checking the results for stuttering/write errors/delays.
Pretty sure it's going to fail already.
For the second test I'd do 32x at 20 FPS, then go on to 15 and 12 FPS as well, and then lower the resolution to 2MP (or 4MP if I go with 8MP cameras).
Each test will run for at least 24 hours, with motion on all cameras 100% of the time (a computer screen in front of the cameras seems to give the highest load, since all pixels change).
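To know roughly what write load each configuration puts on a drive before running it, something like the sketch below can tabulate the expected sustained write rate. The per-camera bitrates are my assumptions for worst-case H.264 with constant motion, not measured values; substitute whatever your cameras actually report.

```python
# Rough write-load estimate per test configuration (a sketch, not measured data).
# Bitrates below are assumed worst-case H.264 values with constant motion.

ASSUMED_BITRATE_MBPS = {  # (megapixels, fps) -> Mbit/s per camera (assumption)
    (2, 12): 2.0, (2, 15): 2.5, (2, 20): 3.0, (2, 30): 4.0,
    (4, 12): 4.0, (4, 15): 5.0, (4, 20): 6.0, (4, 30): 8.0,
    (8, 12): 8.0, (8, 15): 10.0, (8, 20): 12.0, (8, 30): 16.0,
}

def aggregate_write_load(cameras: int, megapixels: int, fps: int) -> float:
    """Total sustained write load in MB/s that lands on the recording drive."""
    mbit_per_cam = ASSUMED_BITRATE_MBPS[(megapixels, fps)]
    return cameras * mbit_per_cam / 8  # Mbit/s -> MB/s

if __name__ == "__main__":
    for mp in (2, 4, 8):
        for fps in (12, 15, 20, 30):
            load = aggregate_write_load(32, mp, fps)
            print(f"32x {mp}MP @ {fps}fps -> ~{load:.0f} MB/s sustained writes")
```

Comparing those numbers against the drive's sustained-write spec should tell you in advance which cells of the table are even plausible.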
I then want to make a table like the following for each test:

2MP Test | 12fps | 15fps | 20fps | 30fps
1TB      | x     | x     | x     | x
2TB      | x     | x     | x     | x
3TB      | x     | x     | x     | x
4TB      | x     | x     | x     | x
6TB      | x     | x     | x     | x
8TB      | x     | x     | x     | x
10TB     | x     | x     | x     | x
12TB     | x     | x     | x     | x
With the x's being the maximum number of cameras.
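To catch failures objectively during the 24-hour runs (rather than relying on spotting stutter by eye), I'm thinking of logging the drive's write throughput alongside the VMS so dips line up with dropped frames. A minimal sketch using psutil is below; "sdb" is a placeholder for whatever device name the WD Purple under test gets.

```python
# Minimal disk-write logger to run alongside each 24-hour test (a sketch).
# Samples psutil's per-disk counters so throughput dips or write-time spikes
# can be matched against the moments the VMS reports stutter or dropped frames.

import csv
import time
import psutil

DEVICE = "sdb"       # assumption: the recording drive under test
INTERVAL_S = 10      # sample every 10 seconds
LOGFILE = "drive_log.csv"

def snapshot():
    # Per-disk counters; write_bytes and write_time (ms) are cumulative.
    return psutil.disk_io_counters(perdisk=True)[DEVICE]

with open(LOGFILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "write_MB_s", "write_time_ms_per_interval"])
    prev = snapshot()
    while True:  # runs until interrupted; start/stop with each test run
        time.sleep(INTERVAL_S)
        cur = snapshot()
        mb_per_s = (cur.write_bytes - prev.write_bytes) / INTERVAL_S / 1e6
        busy_ms = cur.write_time - prev.write_time
        writer.writerow([round(time.time()), round(mb_per_s, 1), busy_ms])
        f.flush()
        prev = cur
```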
This is definitely very platform/VMS dependent, so I'm going to test it with the VMS we're currently distributing and try an open-source alternative as well.
Is this the correct way to test, or can anyone think of a better test procedure I could follow?
Thanks for your input!