Subscriber Discussion

I'm About To Test WD Purple HDDs For Number Of Streams Possible... Any Ideas?

Joran Vandewoestijne
Mar 12, 2019
Visuatech • IPVMU Certified

Hi!

I was asked to test our current lineup of drives (WD Purple 1TB-6TB) for how many cameras one of them can handle.
I then decided to add the 10-12TB models as well, since they have 256 MB cache, and see if that makes a difference.

My methodology was as follows:
I connect 32x 4MP or 8MP cameras to a server (Xeon E5-1650 v4 with Samsung 970 EVO SSDs in RAID to avoid bottlenecking) and set them all up identically.
The first test would be "32x 4MP 30 FPS H.264", checking results for stuttering/write errors/delays.
Pretty sure it's going to fail already.
For the second test I'd do 32x 20 FPS, go on to 15 and 12 FPS as well, and then lower the resolution to 2MP (or 4MP if I go with 8MP cameras).
Each test will run at least 24 hours, with motion on all cameras 100% of the time (a computer screen in front of the cameras seems to give the highest load, since all pixels change).

I then want to make a table like the following for each test:

2MP Test
         12 FPS   15 FPS   20 FPS   30 FPS
 1 TB    x        x        x        x
 2 TB    x        x        x        x
 3 TB    x        x        x        x
 4 TB    x        x        x        x
 6 TB    x        x        x        x
 8 TB    x        x        x        x
10 TB    x        x        x        x
12 TB    x        x        x        x

With each x being the maximum number of cameras.
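As a rough sanity check before the 24-hour runs, the raw write bandwidth each table cell implies can be estimated up front. This is a sketch only: the per-stream bitrates and the drive's sustained-write figure below are placeholder assumptions, not measured values.

```python
# Rough upper bound per table cell: how many cameras before raw
# aggregate bandwidth alone saturates the drive. All figures below
# are assumed placeholders, not measurements.

ASSUMED_MBPS = {  # (megapixels, fps) -> assumed H.264 bitrate in Mbit/s
    (2, 12): 1.5, (2, 15): 2.0, (2, 20): 2.5, (2, 30): 4.0,
    (4, 12): 3.0, (4, 15): 4.0, (4, 20): 5.0, (4, 30): 8.0,
}

SUSTAINED_WRITE_MB = 150  # assumed HDD sustained sequential write, MB/s

def max_cameras(megapixels: int, fps: int) -> int:
    """Camera count at which raw bandwidth alone would saturate the drive."""
    per_stream_mb = ASSUMED_MBPS[(megapixels, fps)] / 8  # Mbit/s -> MB/s
    return int(SUSTAINED_WRITE_MB / per_stream_mb)

for mp, fps in sorted(ASSUMED_MBPS):
    print(f"{mp} MP @ {fps} FPS: <= {max_cameras(mp, fps)} cameras")
```

In practice seek overhead, the VMS's write pattern, and indexing dominate well before this raw-bandwidth ceiling, which is exactly the gap the empirical test should expose.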

This is definitely very platform/VMS dependent, so I'm going to test it in the VMS we're currently distributing, and try with an open source alternative as well.

Is this the correct way to test, or does anyone think of a better test procedure I can follow?

Thanks for your input!

U
Undisclosed #1
Mar 12, 2019
IPVMU Certified

Save time and money by using a virtual camera simulator; Axis, Sony and others have them. Read more here:

IP Camera Stream Simulator

Joran Vandewoestijne
Mar 13, 2019
Visuatech • IPVMU Certified

That was actually my first idea, but after checking the data it seems higher compression was applied, since it was re-encoded as a new H.264 stream sent from another server, so we can't compare it 1:1.
Since we're a distributor, adding real cameras isn't an issue at all.

Morten Tor Nielsen
Mar 13, 2019
prescienta.com

The resolution and frame rate are irrelevant for a test of the storage system. What matters is the bitrate of those streams, and the bitrate profile of those streams (i.e. VBR/CBR and I-frame interval).

E.g. you can have a 4 megapixel H.264 stream at 30 fps encoded as a 512 Kb/s CBR stream with a 90-frame I-frame interval, and a 2 megapixel H.264 stream at 10 fps encoded as a 2 Mbit/s VBR stream with a 5-frame I-frame interval. The latter produces a lot more strain on your HDD than the former (but not necessarily all the time).
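Putting rough numbers on those two example streams, using only the figures from the paragraph above:

```python
# Average write load and I-frame spacing for the two example profiles.

def stream_profile(bitrate_kbps: float, fps: float, iframe_interval: int):
    """Return (average KB/s written, seconds between I-frames)."""
    avg_kbytes_per_s = bitrate_kbps / 8      # Kb/s -> KB/s
    iframe_every_s = iframe_interval / fps   # frames / (frames per second)
    return avg_kbytes_per_s, iframe_every_s

# 4 MP, 30 fps, 512 Kb/s CBR, 90-frame I-frame interval
print(stream_profile(512, 30, 90))    # -> (64.0, 3.0): 64 KB/s, I-frame every 3 s

# 2 MP, 10 fps, 2 Mbit/s VBR, 5-frame I-frame interval
print(stream_profile(2000, 10, 5))    # -> (250.0, 0.5): 250 KB/s, I-frame every 0.5 s
```

So the lower-resolution, lower-frame-rate stream writes roughly four times as many bytes per second, and delivers its large I-frames six times as often.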

Caching is useful if you're writing a file, and then modifying it over and over again (the file content is then kept in the hot cache), or if you're writing a bunch of files, and then taking a break and not writing anything for a while (files then migrate from hot cache to the "slow" platters). That's the normal usage pattern of a PC user, but a recorder is a bit different as it stores a lot more than it serves.

If you're recording every camera, all the time, then your platter-speed must be able to meet the sustained bandwidth delivered from the IO system. A cache will then be able to absorb spikes that are temporary. If the inbound data-rate is sustained over days and weeks, then the cache probably doesn't do much for you. Although, I should say that temporary spikes can and do occur - an I frame in a VBR stream is often orders of magnitude larger than the P-frames, so that alone may be a good reason to have some caching take place.

Instead of using the "emulators", I would suggest that you use FFmpeg (free) and get either Evostream, Wowza or Live555 (free?). FFmpeg lets you encode video to meet various resolution and bandwidth criteria. You can then serve a known video file to the aforementioned servers. In your VMS you simply add a "generic RTSP" camera and point it at whatever server you set up. This allows you to run the same clip through the VMS over and over. If you're using live feeds, your bitrate may change depending on time of day, activity in the frame and so on. The servers mentioned can serve hundreds of streams and will almost certainly saturate your network if you want to.
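A minimal sketch of that FFmpeg workflow, building the command lines rather than running them so each parameter is visible; the file names, bitrate profile and RTSP URL are placeholders, and the RTSP server itself (Wowza, Live555, etc.) has to be set up separately.

```python
# Build (not run) the two FFmpeg command lines: one to transcode a
# reference clip to a fixed bitrate/FPS/GOP profile, one to loop that
# clip forever into an RTSP server at native speed.

def encode_cmd(src: str, dst: str, kbps: int, fps: int, gop: int) -> list:
    """Transcode src to a roughly-CBR H.264 clip with a fixed GOP."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx264",
            "-b:v", f"{kbps}k", "-maxrate", f"{kbps}k",
            "-bufsize", f"{2 * kbps}k",        # constrain rate control
            "-r", str(fps), "-g", str(gop),    # frame rate, I-frame interval
            "-an", dst]                        # drop audio

def push_cmd(clip: str, rtsp_url: str) -> list:
    """Loop the clip endlessly and push it to an RTSP server in real time."""
    return ["ffmpeg", "-re", "-stream_loop", "-1", "-i", clip,
            "-c", "copy", "-f", "rtsp", rtsp_url]

print(" ".join(encode_cmd("src.mp4", "ref_2mbit.mp4", 2000, 15, 60)))
print(" ".join(push_cmd("ref_2mbit.mp4", "rtsp://127.0.0.1:8554/cam01")))
```

Because the push side uses `-c copy`, every replay delivers byte-identical frames, which is what makes back-to-back A/B runs comparable.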

Joran Vandewoestijne
Mar 13, 2019
Visuatech • IPVMU Certified

Hi Morten,

Using a secondary streaming platform like FFmpeg + Wowza is indeed a great test as well.
The problem is we sell to installers who don't always have the time/knowledge to configure cameras as well as possible, so I want to try it out using the cameras and settings as they are by default, because that's how they'll be in the field 80-90% of the time (unfortunately!). I'll make a comparison between the three testing methods, virtual cameras, local cameras and a "controlled stream", and see how different they are.

U
Undisclosed #1
Mar 13, 2019
IPVMU Certified

This allows you to run the same clip through the VMS over and over.

Agreed.  

You are going to want to do this, since even if the random variance is just 20%, that will make it nearly impossible to A/B small changes to the environment.

On the other hand, I do like the idea of using 32 real cameras, just because then you don't have to deal with the issues of setting up a replay server and making sure its performance is not impacting the test.

Here's a crazy idea, assuming hardware is not a problem: after recording the 32 real cameras to the VMS for a bit, make a screen export of it (like in Nx Witness, where you can capture the whole screen, or use a grabber).

Then point the cameras at a big screen or three, each zoomed into its own low res playback window, while the “movie” is replayed as many times as you like.

Even easier, you could just play back a battle scene from "Braveheart", with each camera zoomed to a different section. On an 8K monitor ;)

Morten Tor Nielsen
Mar 13, 2019
prescienta.com

I'm curious about the cache making a difference. Hopefully you'll share your findings.

UM
Undisclosed Manufacturer #2
Mar 13, 2019

When you are doing this testing, if you need any help please let me know. I work for WD and, of course, will let the raw results speak for themselves. One of my team members is a subscriber here with many years of lab experience that you may find helpful. Just an offer, and again, zero intent to influence whatever the results are. All we want is a fair shake here.

Joran Vandewoestijne
Mar 13, 2019
Visuatech • IPVMU Certified

Hi! Thanks for the offer. I've already reached out to WD and got a very knowledgeable person on the line, who confirmed that WD's internal testing on my question hasn't concluded yet. He made a ticket for engineering.
I'll let you know if I need any assistance.

Meghan Schwarz
Mar 13, 2019

Hi Sander! 

I'd like to introduce myself, my name is Meghan and I'm the Sales Engineer for WD Purple (I'm the person Undisclosed Manufacturer #2 is referring to :))  I've reached out to our WD Purple Applications Engineering team, and they are not the engineering team that you have contacted -- but this is the team that will be able to provide your best support.  No tickets necessary, we're your guys.  Would you mind sending me an email so we can work together directly?

meghan.schwarz@wdc.com

Thanks!

UI
Undisclosed Integrator #3
Mar 13, 2019

If possible, it would be interesting to see how the test results of the WD Purple HDDs vary under stress conditions. Consider having a baseline from the original results and then seeing how factors such as inadequate cooling, or a failed HDD in the RAID, affect the drives.

Joran Vandewoestijne
Mar 14, 2019
Visuatech • IPVMU Certified

I'm also going to test the 2.5" Seagate Barracudas which we sell in smaller systems, and I have some other drives around, like WD Greens and 2.5" Toshibas, which I might use to see how they handle the stress test, but it won't be really fair, as it's only 24 hours, which shouldn't break anything anyway. I'm not going to try and purposely break one with bad cooling though; that would invalidate the entire test imo.

Craig Mc Cluskey
Apr 08, 2019

I'm not going to try and purposely break one with bad cooling though, that would invalidate the entire test imo.

After you have all your data on how the various disks perform, you could simulate something which probably happens out in the "real world":

* A system has been running a long time, being monitored by people who know nothing about computers, and dust builds up, decreasing the cooling efficiency (though you don't have to throw dust at it), or,

* someone is using tissues and one blocks an air inlet, or,

* someone moves the computer up against something put behind it and blocks an air outlet.

Lots of things can happen which will cause the system to overheat; not everyone has a system mounted in an enclosed 19" rack.

Craig
