How Do I Determine Max Supported Bit Rates For A Server?

How do I determine the max supported bit rate of a server (Dell PowerEdge R730xd with 16 x 6TB NL-SAS HDDs) which will be used for running the VMS and storage?

There are often VMS specific limitations. What VMS are you planning to use?

Either Axxonsoft or Milestone.

We would need a lot more info than given to get you an accurate assessment.

First off, what NICs are you using and how many?

What RAID level and controller?

Which OS?

How much RAM?

Are you running server based analytics (Server based motion)?

How many clients will be viewing/playback simultaneously?

Hi Jon

NICs - 4 x 1GbE

RAID 6 / PERC H330


No server based analytics for simplicity.

Max of 5 clients

OS Win Server 2012

The H330 doesn't support RAID 6, but it does support RAID 5. It also lacks any cache, so it will most likely be your bottleneck. I don't have first-hand experience with the H330 in a RAID 5 setup, but reports from others in Dell forums suggest a measly 30 mb/s throughput!

I would highly recommend considering the better H730 controller instead. Reports say that with that controller (1GB cache) and RAID 5 you should expect 100 MB/s. RAID 6 will be a little slower than that.

Also, the SAS-NL drives aren't that fast themselves. If max throughput is required, consider 10k or 15k drives.

Make sure you get the battery backup option for the RAID controller. If you have a power loss and data is in the cache, but not yet written to hard disk, you're going to get corrupted data.

Jon, I see you are only considering the recording server disk throughput, and I totally agree with your comments (100 MB/s).

However, if we are using the Milestone VMS: based on your experience, what is the maximum recording server bandwidth in Mbps?

Based on the Milestone calculator, I cannot see any limitations; the server can handle more and more as long as your RAID disk configuration can support the recording server's total disk throughput.

thank you

This is a very complex question to answer accurately.

In most servers the hard drives are going to be the bottleneck, and you can determine a theoretical best-case by looking at drive specs and write performance impact of whatever RAID level you're using.

Because you're talking about a specific use for the server, as a VMS/NVR, I would suggest you talk to the software company (or companies) you plan to use. VMSes store and manage data in different ways, which means the same server might be able to ingest 300 Mbps of video with one VMS and only 250 Mbps with another.

How the server is intended to be used will also affect this, and it too varies by VMS. If users are going to be constantly looking through recorded video, and the VMS transcodes the raw video before sending it to the client, you will likely get a lower performance number than if the machine is used mostly for recording, with stored video only rarely accessed.
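As a rough sanity check on that best-case math, here is a small Python sketch. The write-penalty table uses the classic per-RAID-level IOPS penalties, and the per-drive figure is an assumption; a real controller (cache, stripe size, queue depth) will move these numbers considerably.

```python
# Back-of-envelope RAID throughput estimator -- a sketch, not a benchmark.
# Per-drive MB/s and the penalty table are assumptions.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}  # IOs per logical write

def raid_throughput(drives, per_drive_mbs, level, write_fraction=0.75):
    """Theoretical aggregate MB/s for a RAID level and read/write mix."""
    raw = drives * per_drive_mbs
    penalty = RAID_WRITE_PENALTY[level]
    # Reads cost 1 IO each; writes cost `penalty` IOs each.
    return raw / ((1 - write_fraction) + write_fraction * penalty)

# e.g. 16 drives at an assumed ~30 MB/s of random throughput each,
# RAID 6, 25% read / 75% write:
print(round(raid_throughput(16, 30, 6), 1))  # 101.1
```

That lands in the same ballpark as the RAID-calculator figures quoted later in the thread, which is about all a model this crude can promise.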

Not sure about Axxonsoft or Milestone but Avigilon has a storage throughput tool to test the storage read/write speeds. It is a very basic CLI tool but we use it to test SAN storage and custom servers to see how much we can throw at them. I would think other manufacturers would have something similar?
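For anyone without a vendor tool, a very crude sequential-write check can be scripted. This Python sketch (the path is a placeholder) only measures buffered sequential writes with a final fsync, so it is far less rigorous than a purpose-built tester, but it gives a first-order number.

```python
# Crude sequential-write benchmark. Not a substitute for a real storage
# tester: no O_DIRECT, single-threaded, sequential only.
import os
import time

def write_throughput_mbs(path, total_mb=1024, block_mb=8):
    block = os.urandom(block_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
    elapsed = time.monotonic() - start
    os.remove(path)
    return total_mb / elapsed  # MB/s

# Point it at the RAID volume under test, e.g.:
# print(write_throughput_mbs("D:/testfile.bin"))
```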

The spec for the Avigilon HD Premium NVR suggests it is an OEM version of the Dell R720xd with iDRAC8 (Integrated Dell Remote Access Controller) Express remote management. Avigilon advertises a recording rate of 1350 Mbps (10GbE network) with RAID 6 (96TB raw / 84TB usable) and 16 x NL-SAS HDDs.

They only mention the max supported recording rate. I am not sure if this number is sacrosanct as the performance could change dramatically based on number of clients asking for playback and live viewing.

If I were a betting man, I would bet the house you never come close to 1350 Mb/s with that config.

Is there a reason you want to use RAID 6 instead of RAID 5? It seems like performance is important here. RAID 6 rebuilds will take forever!

Also, those NL-SAS drives are simply 7200 RPM SATA drives with a SAS interface. Which means 600 mbps per disk, tops. With 16 drives in a single RAID 6 array, figuring 25% read / 75% write, the theoretical max throughput is around 107 mbps, according to a RAID calculator that I found online.

Putting 16 Drives in a RAID 10 array will give you 16X Read speed and 8X Write Speed.

So 16 x 4TB drives will effectively give you 32TB (32,768 GB) of usable space.

It will also give you 1360 Mbps read and 2720 Mbps write.

This is based on an average of 170 R/W per drive.

16X Read speed and 8X Write Speed.

but then

1360 Mbps read and 2720 Mbps write.

Is that correct?

I'm sorry, it's the other way around.

So it's:

2720 Mbps read and 1360 Mbps write.
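A quick sketch of that corrected arithmetic (the 170 per-drive figure is the assumed average from above; RAID 10 can read from every disk, but every write hits a mirrored pair):

```python
# Verify the RAID 10 numbers: N-way read scaling, N/2 write scaling,
# and half the raw capacity usable. 170 per drive is an assumption.
drives = 16
per_drive = 170

read_speed = drives * per_drive          # all 16 spindles serve reads
write_speed = (drives // 2) * per_drive  # each write lands on a mirror pair

print(read_speed, write_speed)  # 2720 1360

drive_tb = 4
usable_gb = (drives * drive_tb // 2) * 1024  # mirroring halves raw space
print(usable_gb)  # 32768
```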

600 mbps per disk tops. With 16 drives in a single RAID 6 array, figuring 25% read / 75% write, the theoretical max throughput is around 107mbps, according to a RAID calculator that I found online.

That's not what I'm getting from that calculator:

Which gives ~2000 Mbps.

Any chance you are talking MB/s and not mb/s?

Looking back, I think I did make that mistake, MB vs Mb. My bad. However, I still am not seeing a value as high as you are showing for NL-SAS drives. I was very generous in giving the highest value per drive in the calculator and the results are shown below:

You gave a value of 73 MB/s for a single drive performance, but the chart shows a range of 24.3 to 32.1 MB/s for 512 blocks (which is what the Seagate specs show for their Enterprise NL-SAS 6TB drive). You used the IOPS value with the MB/s radio button selected.

So, with all of this taken into account, I still think it is much lower than the max for NL-SAS of 32, more likely middle of the chart, around 28 MB/s, which gets you 82 MB/s, or 656 Mb/s. This is about half of the amount some have stated, or less.

Change this config to RAID 5 and you will see a gain in total throughput, up to 120 MB/s or 960 Mb/s.

Now, if this guy's chart or calculator isn't accurate, toss it all out, FWIW.

You gave a value of 73 MB/s for a single drive performance...

I used 73 MB/s because I was trying to use your numbers, you said

600 mbps per disk tops. With 16 drives in a single RAID 6 array, figuring 25% read / 75% write, the theoretical max throughput is around 107mbps...

600 Mbps = 75 MB/s; the 3 in 73 must have been a typo.

Also, I used 16 drives, like you said initially, but now you are using 14?
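Since the Mb/MB mix-ups keep biting in this thread, a trivial helper makes the conversion explicit (the factor is just 8 bits per byte):

```python
def mbps_to_mb_per_sec(mbps):
    """Megabits per second -> megabytes per second (8 bits per byte)."""
    return mbps / 8

print(mbps_to_mb_per_sec(600))  # 75.0 -- so 600 Mb/s per disk is 75 MB/s
```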

Oops on the 14 drives, loooong day. Here it is with 16 drives. That ups it to 107 MB/s, or 856 Mb/s. Still a far cry.

As far as the 600 Mb/s max performance, that isn't real world realistic. Maybe in a sequential read, but not any type of random read, or any kind of write speed.

As far as the 600Mb/s max performance, that isn't real world realistic.

Again, Jon, I was only trying to replicate *your* findings, using the only numbers *you* gave.

In any event, 75 MB/s is likely high, but 32 MB/s is just as likely low, since surely there is a great deal of sequential writing taking place, no?

Regardless, I haven't used these controllers and therefore don't have any skin in the game here. ;)

Jon I would take that bet. Having installed the new servers I would not be surprised if we could exceed the recommended specs.

With RAID 6 on a H330 controller with NL SAS drives? Love to see it.

...I would bet the house...

Jon I would take that bet.

High stakes.

"...I would bet the house...

Jon I would take that bet.

High stakes."

That really depends on the house being wagered...

In my experience, an MSI (Z77) desktop mobo with an i7, 8GB RAM, Win7 Pro, and the integrated Intel RAID controller (4 x 4TB WD Red HDDs in RAID 5) can continuously handle ~300 Mbit/s of incoming video data (Avigilon) while serving one client with 4 monitors. So your comments about professional servers with pro RAID controllers only handling 30 Mbit/s sound fairly shocking to me?! Our 'servers' have been running for 2-3 years by now without any issue and/or data loss on a dedicated LAN.

I mistakenly said 30 mb/s when I meant 30 MB/s, which equates to 240 Mb/s. The reason being that the H330 isn't really a "pro raid controller". It doesn't have cache. It doesn't support RAID 6. It is about as low as you can buy, as far as performance is concerned. It should perform about the same as your config above.

Full disclosure - I'm a backline support manager at Milestone

I won't comment specifically on the possible throughput with this hardware as it's not my specialty. I'm sure our presales support team would be able to help here. Normally we will take the project specifications and provide server specifications based on requirements. Those can usually be adjusted based on goals like "I want 150 cameras per server" as well. But we are often asked to "reverse engineer" the specification based on the available hardware such as in this case.

With regard to total possible throughput/bitrate on the Milestone platform, you are mostly limited to the throughput of the storage, and it also matters how frequently you're accessing recent recordings. On a system where playback is infrequent, and usually on archived footage, the disk access is almost entirely sequential. But in environments where operators are frequently doing quick replays of video and lots of timeline scrubbing, you'll have a higher rate of random disk access which obviously will reduce the amount of time available for writes.

Most of the focus of this discussion is based on the storage because that is almost always the first bottleneck encountered on Milestone products. CPU and RAM are important too, but at 128GB of RAM you should not hit any memory limits and with a reasonable CPU passmark score, you are not likely to max out the CPU unless you...

  • Use transcoding - modify image quality in Smart Client which forces the server to convert the video down to MJPEG at desired quality
  • Use motion detection on more than keyframes - blindly doing server-based motion detection on all frames on all cameras is not recommended. If it is needed for some cameras/scenes, enable it where required
  • Use motion detection with more than 200-300 cameras - If your storage can handle the throughput of more than 200-300 cameras, you may need to consider disabling server-based motion detection and either use edge-based motion detection or record always. This will nearly eliminate CPU usage resulting in fairly minimal usage even with several hundred cameras

If I had to guess, I would say most customers do not exceed 150-200 cameras per server. There are certainly customers who push well beyond that, and our Husky M550A specifications quote a possible recording rate of 1400 Mb/s. There are some caveats there, I am sure.
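As a back-of-envelope cross-check of that per-server camera count, assuming a made-up but typical ~4 Mb/s per camera and an array that sustains ~100 MB/s of writes:

```python
# Cameras-per-server estimate. Both inputs are assumptions: real bitrates
# vary hugely with resolution, codec, frame rate, and scene activity.
def cameras_supported(storage_mb_per_sec, camera_mbps=4.0):
    storage_mbps = storage_mb_per_sec * 8  # MB/s -> Mb/s
    return int(storage_mbps // camera_mbps)

print(cameras_supported(100))  # 200
```

Which lines up with the 150-200 cameras-per-server ballpark, assuming the storage figure holds.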

There have been some great recommendations here with regard to the controller - it's important not to skimp on the RAID controller. RAID 5/6 requires a lot of compute to generate checksums on incoming data which can limit the throughput. And you generally want to have a battery-backed write cache with write-back enabled for best performance.

Ultimately, it would be best to engage with Axxonsoft and Milestone presales with regard to the performance expectations of your specific hardware and expected usage. If you do, it would be great to see what the results were!