How Should IPVM Test VMS Server Performance?
IPVM is going to start testing VMS server performance. Obviously, there are a lot of factors that come into play (CPU, RAM, motion detection - on or off, number of cameras, resolution, bitrate, to name just a few).
This discussion is to gather feedback from the community about how we should do it. If you are a manufacturer, integrator or a technologist with an opinion here, please speak up now.
We will be testing a number of VMSes, with the goal of better understanding key performance drivers.
A few immediate questions:
- What hardware should we use? Certain brands? Certain components?
- What cameras should we use?
- What issues should we beware of?
If there are things that you find essential, let us know.
It will be impossible to make everyone happy as there are far too many possible combinations. However, we want to have a reasonable set of equipment and tests to cover the most common use cases.
What's the server load going to look like, i.e., number of cameras and clients viewing live video?
I would definitely recommend Dell, but there are so many different combinations. I guess a 520 with a single quad-core or six-core processor would be about an average build for 40-80 cameras, depending on the VMS and whether server-side motion was used. A typical Supermicro server build would be good to test on as well. Other than that, I'm not super familiar with any of the other server manufacturers out there.
As far as cameras go, most people have a mix of 720p and higher-res cameras, but it would be cool to see a server loaded down with 5MP cameras to see what that does to performance, especially if you have server-side motion turned on and a high frame rate.
RAID performance would be good to see too, I think: RAID 10 vs. RAID 5 vs. RAID 6.
There's so much like you said it's hard to get it all in :)
What do you think about using VMware to test and measure the performance of the VMS systems?
I want to reiterate for VMS manufacturers. If you want to have input on this, speak up here and NOW. Do not make excuses later. If there is some specific way your VMS must be tested or some specific situations you believe should not be done, say it now so we can evaluate before we start. If you don't like your results after we test, and you didn't speak up now, it's your fault.
Awesome, I look forward to seeing the results.
As this technology develops and more products become available, it is great to have you guys testing products and sharing the results with the community.
Suggestions on Hardware: HP Z220, HP DL380, Dell R210 vs. appliances such as Razberi.
Cameras: single & multi-lens including Arecont 8185DN.
VMS - Milestone, Exacq, Genetec, Avigilon, Geutebruck, Mobotix
Software RAID controller vs. hardware RAID controller
You can't expect the manufacturers to spec the test for you; and even if they did, you'd be running 5 different setups, rendering the test somewhat meaningless.
As an end user, I probably don't care that vendor A has a super-optimized server-side motion detection algo if vendor B supports camera-side detection and thus incurs no processing cost on the server. If my TCO on the system is lower using vendor B, why would I care that A has "magic software" in it?
That said, I'd go with fairly standard equipment, so that you can reliably max out the system. If system A gets maxed out at 20 cameras on an i5, while system B can do 40, I'd be pretty confident that system B would also perform better on a higher end system. If your CPU is crazy fast, you might never max out the CPU, but instead be seeing bottlenecks in the storage system.
I'd stay with just one camera type for this test, but it might be hard to get 50-60 identical cameras. Most VMS manufacturers support a very wide range of cameras; some may be supported very well, and others may have a shitty driver. If you are mixing up cameras, your results may be valid for that particular permutation of cameras, while a - say, Axis-only - setup might give different results. I'd go with Axis, as the drivers for these cameras should be pretty mature by now (for all VMSes).
Then I'd look at the following scenarios:
1) Record everything. Start with one camera at full res, max fps, lowest compression, and keep adding cams until the VMS can't keep up. Some will max out the drive and others probably won't be able to. This will test the efficiency of the storage system. Depending on the write strategy of the different VMSes, you will probably see different results.
2) Server side motion detection: This one is hard, because some will have a more robust (or fine-grained) motion detection, while others will offer only very coarse detection. I suppose that any "shortcut" in the code is ok, as long as the detection is "good enough" (whatever that means). Again, start with one camera and then keep adding cameras until the CPU maxes out.
3) Record on motion: Again, add cameras until you are maxed out. You can use your little train to trigger motion deterministically, or alternatively, point the cameras to a screen that plays the same video loop over and over again.
Testing a client is a project in itself; I've often heard that system A was inferior because, at X cameras, it was at 100% CPU utilization while system B was only at 80%. Upon further investigation, it turns out that system B is only drawing half the number of frames on the screen. The way that the clients present video is very different from one another. One vendor didn't (until recently) support stepping in reverse frame by frame, which surely gave them an edge in terms of CPU and memory use but caused a huge problem in usability.
Clearly, starting a client will impact the VMS. One could set up a baseline - say 16 cameras - and then connect one, then two, then three clients showing a 4x4 view of the cameras. But you should then keep your eyes peeled on the SERVER's CPU utilization (and database performance too).
Just my 2 cents.
First of all, all VMSes must be tested in a 100% identical environment. That means the same physical machine, OS, network and set of cameras must be used for all tests.
SETUP
1) We suggest using at least two different computers:
- low-end, like an Atom with 2GB of RAM and 1-2 HDDs
- mid-range, an i3 or i5 with 4-8GB of RAM and at least 3 HDDs
Tip: some VMSes are optimized for certain CPU instructions that may not be supported by low-end systems (like the Atom), which is why simple extrapolation of results would not work.
We do not think it is necessary to perform tests on high-end server equipment (like Xeon processors, huge RAID arrays, etc.) because:
a) Results obtained from tests on mid-range systems can be extrapolated to the high-end segment,
b) It may require too much setup and too many cameras to really see the difference.
2) As for camera setup: the more different cameras you use, the better it will be. One note: all cameras should be properly discovered and set up in all VMSes.
3) Network. We suggest using two gigabit NICs: one for cameras and one for client connections. In low-end systems, just one NIC could be used. All switches and VMS servers must be connected with CAT6 cables.
TESTING
The major bottlenecks that may be tested are:
- HDD usage (direct correlation with the maximum number of cameras that can be recorded simultaneously)
- CPU usage
To estimate HDD usage:
a) Find software to measure HDD usage (see the write-throughput sketch after this list).
b) Isolate the OS and the VMS archive: do not write the archive to the OS hard disk drive (do not confuse a partition with a physical HDD!). That's why we suggest using 2 HDDs.
c) Set up all VMSes to record with the same FPS, bitrate and resolution. We suggest continuous recording at the maximum fps/resolution/quality the cameras support (not motion recording).
d) Remember that maximum HDD usage occurs when the HDD is writing and erasing the existing archive at the same time. So the disk must be full at the moment of measurement, with the corresponding feature activated in each VMS (the oldest recordings being erased).
e) To ensure that the software can handle the cameras, try opening streams from the cameras in the client (archived footage). Playback should be smooth and frames must not be dropped.
f) The measurement must be taken while no client is connected to the cameras.
g) It will also be useful to compare HDD load while recording is in progress and all cameras are being viewed by a client (archived footage).
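To make step (a) more concrete, here is a minimal sketch of how sustained write throughput on the archive disk could be sampled. It assumes Python with the psutil library is available on the server under test; the disk key name is a hypothetical placeholder and should be replaced with whatever psutil actually reports for the archive drive.

```python
# Minimal sketch: sample sustained write throughput on the archive disk.
# Assumes psutil is installed; "PhysicalDrive1" is a hypothetical key.
# Print psutil.disk_io_counters(perdisk=True).keys() to find the real one.
import time
import psutil

ARCHIVE_DISK = "PhysicalDrive1"   # hypothetical; adjust to your system
INTERVAL = 5                      # seconds between samples
SAMPLES = 12                      # one minute of data

prev = psutil.disk_io_counters(perdisk=True)[ARCHIVE_DISK]
for _ in range(SAMPLES):
    time.sleep(INTERVAL)
    cur = psutil.disk_io_counters(perdisk=True)[ARCHIVE_DISK]
    write_mbps = (cur.write_bytes - prev.write_bytes) * 8 / 1e6 / INTERVAL
    print(f"archive disk write rate: {write_mbps:.1f} Mbit/s")
    prev = cur
```

Comparing these numbers across VMSes at the same camera count, with the disk full and overwriting as described in (d), gives a rough view of how efficiently each one writes its archive.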
To estimate CPU usage:
a) CPU usage can be measured by Windows Process Monitor (a per-process sampling sketch follows this list).
b) It may be necessary to record many cameras to overload the CPU, so either use a lot of HDDs or compare the average CPU load across the different VMSes.
c) Bear in mind that some VMSes do software motion detection or transcoding.
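As a rough illustration of (a), the CPU load of the VMS recording process itself could be sampled rather than the whole machine. This is only a sketch; the process name below is a made-up placeholder, not any particular VMS's actual service name.

```python
# Minimal sketch: average CPU usage of the VMS recording process over a run.
# "vms_server.exe" is a hypothetical placeholder; substitute the actual
# recorder/service process name of the VMS under test.
import time
import psutil

PROCESS_NAME = "vms_server.exe"   # hypothetical name
DURATION = 300                    # seconds to observe
INTERVAL = 2                      # seconds between samples

procs = [p for p in psutil.process_iter(["name"])
         if p.info["name"] and p.info["name"].lower() == PROCESS_NAME]
if not procs:
    raise SystemExit(f"no process named {PROCESS_NAME} found")
for p in procs:
    p.cpu_percent(None)           # prime the per-process counter

samples = []
end = time.time() + DURATION
while time.time() < end:
    time.sleep(INTERVAL)
    # note: per-process values are relative to one core and can exceed 100%
    samples.append(sum(p.cpu_percent(None) for p in procs))

print(f"average CPU across {len(procs)} process(es): "
      f"{sum(samples) / len(samples):.1f}%")
```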
As for memory usage estimation, different manufacturers may use tricks that increase memory usage but decrease CPU usage. Memory is cheap in comparison to CPU or HDD, so I suggest omitting that estimation because it may blur the overall picture.
Saying that you can simply take an i3 or i5 and "scale it up" to a server-grade processor is overly simplistic. Processors have become more and more complex over the years, and there's a lot more behind it than just raw processing. I'm not an expert, but I know enough to say there is more involved.
On small and mid-range VMSes I totally agree with you. When you start getting into VMSes that allow unlimited cameras per server based on hardware, then it becomes extremely important. It also becomes critical when running server-side motion and/or analytics.
*edit* It also becomes critical if you do any virtualization. I know that's not being included, but I just had to throw that out there.
What about network utilization, how many Mbps can the VMS handle?
That should be a main factor in deciding the number of cameras that could be added to a single server. The cost impact of upgrading components on the same server is minor compared to the cost of having to add a second server.
I would recommend considering the number of licenses per OS. Some VMSes max out their licenses per OS before the hardware resources are fully utilized, hence virtualization and all its complexity.
Initially, a VMS should be able to utilize all available network bandwidth. However, the need to record 1 Gbit per second (especially across many concurrent streams) will result in a bottleneck in HDD performance. This will show how well VMSes handle recording several streams simultaneously.
But I agree it may be useful to measure network utilization during tests.
I do agree that 1 Gbit will cause a bottleneck at the storage level, but keep in mind that 1 Gbit would require almost 500 cameras (+/-) at roughly 2 Mbps per stream; consequently, multiple storage heads would be required to contain all of these video streams, so the bandwidth would be split and the HDD bottleneck removed.
If a VMS can support more than a thousand video streams on a standard off the shelf server and single OS, that (in my opinion) could be a decisive factor for the end user and the integrator, the cost impact is not negligible.
I guess what I would like to see is the license limitation of the VMS for a single OS (if any) and the bandwidth limitation (if any).
Are you taking into account the HDD/RAID controller? All the data still has to flow through that before going to the drives. It's true that the higher the spindle count, the more performance you get, which is why huge database deployments use a ton of small, fast drives: they need crazy I/O.
Hello, this is a very big topic and I don't expect it to be covered by one article. Anyhow, we could start testing CPU and memory utilization regardless of the server vendor, since many vendors use the same Intel/AMD processors. Selecting a specific CPU will help establish a baseline for further benchmarks. Choosing a Xeon processor would also be a good choice since it is a server-grade processor.
We could also have an abstraction layer on the camera side, so it doesn't matter which camera, settings or frame rate I'm using; we focus on the overall bitrate and do stress testing to see how much traffic the server can handle. We also have to see the impact of MJPEG/MPEG-4/H.264 decoding on the server/client. And we have to consider the storage subsystem in terms of IOPS, throughput, OS block size and so on, as this will impact the processor and the memory as well.
I have a couple of documents that could help.
Let's talk specific hardware to test. As mentioned earlier, we want to buy either Dell or HP, simply because they are common real world choices.
Incorporating some of what was said above, there seem to be 3 possible levels:
- Low end - Atom processor
- Mid end - i3/i5/i7 processor
- High end - Xeon processor
What about specific models?
I will suggest a few from Dell to discuss:
- Low end - OptiPlex 160 Tiny Desktop
- Mid end - OptiPlex 9010 Small Form Factor
- High end - PowerEdge R210 or 420 rack server
Any specific problems or limitations?
Storage performance will need separate articles to be covered. As an example, RAID 5 has higher throughput than RAID 6. What will the RAID controller memory be? Are we talking about internal storage, DAS, NAS, or SAN?
I have a couple of documents that I would like to share that will shed more light on this subject, but I don't know how to share them. Thanks

Scalability over time. How does the system react to adding cameras? (For testing purposes, assume all the same cameras, configuration and settings.) Select one or more appropriate performance metrics (e.g., CPU and memory usage) and plot those values as you increase the number of cameras.
A simplistic metric, but it should give at least an empirical profile reference to use and to understand the impact on a particular hardware/software setup.
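One way to build that profile: record the metric at each camera-count step and plot it. Below is a small sketch assuming Python with matplotlib; the numbers shown are placeholders, not measured results.

```python
# Minimal sketch: plot a scalability profile from manually recorded readings.
# All values below are placeholders; replace them with the CPU and memory
# figures logged at each camera-count step.
import matplotlib.pyplot as plt

cameras = [8, 16, 24, 32, 40]        # cameras per step (placeholder values)
cpu_pct = [11, 23, 37, 55, 81]       # measured server CPU % (placeholders)
mem_gb = [1.2, 1.9, 2.7, 3.6, 4.8]   # measured RAM in GB (placeholders)

fig, ax1 = plt.subplots()
ax1.plot(cameras, cpu_pct, marker="o", color="tab:blue", label="CPU %")
ax1.set_xlabel("Number of cameras")
ax1.set_ylabel("CPU utilization (%)")

ax2 = ax1.twinx()
ax2.plot(cameras, mem_gb, marker="s", color="tab:orange", label="RAM (GB)")
ax2.set_ylabel("Memory used (GB)")

fig.legend(loc="upper left")
fig.suptitle("VMS scalability profile (placeholder data)")
fig.savefig("scalability_profile.png", dpi=150)
```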
Joel

Impact of using Virtual Machine environment for the applications would be good to see.
Joel
Things like RAID controller performance (software RAID vs. mid-tier hardware; 512MB RAID RAM vs. 1GB; internal drives vs. external SAN/NAS) would be good to include from my perspective. I know the goal is to test VMS, but how it interacts with different hardware/software combinations would be helpful to know.
I would like to see MS Windows Server 2012 vs. 2008 vs. Win 8 vs. Win 7 Pro vs. Win 7 embedded OS.
For processors, I recommend using a dual-core, a quad-core such as an i3, and a high-end quad such as a Xeon. Then test one processor vs. two. Then I would recommend testing with similar AMD processors as well. Some software performs better in one processor environment than the other. Some hardware, for that matter.
For hardware, I concur with the recommendations for Dell PowerEdge equipment. HP would be my other recommendation as well.
Avigilon's 15TB NVR is, I believe, a rebranded Dell R520. I prefer the newer R720xd which allows for additional internal drives that can be dedicated to OS, but understand that many installations are not going to go with something that "beefy".
As are many others here, I'm interested in seeing how much bandwidth each VMS is capable of handling, total number of connected cameras and clients, etc. We are moving from 100% analog to about a 50/50 mix of analog-to-IP encoder and true IP megapixel cameras.
1. Start with a common entry-level E3-1230v3, 8GB RAM and a single 3TB 7200RPM SATA HDD
2. Simulate a 32-channel 1080p 3Mbps load with software motion detection recording
3. Log CPU, RAM and storage IOPS usage for server and client processes (a logging sketch follows below).
4. Compare!
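For step 3, a simple logger along these lines could capture the numbers to a CSV for later comparison. It is only a sketch: it records system-wide figures rather than per-process ones, and the filename and one-second sample rate are arbitrary choices, not part of the original proposal.

```python
# Minimal sketch: log whole-server CPU %, RAM % and disk IOPS to a CSV,
# one row per second, while the 32-channel load runs. Stop with Ctrl+C.
import csv
import time
import psutil

with open("vms_load_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "ram_pct", "write_iops", "read_iops"])
    prev = psutil.disk_io_counters()
    while True:
        time.sleep(1)
        cur = psutil.disk_io_counters()
        writer.writerow([
            round(time.time(), 1),
            psutil.cpu_percent(None),              # CPU since previous sample
            psutil.virtual_memory().percent,       # RAM in use
            cur.write_count - prev.write_count,    # disk writes in the last second
            cur.read_count - prev.read_count,      # disk reads in the last second
        ])
        f.flush()
        prev = cur
```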