I work for NetworkOptix. We do VMS. We also claim that the primary bottleneck is storage, not the software (or server CPU). With our software, an i5 server can handle 128 popular cameras with no issues and accurate software MD. CPU load would be less than 50% while recording more than 1 Gbps.
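Back-of-the-envelope math on that claim, assuming a typical ~8 Mbit/s 1080p H.264 stream per camera (my illustrative figure, not a vendor spec):

```python
# Sanity-check the "128 cameras, >1 Gbps" claim with an assumed bitrate.
cameras = 128
bitrate_mbps = 8.0  # assumed per-camera 1080p H.264 stream, not a vendor figure

total_gbps = cameras * bitrate_mbps / 1000.0
print(total_gbps)             # 1.024 Gbps aggregate ingest

# Sustained write load the storage subsystem must absorb, in MB/s:
print(total_gbps * 1000 / 8)  # 128.0 MB/s
```

So the claim is internally consistent: at ~8 Mbit/s per camera, 128 cameras lands just over 1 Gbps, and the disks need to sustain roughly 128 MB/s of writes.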
Milestone 2x Server Capacities With Enhanced VMD
Milestone's first 2017 release includes performance enhancements for video motion detection, which they say can roughly double the number of cameras that can be supported on a single server.
In a discussion with Milestone, we share what is required to take advantage of this enhancement, and details on Milestone's claim of "the world's fastest recording server".
Definitely agree that the spindle count/speed is the bottleneck right now. There may be servers capable of 3.1 Gbps, I just don't know what they are. Perhaps all SSDs? Either way, the price tag on such a server is likely to be hefty.
Milestone motion detection does work very well though. I am excited that they're able to get the CPU load down as that has classically been a challenge with Milestone server based motion detection. Again, their motion detection processor load for server based motion detection was already pretty good if you have the processor to support it.
I'm interested in the benchmark test results that validate their claims to be the "fastest recording server".
About all I can find at the moment is the following. I'm travelling today but when I find more I'll post it here.
"Tested with these system specifications: Intel i7-6700K, Windows 10, Intel HD Graphics 530, 16GB RAM, video feed: H.264 1080p, VMD on key frames and 12% of pixels, 30 FPS, 2.5 Mbit/s"
It's not often a customer has a storage subsystem capable of maxing out the speed of the database. And under most circumstances, the CPU is not a bottleneck except when the system is perhaps under-specced. But the real value in hardware accelerated motion detection IMO (for systems where server side MD is required/desired) is not that you can put more cameras on one server but that you can reduce the cost a bit by speccing a less expensive CPU. In the scheme of things this might not be a significant difference based on the overall project costs but it's something anyway.
The other potential benefit is that if you are "transcoding" video on the server in order to send a lower quality image to clients, this would be massively improved by hardware acceleration. I say potential because I'm not positive that transcoding is also taking advantage of QSV. I'll check up on that as well.
My personal opinion is that 3.1Gbps is indeed bragging rights. In practice, most hardware will fall over before that. But I suppose it's worth bragging while it lasts :)
With the trend towards cloud-based (private or public) hosting of VMS platforms, using hardware acceleration becomes less attractive as it's not a trivial exercise to allocate GPU processing to virtual machines.
I'm more interested in how VMS providers are going to adapt to automated horizontal scalability for their applications in a highly dynamic scenario.
For example, if the majority of cameras in a network see bursts of traffic at specific times of day and that places increased demand on CPU to analyze and process, then scale out application instances at those times, and then scale them back when load subsides. My cloud provider charges me by the minute or by the IOP - I'd happily pay for extra server capacity for a few hours a day rather than have all that expensive tin lying around.
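The by-the-minute cost argument is easy to illustrate with hypothetical pricing (the rate below is my assumption, not any provider's actual price):

```python
# Hypothetical per-hour price for one extra VMS worker instance,
# to illustrate burst scaling vs. always-on capacity.
rate_per_hour = 0.40  # assumed cloud instance price, illustrative only

always_on = 24 * 30 * rate_per_hour  # extra instance running all month
burst     = 4 * 30 * rate_per_hour   # only during 4 peak hours per day

print(always_on)  # 288.0 per month
print(burst)      # 48.0 per month
```

At these assumed rates, paying only for the peak hours is roughly a sixth of the cost of keeping the equivalent tin running around the clock, which is the commenter's point.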
VMS providers need to start planning how to decouple their software from bare-metal hardware.
The actual test write-up is posted on the Milestone site here:
The server config was built for speed:
The test setup was:
That isn't the test configuration that was used for the hardware accelerated VMD because the E5 Xeon processors do not support Quick Sync which is what our software is using for the hardware acceleration. That looks like testing from the reference architecture that we did with Dell though which is separate, but equally interesting.
I think your link was supposed to go here: Milestone / Dell Reference Architecture. Right now, it is linking back to an unknown page in IPVM.
In regards to using camera based VMD versus server based VMD that this article talks about, there are pros and cons to both. The main pro of camera based is that the resource needs on the server are drastically reduced. However, I believe the pros of server based outweigh the cons:
- Some cameras are incredibly terrible at doing motion detection. If you use server based motion detection, it doesn't matter what the camera's capabilities are, because you know what you will get server side. Obviously, the server side motion detection could be poor as well, but as UI1 pointed out above, the Milestone motion detection works very well. Our motion detection is also optimized (like Axxon and others) when using the default settings, because it only detects on the keyframe and only analyzes 12% of the pixels.
- In most cases, camera based motion detection has to be configured by visiting the camera's web interface. I know some VMSs can do this client side for some cameras, but I'm not aware of any that can do it for all cameras (I could definitely be wrong on this, though).
- When using Milestone XProtect Expert and Corporate, the Smart Search functionality (searching on motion after the fact) relies on server side motion. When we are doing the motion detection we are saving the metadata related to the motion. With that functionality, instead of searching for motion on the fly (like most competitors do and we do in the Enterprise and below software), we can quickly search the metadata and present thumbnails showing the results. You can see an example of it at about the 4:00 mark in this video: Milestone Smart Search
That's my thoughts at least. I'm sure there could be a lot of debating around this and I could certainly be missing some other pros for camera side motion detection. In my nearly 7 years at Milestone though, the only time I have come across a customer using camera based motion detection is when they have a camera on a cellular style connection where they have to pay for the bandwidth. In that type of scenario they set the camera up to only stream when the camera detects motion. I can count the times I have run across that on one hand though.
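The keyframe-plus-12%-of-pixels approach described above can be sketched roughly like this. The sampling, thresholds, and function name are all illustrative on my part, not Milestone's actual algorithm:

```python
import random

def motion_on_keyframe(prev, curr, sample_frac=0.12,
                       diff_thresh=25, trigger_frac=0.02, seed=0):
    """Toy sketch of keyframe-only VMD that samples ~12% of pixels.

    prev/curr are flat lists of grayscale pixel values (0-255).
    All thresholds here are illustrative, not Milestone's parameters.
    """
    rng = random.Random(seed)
    n = len(curr)
    # Analyze only a random ~12% subset of the pixel positions.
    idx = rng.sample(range(n), max(1, int(n * sample_frac)))
    changed = sum(1 for i in idx if abs(curr[i] - prev[i]) > diff_thresh)
    # Report motion if enough of the sampled pixels changed significantly.
    return changed / len(idx) >= trigger_frac

# Static scene vs. a scene where one region brightened sharply:
still = [100] * 10000
moved = [100] * 10000
for i in range(2000, 4000):
    moved[i] = 200

print(motion_on_keyframe(still, still))  # False
print(motion_on_keyframe(still, moved))  # True
```

The point of the subsampling is exactly the CPU saving discussed in this thread: only the keyframes are decoded, and only a fraction of each keyframe is compared.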
Thanks for the URL fix. I apparently did a bad 'paste' of "https://www.milestonesys.com/solution-finder/test-3/Dell-R730XD/"
I was matching up the 3.1 Gbps result with the test setup I was familiar with, which is based on the 2016 R2 code level.
Is there a write-up available describing the 3.1 Gbps result on the 2017 R1 code base?
I disagree. For many cameras it is possible to configure the embedded camera VMD from the client. I don't see why a camera's VMD should be worse, especially considering that it works with the original uncompressed video directly from the CMOS sensor. And it is possible to use the camera's metadata for smart search, but again, not every camera transmits metadata. So all of the cons relate to specific products.
And another problem is that server platforms (Xeon) have no embedded GPU with Quick Sync support.
But this feature can be useful for some specialized real-time analytics like face detection or license plate recognition, where we need to decompress all frames on the server. For example, we use the GPU for fire/smoke detection, because it is neural-network-based analytics and one keyframe with 12% of the pixels is not enough for a good result. But for VMD it is really not necessary.
I have yet to run into any CPU bottlenecks using Milestone, but I am seeing numerous issues with media overflow. I understand this is an issue with disk speeds, but I'm only running 50-60 cameras on dual E5-2670s with write-intensive SSDs. Can anyone share what they are doing to optimize their disk speeds and configurations?
What version of our software are you running (e.g., Corporate 2016 R2, Professional 2014, etc.)?
Here is a quick checklist:
- Make sure you're either running version 10.2B, or at least have applied this hotfix if running 10.2a. Alternatively, if you're running 10.2a, you might consider upgrading to 2017 R1 which includes a couple more improvements from 10.2B.
- Do not archive to the same drive as the live storage location.
- If using antivirus, always add your live/archive drives to the exception list
- If archiving to a NAS, make sure the connection is responsive and reliable. If archiving to EMC Isilon, bump up the number of archive threads from 1 to 3 or 4.
- Consider bumping up the size of the queue from 50 frames to 100-200. This can be useful if the storage performance is only momentarily slow. A larger queue allows us to buffer more frames to memory in the event the storage cannot receive the data fast enough.
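The larger-queue suggestion in the last item can be sketched as a bounded in-memory buffer: frames queue up while the disk catches up, and overflow (dropped frames) only happens once the queue is full. The sizes and drain pattern below are illustrative, not Milestone's actual implementation:

```python
from collections import deque

class FrameBuffer:
    """Toy sketch of the write queue described above. Frames buffer in
    memory while the disk catches up; once the queue is full, frames
    overflow (drop). Sizes here are illustrative, not Milestone's."""

    def __init__(self, max_frames):
        self.q = deque()
        self.max_frames = max_frames
        self.dropped = 0

    def enqueue(self, frame):
        if len(self.q) >= self.max_frames:
            self.dropped += 1          # media overflow: storage too slow
        else:
            self.q.append(frame)

    def drain(self, n):                # disk wrote up to n frames this tick
        for _ in range(min(n, len(self.q))):
            self.q.popleft()

# Burst of 300 frames against a disk that momentarily lags behind:
small, large = FrameBuffer(50), FrameBuffer(200)
for buf in (small, large):
    for f in range(300):
        buf.enqueue(f)
        if f % 3 == 0:                 # disk only drains every third frame
            buf.drain(2)

print(small.dropped, large.dropped)    # the small queue overflows, the large one absorbs the burst
```

Same incoming load, same disk: only the queue size differs, which is why raising it helps when the storage is slow momentarily rather than chronically.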
If the issue persists, then you may be looking at a storage performance limitation, though it still could be software. First thing is to try to rule the storage out. I typically use Windows Performance Monitor to check the following counters:
If the Idle time is really low consistently, could be the storage.
If the avg disk sec/write is higher than 15-20ms (.020 seconds), could be storage.
If the avg disk write queue length is consistently higher than 1, and especially spiking to 5-10+, could be storage.
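Those three rules of thumb are easy to wrap in a quick triage check. The function name and the 10% idle cutoff are my own choices (the post just says "really low"); the other thresholds follow the post. Sampling the counters themselves (e.g. via PowerShell's Get-Counter or Performance Monitor) is left out:

```python
def storage_suspect(idle_pct, avg_write_s, write_queue_len):
    """Apply the rule-of-thumb thresholds to sampled PerfMon counters.

    idle_pct:        PhysicalDisk "% Idle Time"
    avg_write_s:     "Avg. Disk sec/Write", in seconds
    write_queue_len: "Avg. Disk Write Queue Length"
    Returns a list of reasons to suspect the storage (empty = looks OK).
    """
    reasons = []
    if idle_pct < 10:            # "really low" idle time; 10% is my assumed cutoff
        reasons.append("low idle time")
    if avg_write_s > 0.020:      # writes slower than ~20 ms
        reasons.append("high write latency")
    if write_queue_len > 1:      # queue backing up; 5-10+ is a red flag
        reasons.append("write queue backlog")
    return reasons

print(storage_suspect(idle_pct=85, avg_write_s=0.004, write_queue_len=0.3))  # []
print(storage_suspect(idle_pct=5, avg_write_s=0.035, write_queue_len=6))
```

An empty result doesn't prove the storage is innocent, of course; it just means none of these counters point at it, in which case software is the likelier suspect.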
I've worked with your Milestone reseller before and they are quite good in my experience. If you haven't brought this up with them already, I highly recommend it. We will help them identify whether we're looking at a storage performance issue or whether it could be software instead.
I neglected to mention your #2 ref. I can confirm that doing this DOES reduce the performance.
A side note: For the Professional and lower code base, this is actually a recommended setup because the archive process does not have to move the data to the archive location like the Corp/Expert will do.
Bingo. In XProtect Professional the archive process is a standard file move/copy operation so when the archive path shares the same drive letter, Windows simply changes the pointer to the files rather than actually moving bytes around.
In XProtect Corporate, the media database architecture is very different. Each live/archive path is a "database" and when archiving data, we are reading from one database and writing to another. It is effectively a copy operation, but at the application layer.
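The rename-vs-copy distinction is visible in any language's filesystem API. A small sketch (file names and sizes are made up for illustration): on the same volume, a move is just a directory-entry update, while the Corporate-style archive must actually read and rewrite the bytes:

```python
import os
import shutil
import tempfile

# Illustrative stand-in directories for a live path and an archive path.
root = tempfile.mkdtemp()
live = os.path.join(root, "live")
archive = os.path.join(root, "archive")
os.makedirs(live)
os.makedirs(archive)

src = os.path.join(live, "chunk.blk")
with open(src, "wb") as f:
    f.write(b"\x00" * 1024)  # stand-in for a media database block

# Professional-style archive on the same drive letter: a rename, so the
# filesystem just updates pointers -- no video bytes are rewritten.
os.replace(src, os.path.join(archive, "chunk.blk"))

# Corporate-style archive: an application-level read from one database
# and write to another, i.e. a real copy of the data.
shutil.copy2(os.path.join(archive, "chunk.blk"),
             os.path.join(archive, "chunk_copy.blk"))

print(sorted(os.listdir(archive)))  # ['chunk.blk', 'chunk_copy.blk']
print(os.path.exists(src))          # False
```

That difference in I/O cost is why archiving to the same drive is cheap on Professional but a genuine read-plus-write load on Corporate/Expert.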
Dave, on the Milestone support page there is a Knowledgebase link. Go there and enter 'overflow' in the search.
The KB1605 is there and I can confirm that this does help in many instances.
One other reason you will get these is if your 'LiveDB' space is too full.
Assuming that you ARE running the archive process, be sure you have it scheduled frequently enough so you don't fill the LiveDB space.
If you do have a good schedule, perhaps the process is too slow and KB1783 will help with that.
You can also observe in Windows Resource Monitor the state of the system when the archive process is happening. You will want to watch the CPU, Memory and Disk Queue length of your storage location. A continued high value is not good.