Good write-up! What is the process to upgrade from one version of XProtect to another? Is it simply the application of a new license key, after which the upgraded capabilities are unlocked? Or does the actual software need to be re-installed?
You can move between Essential+, Express+, Professional+, Expert, and Corporate by just applying the appropriate license which will then unlock the features of that particular product. There is no need to reinstall the software unless you are also moving to a newer version at the same time (e.g. moving from Professional+ 2018 R2 to Corporate 2019 R1).
But you cannot move from Essential, Express, or Professional to any of the '+' products, correct? Is there a timeline for the non-plus variants to be discontinued? I liked the old non-plus versions before there was a distinction between the two, but it is a bit confusing now. It's entirely possible that there are customers looking to save the $20/channel on Professional vs. Professional+ while not realizing they will pay for it down the road.
"Reliance on Third-Party Integrations - Milestone does not offer their own video analytics or access control system, instead of relying on partner integrations, which complicates sales and support"
VMS companies can't be good at EVERYTHING without also essentially being a one-off for every single use case, and there are specific companies that are very good at analytics. That said, I expect the Briefcam acquisition by Canon to one day show its face as an analytics add-on for Milestone.
Is IPVM actually saying that every VMS company should also have an access control offering?
Let's let analytics companies do that, and VMS companies do what they do, and push them to integrate tightly together.
The fact that Milestone has many third-party integrations is positive. The fact that they do not have any 'native' 'unified' offering is negative. Which one has a greater impact depends on the user / application.
I bet there are opportunities where Genetec beats Milestone because they are 'unified' and others where Milestone beats Genetec because they allow the customer to use various combinations of systems. Agree/disagree?
@John: that seems to be the case, when I speak to end-users.
A noticeable thing is that both of these top-notch systems (Genetec/Milestone) will work with Suprema Access Control readers (Bio/RFID), however:
- Suprema with Genetec will require the use of an ACU (Wiegand: VertX / OSDP: Mercury)
= Only a centralized architecture is possible, used with the Genetec Certified Unified system (Cloud Link)
- Suprema with Milestone does not require the use of an ACU (Suprema devices can be directly connected to the door, or to a Secure I/O 2 module in the case of a Secure Door)
= Distributed and centralized architectures are both possible (and can be mixed as well), providing more agility in the design/setup.
Which I believe mirrors the difference in user perception when comparing Milestone's and Genetec's respective advantages.
Then comes AxxonSoft, and there you have my trio of top-notch video partners that I like to promote and work with together in my sales regions (EU/Africa). Each of them has specific target customers and specific strengths.
Could you test and report on Avigilon analytic cameras and their functionality in Milestone XProtect? In Device Pack 10.3a, Avigilon analytics are now supported on H4A cameras. I would be interested in how events are set up and received, the searchability of classified object motion data, and anything else you think prudent for the community to know. Thanks in advance.
Milestone has two basic code levels known as "C-code" and "E-code". The "E-Code" is/was all the versions without a '+' from Enterprise on down.
The '+' versions are all on the "C-code" base just like Expert and Corporate. The different licenses enable/disable the feature set as you go from lowest to highest. A key one being when the Nvidia GPU can be used by the Recording Server which is only on Expert/Corporate versions.
The "E-code" handled the camera data differently and that may be one of the reasons you need a helper application to migrate.
It depends on which aspect of the software and which hardware acceleration. Currently, the Smart Client, Mobile Server, and Recording Server all support hardware acceleration. However, there are two types of hardware acceleration: Intel Quick Sync (using the Intel GPU that is part of most i3, i5, and i7 processors, as well as a few lower-end Xeon processors) and Nvidia GPUs.
The Smart Client and Mobile Server support both types of hardware acceleration across all products. The Recording Server supports Intel Quick Sync across C-code products (the "+" products, Expert, and Corporate). However, Nvidia support on the Recording Server only works on Expert and Corporate.
Can Milestone now record two streams, i.e. a low-resolution stream at a low frame rate plus high-res on motion?
Previously, I do not believe this was possible: you could reduce the frame rate, but you could not actually pick which stream to switch to. You had to define one recording stream and you were stuck with it.
On our test PC, with SmartClient hardware acceleration enabled, the application automatically load balanced video decoding between the NVIDIA and Intel QuickSync GPUs. We noted an ~40% decrease in CPU resource usage with hardware acceleration enabled. The NVIDIA GPU was utilized even though the viewing monitor was not plugged into the NVIDIA Card's HDMI output:
Mostly because it doesn't make any sense to me.
Nvidia has nvdec and Intel has Quicksync. I'm going to use the term GPU, even though AFAIK both quicksync and nvdec are specialized circuitry outside the general purpose GPU cores (used for vertex and pixel shaders).
Certainly the Nvidia GPU receives the compressed video and then decodes it into its on-board VRAM. The problem is that the Intel GPU doesn't have direct access to this VRAM. To get to it (which is needed because the Intel GPU drives the display), you either have to do a GPU-to-GPU copy, which is non-trivial, and I am not sure an Nvidia GPU will dance with an Intel one. The other option is to go through system RAM. This means you send the compressed video to the Nvidia card, it decodes it, then you copy the full decoded frame back from the Nvidia GPU to RAM, and then from RAM to the Intel GPU (there might be some hack that allows the iGPU to directly access system RAM, as is usually done for low-end PCs where performance is not a priority).
It makes even less sense to me if the Intel GPU is at 9% load. At that point, doing "load balancing" is probably detrimental to performance and serves no real purpose (since the 1050 is at 6%, the Intel GPU alone could easily do all the work, and there'd be no need to copy decoded frames around).
If you use task manager on a Windows 10 machine, you can actually see if the GPU is busy doing video-decode or something else, but it's not shown on the screenshot.
To determine if this "load balancing" is making any sense, you ought to do a benchmark test where you push the system to its limits: first without the Nvidia card installed, and then again with the card added.
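A minimal harness for that kind of before/after comparison could look like the sketch below. Here `decode_frame` is a hypothetical stand-in for whichever decode path is under test, not a Milestone API:

```python
import time

def measure_fps(decode_frame, duration: float = 5.0) -> float:
    """Call decode_frame in a tight loop for `duration` seconds; return calls/sec."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        decode_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

# Placeholder workload standing in for a real decode call; run the same
# measurement once per hardware configuration (e.g. with and without the
# Nvidia card installed) and compare the numbers.
fps = measure_fps(lambda: sum(range(1000)), duration=0.5)
print(f"{fps:.0f} iterations/sec")
```

Comparing the realized number for each configuration shows whether the extra GPU actually raises throughput, rather than just lighting up a load graph.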
@Morten. I agree with your summary. Milestone is focused on leveraging trending technologies (and buzzwords) to boost marketing efforts regardless of the effectiveness of their solution. This is what you do when your product is “long in the tooth”.
It's just another type of sprinkles on top of a turd. It'll distract you from the bitter taste for a little while, but the reality is that underneath the mountain of condiments, it's the same old dog-shit.
But make no mistake, they're not alone in pursuing this strategy. There's this notion that software gets better with age. That's obviously only something people selling outdated software would say (keep my words in mind next time you hear "tried and tested"). If it was true, Windows XP should be the preferred OS (or windows 3, or DOS). Ohhh.. I can already hear the sentimental sysops claiming that DOS is better than OSX or Windows 10... but is it really.. sure, the initial release is always shit, and then you get some gradual improvements, but at some point new tech and architecture is needed, and that's when you get a new generation. Imagine if Windows 10 was still based on DOS.
Almost comically, there's a player on the market that is still peddling a recorder based on a platform developed around the time when Milestone called their product "Surveillance Lite and XXV" (good times!). It's covered in a mountain of condiments, but once your spork slices through it and hits that warm, soft, middle, you regret bringing it home. Unfortunately, you paid $$$ for it, so you just have to live with it.
That said, I am genuinely interested in understanding if the touted load-balancing is offering any performance benefits. It's impossible to say from the article - it just says it's doing it, but there's no further examination of the value/effectiveness of it. My own tests showed that if I used Quicksync to do decoding, but wanted to use my nvidia GPU as the output (which I think most people would), then the CPU savings in decoding was just getting spent on copying the frames from the iGPU to the nvidia card.
There's a big difference between doing server side video motion detection using a multi-GPU setup (if you do, get some cheap cameras!), and then rendering video on a client.
Here's why: say you want to compute how many pixels have changed from one frame to the next (a typical ingredient in old-school motion detection). You push the compressed H.264 to the GPU, which decodes it into VRAM. A GPU typically has hundreds of cores that run in parallel, so not only do you get a fixed decoding pipeline, you also get to use the general-purpose GPU cores to compute the difference. The result of that computation is a number (or an array of numbers). The decoded frames are never pushed to a screen, so it doesn't matter if GPU A or B is doing the computation.
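The pixel-difference ingredient described above can be sketched (on the CPU, with NumPy) like this; the synthetic frames, threshold, and resolution are illustrative, not anyone's shipping algorithm:

```python
import numpy as np

def changed_fraction(prev: np.ndarray, curr: np.ndarray, threshold: int = 25) -> float:
    """Fraction of pixels whose 8-bit luma value changed by more than `threshold`."""
    # Widen to int16 first so the subtraction of uint8 values cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(np.count_nonzero(diff > threshold)) / diff.size

# Two synthetic 1080p grayscale "frames": the second gains a bright 100x100 block.
prev = np.zeros((1080, 1920), dtype=np.uint8)
curr = prev.copy()
curr[100:200, 100:200] = 255

print(changed_fraction(prev, curr))  # 10000 / 2073600 ≈ 0.0048
```

On a GPU the same subtract/threshold/count runs in parallel over the decoded surface, and only the final number ever leaves the card.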
When you are drawing video on the screen, you want to keep the pipeline on the GPU that is driving the output. If you send the compressed video to card A but drive the display with card B, then the decompressed video needs to move from card A to B for display. The process of copying the video between two (different) GPUs, especially when B is not under load, just doesn't make sense to me.
My theory is that the "load balancing" is actually making the system slower than if you removed the 1050 from the system. If you're running two screens, then it makes sense to send the H.264 to the GPU connected to the display.
My point being: you're probably not going to get better performance on a single screen by adding another GPU (I say probably, because it hasn't been tested), but on the other hand, I don't think it's something people do.
I don't have a highly technical answer that explains exactly how things work, but I am looking to see if I can get that for you. However, I can provide some better examples of the benefits of our hardware acceleration in the Smart Client. First, though, I want to mention some things:
- When using a computer that has both Intel Quick Sync support and one or more Nvidia GPUs, we recommend connecting the monitor(s) to one of the Nvidia GPUs. If the monitor is connected to the onboard Intel GPU, then unnecessary CPU cycles will be used.
- If the computer has a CPU with Quick Sync support and also has one or more Nvidia GPUs, and the monitor(s) are connected to the Nvidia GPU, then Windows will disable the Intel GPU because it assumes you don't need it. Workstation-class computers (such as HP's Z series machines) have an option in the BIOS to force-enable the Intel GPU. Consumer-class and gaming-class machines typically do not have that option.
Now, on to some test results from this morning. I ran this on two different machines. My testing was loading up 1080p, 15fps, H.264 cameras until CPU usage started getting too high. Both machines are running a single monitor. You will see the drastic benefit of hardware acceleration.
Machine #1 (this system is 7-8 years old, with the exception of the GPU) - Intel i7-2600 - 8GB RAM - Nvidia Quadro P1000 w/ 4GB - Windows 10 1903
The first screenshot is with hardware acceleration enabled and the second is with it disabled. This machine does not have the option to force-enable Intel Quick Sync, so all of the decoding with hardware acceleration enabled was done on the Quadro P1000. I have our diagnostic overlay enabled so you can see which cameras are being decoded through the Nvidia card (shown as Nvidia) and which are being decoded by the CPU (shown as Off). You will see that with hardware acceleration off, I was able to pull up 16 cameras before the CPU was approaching 80%. With hardware acceleration on, I was able to pull up 49 cameras with the CPU running just under 70% and the Nvidia GPU running at 90%.
Machine #2 (my Lenovo P51 laptop) - Intel i7-7700HQ (has Intel Quick Sync using Intel HD 630 graphics) - 16GB RAM - Nvidia Quadro M1200 - Windows 10 1809
The first screenshot is with hardware acceleration enabled and the second is with it disabled. Since this machine has both Quick Sync and Nvidia support, I am using Task Manager to show Quick Sync and GPU-Z to show Nvidia. I have our diagnostic overlay enabled so you can see which cameras are being decoded through the Nvidia card (shown as Nvidia), which through Intel Quick Sync (shown as Intel), and which are being decoded by the CPU (shown as Off). You will see that with hardware acceleration off, I was able to pull up 16 cameras before the CPU was around 90%. With hardware acceleration on, I was able to pull up 49 cameras with the CPU running around 75% and the Nvidia GPU running at 80%.
"When using a computer that has both Intel Quick Sync support and one or more Nvidia GPUs, we recommend connecting the monitor(s) to one of the Nvidia GPUs. If the monitor is connected to the onboard Intel GPU, then unnecessary CPU cycles will be used."
That aligns with my experience, and with what makes sense. What threw me off was that in the article, the monitor was NOT connected to the Nvidia GPU. In that case, the "load balancer" might actually lead to worse performance (which is probably counter-intuitive).
My guess is that Milestone is using Microsoft DXVA2 to handle decoding. You pipe in H.264 and end up with raw video data stored in "Surfaces". If these surfaces need to migrate from quicksync to nvidia, then DXVA will probably do that via the CPU.
In SOME cases, it might be faster to do decoding on quicksync and copy the surfaces across. Copying surfaces is a fixed cost, but decoding is variable. E.g. consider a highly compressed 4K stream. Depending on the activity, the decoding can be extremely fast, but the memory that needs to be copied across is always 4K - regardless of the compressed quality.
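As a back-of-envelope sketch of that fixed copy cost (assuming decoded frames land in NV12, i.e. 1.5 bytes per pixel; the numbers are illustrative):

```python
# Back-of-envelope: cost of moving one decoded 4K frame between GPUs.
width, height = 3840, 2160
bytes_per_pixel = 1.5          # NV12: full-res 8-bit luma plane + half-res chroma
fps = 30

frame_bytes = int(width * height * bytes_per_pixel)
frame_mb = frame_bytes / 2**20
bandwidth_mb_s = frame_bytes * fps / 2**20

print(f"{frame_mb:.1f} MB per frame, {bandwidth_mb_s:.0f} MB/s at {fps} fps")
# → 11.9 MB per frame, 356 MB/s at 30 fps
```

That per-frame size is the same whether the compressed stream was 2 Mbit/s or 20 Mbit/s, which is why the copy is a fixed cost while the decode cost varies with content.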
We had a lot of fun with Quick Sync being disabled too. On Windows 7, I had to have a virtual second monitor attached to make Quick Sync visible in Windows alongside my 670 GPU.
Basically, GPU/HW decoding is the way to do it, it's more power efficient and it's how things were intended to be used. But people tend to think of the GPU as this magical device that just decodes video at no cost. They may be getting worse performance by adding a discrete 1050 card, but be so distracted by the GPU load graph that they don't see that the actual realized throughput on the screen is lower than before.
But thanks for the research and tests, much appreciated!!
Another data point for video-decode hardware acceleration is that the number of decoding chips on the GPU is very important. We found that the Nvidia Quadro RTX 4000 card has 2 decoding chips. Most Nvidia GPUs have only 1 decoding chip. I think there is even a Quadro RTX 8000 card with 3 decoding chips.
We use two Quadro RTX 4000 cards (no SLI) in an HP Z6 G4 workstation and can decode up to 120 HD-resolution cameras at 30 frames per second. Each graphics card has two decoding chips.
We did field dual Nvidia GTX 1080 cards (a single decoding chip each), and we could support fewer than 50 HD-resolution cameras at 30 fps.
If you need the decoding power, consider not only the number of CUDA cores but the number of onboard decoding chips as well.
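The rough cameras-per-chip arithmetic behind those figures (assuming the decode engines are the bottleneck, one chip per GTX 1080, and treating "fewer than 50" as roughly 50):

```python
# Rough cameras-per-decode-chip from the figures quoted above.
setups = {
    "2x Quadro RTX 4000 (2 decode chips each)": (120, 4),  # cameras, total chips
    "2x GTX 1080 (1 decode chip each)": (50, 2),
}
for name, (cameras, chips) in setups.items():
    print(f"{name}: ~{cameras / chips:.0f} HD cameras per decode chip")
```

Per-engine throughput is in the same ballpark (~25-30 cameras), so the RTX setup's advantage comes mostly from having twice as many decode engines.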