Help Me Specify A VMS Server For Milestone

If you have 32 megapixel cameras, you need a more powerful workstation than server to display all the cameras on two screens. This is what Milestone told me. Is this true for all VMSes?

Sounds like a reasonable rule of thumb, simply because it takes a lot more processing power to decode / display 32 cameras than it does to write them to file.

This, of course, presumes you are displaying all 32 MP cameras simultaneously at full resolution each. However, this would be wasteful. You might as well have a secondary low res stream, which would not impact quality displayed but would massively reduce bandwidth and processing needs on the client PC.
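To put rough numbers on that (all figures here are illustrative assumptions, not measurements), compare the pixels per second a client must decode with and without a secondary stream:

```python
# Rough decode-load comparison: full-resolution vs. secondary (low-res)
# live streams. All figures are illustrative assumptions, not measurements.

FPS = 25                   # assumed live frame rate
FULL_RES = 1280 * 800      # assumed ~1 MP camera stream
SECONDARY = 640 * 480      # VGA secondary stream
CAMERAS = 32

def pixels_per_second(resolution, cameras, fps=FPS):
    """Total pixels the client must decode per second for a layout."""
    return resolution * cameras * fps

full = pixels_per_second(FULL_RES, CAMERAS)
secondary = pixels_per_second(SECONDARY, CAMERAS)

print(f"Full-res decode load:  {full / 1e6:,.0f} Mpixels/s")      # 819
print(f"Secondary-stream load: {secondary / 1e6:,.0f} Mpixels/s")  # 246
print(f"Reduction factor:      {full / secondary:.1f}x")           # 3.3x
```

The exact ratio depends on the resolutions chosen, but the point stands: the decode load scales with displayed pixels, so a VGA secondary stream cuts client CPU by roughly the resolution ratio.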

Thanks, John.

But I have a question.

What is the best CPU usage value for a server and client PC?

Best CPU usage?

Take a look at VMS Server Load Fundamentals Tested where we cover CPU and RAM usage for servers.


I read the fundamentals for VMS server CPU usage.

But when you install the system, what is the best CPU usage percentage for the server and client?

For example:

If you have a system with 60 cameras and 1 server, and when you set up the system you end up with 90% CPU, is that good practice, and is it recommended by server manufacturers like HP, Dell, or IBM?

"If you have a system with 60 cameras and 1 server, and when you set up the system you end up with 90% CPU"

Are we talking about an Atom or Xeon?

Also, are you doing motion detection on the server side?

And what is the bit rate?

It depends on the throughput the server needs to deal with, the 'power' of the CPU and the tasks the CPU is asked to do.


If you give all the parameters you talked about, like bit rate and type of camera, to Milestone, they will recommend server hardware based on a CPU usage of 70%.

Why did Milestone choose 70%? Based on what?

I assume Milestone is saying not to have CPU utilization more than 70%.

Ultimately, though, you need to pick a specific CPU. Which CPU model did they recommend to you?

Xeon E5-2630

The Xeon E5-2630 is a 6 core / 2.3GHz CPU.

What is your max expected throughput (i.e. incoming + outgoing bitrate)? Are you doing server side motion detection?

300 Mb/s and VMD

300 Mb/s and server side VMD?

Well, then that CPU selection makes sense to me. That's a pretty heavy load you are asking.


If I go up to 80%, do you think I will have problems with the system, or will the lifetime of the server be reduced?

I think it's prudent to stay under 70%.

I'd start by looking at steps to minimize load. For example:

  • Can you do camera-side motion detection for any of the cameras?
  • Can you / have you turned on I-frame-only server-side motion detection?
  • Does it really need to be 300 Mb/s for 32 cameras? Is it CBR, or what is driving such high bit rates?

  • Camera VMD is not as good as server VMD, because it is very simple.
  • The bit rate is calculated based on the worst case at night, when the bandwidth goes up to 3 Mb/s per camera, plus two clients.
  • If you want better VMD, it is better to analyze all the frames, to get more recording days.

For night time bandwidth spikes, consider a cap on VBR. Many cameras support this. What cameras are you using?

I doubt VMD will be notably better when analyzing all frames vs. just I-frames. From talking to Milestone, I believe they agree, and it will save you a ton on CPU.

The camera is the Axis Q1604.


If you look at the Avigilon server data sheet, it says it can handle 40 megabytes per second. At this rate, what is the CPU usage?

From Avigilon's system requirements page, it lists a minimum of "Intel Quad Core Xeon 2.0 GHz" to achieve "a recording capacity of 32 MB/s"

Note, though, this is without server side VMD, as Avigilon supports only camera side VMD.
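One thing to watch when comparing these figures: Avigilon quotes megabytes per second, while this thread has mostly used megabits. A quick conversion sketch:

```python
# Unit check: Avigilon quotes recording capacity in megabytes per second,
# while this thread mostly uses megabits. A simple conversion.

def megabytes_to_megabits(mB_per_s):
    """Convert MB/s (megabytes) to Mb/s (megabits): 1 byte = 8 bits."""
    return mB_per_s * 8

print(megabytes_to_megabits(32))  # spec page's 32 MB/s  -> 256 Mb/s
print(megabytes_to_megabits(40))  # data sheet's 40 MB/s -> 320 Mb/s
```

So both Avigilon figures are in the same ballpark as the 300 Mb/s discussed above, just expressed in different units.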

It is tricky to just say 32 MB/s and stop. They should specify VMD, number of clients, and CPU usage at this rate.

So we've tested the Q1604 extensively.


  • Turn on VBR with a cap. For Axis, this means entering a value into the CBR field, which acts as a cap. At night, the Q1604 is very noisy, and the extra bandwidth it consumes uncapped is wasted.
  • Why do you think Milestone server-side VMD is going to be much better than the Axis camera's? Also, check whether Milestone supports Axis VMD 2.1, which is very good VMD, is free, and eliminates running VMD on the server side.

I think you recommended capping at 6 Mb/s, and I did it at 3, so I halved the quality. And I think we have to compare the VMD of the server with the VMD of the camera to say which is better.

"presumes you are displaying all 32 MP cameras simultaneously at full resolution each"

How do you do that? :)


At some important sites, when you need to see details, you need to display all of them.

All of them on 1 Monitor ?


All of them on each separate Monitor?

16 per monitor

"16 per monitor"


Then how can you display each camera at full resolution on one monitor at THE SAME TIME? :)

I think you want to conclude that the screen is Full HD and you cannot display more than a 2 megapixel camera?


If your screen is 1920x1080,

then you will never display more. :)

Agree?

Matar, you can 'display' many megapixel cameras on a screen simultaneously. You just will not be able to 'see' the full resolution of each camera because the VMS client can only 'display' a fraction of each camera's full resolution because of the limitations on monitor resolution.

This is why I said previously that you should just use multiple streams and use lower resolution streams (VGA or QVGA) when displaying 16 or more on the same screen simultaneously. For example, even with VGA, displaying a 4 x 4 matrix requires a total of 4.9 MP, more than any Full HD monitor can show.
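Worked out, the arithmetic in that last sentence looks like this:

```python
# Pixel budget for the 4x4 example above: even at VGA per pane, the matrix
# carries more pixels than a Full HD monitor can physically display.

VGA = 640 * 480          # 307,200 px per pane
FULL_HD = 1920 * 1080    # 2,073,600 px on the monitor

matrix_pixels = 16 * VGA  # 4 x 4 layout
print(f"4x4 VGA matrix: {matrix_pixels / 1e6:.1f} MP")    # 4.9 MP
print(f"Full HD screen: {FULL_HD / 1e6:.1f} MP")          # 2.1 MP
print(f"Ratio:          {matrix_pixels / FULL_HD:.1f}x")  # 2.4x
```

In other words, the monitor is the bottleneck well before the cameras are: anything beyond VGA per pane in a 4x4 layout is decoded and then thrown away.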

This is true, but

if you have used the Milestone client, there are several options for displaying the cameras on the screen. Only one option displays all the cameras at full resolution and full frame rate; if you choose any of the others, you will notice a delay on the cameras. So if you want to display all of them at full frame rate and full resolution without delay, you need a powerful workstation to decode them, and if you do not have one, the workstation will get stuck every hour.


Why do you want to display 16 MP cameras at full resolution on a single screen?

Yes, it would take a lot of CPU power to do so.

No, you should not be doing this.

Use a secondary stream set to VGA and you should be ok.



I need to display them because I need to see details, and if I wanted VGA I would not have installed 1 megapixel cameras.


You use the full resolution when you are watching 1 camera at a time (or at most 4). You also use the full resolution when you are playing back recorded video or exporting it.

There is NO reason to use full resolution when watching 16 cameras in a 4 x 4 display. You are doing it wrong and causing unnecessary problems.

To sum up: When displaying 4 x 4, switch to a lower resolution / secondary stream. When displaying 1 camera by itself or playing back recorded video, watch it at full MP resolution.

Just for informational purposes: we are having the same type of issue. The workstation CPU is spiked at 100% most of the time. The recommended fix was to add a 2nd workstation and keep both CPUs at 70% or less usage normally; this allows for spikes when movement is high.

We were also asked to reduce the frame rate. I had not thought of reducing the displayed resolution, but I understand how that would work. It is just like displaying 16 analog cameras: you don't get the full resolution because of the monitor's capabilities.

I always have the same problem with workstations.

Have you found it to be pretty universal that a VMS client will automatically display the full resolution of a camera when in full, single view, even when configured for low resolution for "live" view?

How can the system (Client or Recorder) be configured to display 1920x1080 resolution only when in 1-camera LIVE view, or when the pane is full-screened from a LIVE multiview, but figure out how to send a 320x240 LIVE image when that camera's pane is being displayed in a multiview window?

Universal, no. Available in some/many VMSes, yes.

Reference: VMS Multistreaming Comparison

Someone owes Jon a consulting fee for that.

Mark, he didn't take any of my advice anyway :(


Can you do a comparison between camera VMD and software VMD?

Can you compare the CPU usage at night and during the day?

Matar, we haven't specifically tested camera and server VMD against each other. In theory, camera VMD should be more accurate because the manufacturer best knows how to "tune" it for their cameras. Server side VMD is applying the same settings regardless of the camera's specific performance, requiring more user configuration for best performance.

As far as CPU usage, we only have nighttime figures from our server load testing. For eight cameras, on a test machine with two quad core i7 processors and 16 GB RAM, CPU consumption was:

  • Camera VMD: 9%
  • I-Frame only VMD: 11%
  • All frame VMD: 56%

We were using 720p cameras, all 30 FPS.

Also, we tested Axis's standard VMD versus their VMD 2.1. We found VMD 2.1 to be far more accurate during the day than standard VMD, though it regularly missed activations in the dark. Their standard VMD, however, had frequent false triggers at night, which would waste recording space. Which one is best for you depends on what sort of scene you're looking at and which time of day is most important.

So if you have 3 Mb/s for each camera (as you said), you will have 96 Mb/s total bandwidth, so a server with a simple Xeon like the E3 series or E5 with 8 GB RAM will do the job (VMD or not, it would be OK). Then add a minimum of 4x SAS drives (300 GB at 15k rpm) and then the storage needed on SATA drives to keep the Archive DB; depending on retention, you will have to calculate this.
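That retention calculation can be sketched with quick arithmetic. The 30-day retention below is an assumption for illustration, and 96 Mb/s is the worst-case night rate, so VMD-triggered recording would need less:

```python
# Hedged retention calculation for the Archive DB sizing mentioned above.
# The 30-day retention is an assumed requirement; 96 Mb/s is the thread's
# worst-case (night) total, so VMD-triggered recording would need less.

total_mbps = 96          # 32 cams x 3 Mb/s
retention_days = 30      # assumption for illustration

bytes_per_day = total_mbps / 8 * 1e6 * 86_400   # Mb/s -> bytes per day
terabytes = bytes_per_day * retention_days / 1e12
print(f"Archive storage for {retention_days} days: {terabytes:.1f} TB")  # 31.1 TB
```

Swap in your own retention requirement and measured average bit rate; continuous worst-case recording is the upper bound.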

For the workstation, it's really different, as you won't be able to display 16 FullHD H.264 cameras at 25fps on one piece of hardware. In Milestone you can choose to send a low res stream when you are looking in "grid" mode (4x4 for example) and then full resolution when doing a full screen for one single camera. In this case I think you would be able to drive 2 screens on an i7 (perhaps a 4770) with 8 GB RAM. About the GPU: you don't need a powerful graphics card, as display is only CPU usage; just choose one with dual-screen support.

My 2 cents...Hope this helps ;)

"In Milestone you can choose to send a low res stream when you are looking in "grid" mode (4x4 for example) and then full resolution when doing a full screen for one single camera."

Philippe, can you clarify this statement? In the bolded portion, do you mean a view containing only that 1 camera (setting the pane permanently as a low res stream for the 1x1 view) or do you mean when full-screening one camera from a multi-view? And if you mean the latter, where is this setting applied? On the recording server or Smart Client?

Can you provide a screen shot of the setting interface that you use to do this to help me understand? Thanks.

I can't give you a 100% answer or a screenshot, as I'm not Milestone certified and I don't have a system set up, but I'm 99% sure that it can be done as a multiview option, which means that from a grid (4x4) you send all cams at low res, and then when you double-click on one camera it sends full resolution. I think it's on the server side that you have to configure it, as far as I remember, but I'm not 100% sure of that either; since all the streams go through Milestone, it should be there. You should verify all this with Milestone support. Sorry that I can't help you more.

The folks that have already commented are pretty much spot on. CPU is king for Client viewing with most VMSs...especially Milestone. Philippe has a nice summary from a performance perspective.

Where I work, I regularly run a variety of VMS Clients in my lab to see what system resources they consume. I also observe what effects the Client has on the server (i.e. things that cause a server to transcode streams).

At 32 x 1MP cams, you state that you are allowing 300 Mb/s for the Recording Server at night, which will need a minimum LiveDB size of approx. 350 GB with archiving every hour and expiring the Live data also at 1 hour. The Archive DB itself can last as long as it was sized for.
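The per-hour arithmetic behind a LiveDB sized like that can be sketched as follows; the headroom factor is my assumption, not a Milestone figure:

```python
# Per-hour arithmetic behind a LiveDB sized for 300 Mb/s with hourly
# archiving. The headroom factor is an assumption, not a Milestone figure.

ingest_mbps = 300
gb_per_hour = ingest_mbps / 8 * 3600 / 1000   # Mb/s -> GB of video per hour
headroom = 2.5                                 # assumed margin for the archive job

print(f"Raw video per hour: {gb_per_hour:.0f} GB")             # 135 GB
print(f"LiveDB with margin: {gb_per_hour * headroom:.0f} GB")  # ~338 GB
```

That is, one hour of 300 Mb/s video is about 135 GB; sizing the LiveDB at roughly 2.5x that covers the overlap while the hourly archive job runs, which lands near the 350 GB figure above.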

What was not mentioned is whether the question is about a standalone viewstation or whether the Client was going to run on the Recording Server itself. Doing the latter is not a recommended practice, due to the CPU resources needed to support decoding the streams (assuming H.264).

For CPU selection, the Passmark CPU benchmarks are a reasonable comparison between different CPUs, where the higher-valued ones will indeed process more client streams. The new Haswell CPUs of the i7 variety are nice to use.

This is true for single-CPU systems. For dual Xeon Client systems, the result is not double (the actual is approx. a 50-60% increase, depending on the VMS), so you have to compare the expense of the extra CPU against the performance increase.

Setting up a dual-stream capability as suggested is a very good idea. The recording stream can be H.264 and the viewing stream can be MJPEG, provided the local LAN can handle the extra bandwidth.

Dropping the viewing FPS down to 10-15 FPS is also a very good suggestion.

For Milestone, things that cause Server side transcoding are the 'Mobile Server' and changing the Quality of the Client streams to anything other than 'Full'.

"For Milestone, things that cause Server side transcoding are the 'Mobile Server' and changing the Quality of the Client streams to anything other than 'Full'."

Does that mean if you are viewing the 4x4 matrix on a client HD monitor that all 16 streams are sent to the client unaltered? (Assuming no multi-streaming)

Streams are sent in native resolution to the client side, which means that if a stream is configured on the camera as H.264, 1080p, 15fps, that is the same stream sent to the client monitor for live viewing, as you said, with no multistreaming.

The Mobile Server is different, as it has to transcode the stream to be sent to the mobile device. And be aware that it consumes additional CPU: even if on the 2014 version it seems better than earlier versions, still, the server will have more CPU consumption.

Philippe is correct.

Here is a statement right from the SmartClient manual....

"While using a reduced image quality helps limit bandwidth use, it will—due to the need for re-encoding images—use additional resources on the surveillance system server."

This has a very large effect on the server, so you must be careful when using it to save bandwidth.

On the Mobile Server: there is a recommendation to use a standalone server if you have more than 10 cams to serve to the web. I agree, based on my lab testing. The 2013 code versions would start a new web client connection by transcoding ALL defined cameras and views, which could literally peg the server CPU for several minutes (and lose frames as a result), until the user selected the view they wanted to use. After the selection, the transcoding would continue only for the cams that exist in the view.

Other VMSs do similar things regarding transcoding, so understanding which parts of the VMS will start a transcoding operation is very key to designing a good solution for your customers.