Sounds like a reasonable rule of thumb, simply because it takes a lot more processing power to decode / display 32 cameras than it does to write them to file.
This, of course, presumes you are displaying all 32 cameras simultaneously at full resolution. However, that would be wasteful. You might as well use a secondary low-res stream, which would not impact the quality displayed but would massively reduce bandwidth and processing needs on the client PC.
At some important sites, when you need to see details, you do need to display all of them.
Someone owes Jon a consulting fee for that.
Can you do a comparison between camera VMD and software VMD?
Can you compare the CPU usage at night versus during the day?
So if you have 3Mbps for each camera (as you said), you will have 96Mbps total bandwidth, so a server with a simple Xeon like the E3 or E5 series with 8GB RAM will do the job (VMD or not, it would be OK). Then add a minimum of 4x SAS drives (300GB at 15k rpm), plus the SATA storage needed to keep the Archive DB; depending on retention, you will have to calculate this.
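The sizing arithmetic above can be sketched as a quick calculation. The per-camera bitrate and camera count come from the thread; the retention period is a made-up placeholder, since the commenter leaves it to the reader:

```python
# Quick VMS sizing sketch based on the figures in the thread:
# 32 cameras at 3 Mbps each, continuous recording assumed.

CAMERAS = 32
MBPS_PER_CAMERA = 3

total_mbps = CAMERAS * MBPS_PER_CAMERA                 # aggregate bandwidth
bytes_per_day = total_mbps / 8 * 1_000_000 * 86_400    # bits -> bytes, per day

retention_days = 30   # assumed retention period, for illustration only
archive_gb = bytes_per_day * retention_days / 1e9

print(f"Aggregate bandwidth: {total_mbps} Mbps")
print(f"Storage per day:     {bytes_per_day / 1e9:.0f} GB")
print(f"Archive, {retention_days} days:    {archive_gb / 1000:.1f} TB")
```

At 96Mbps aggregate this works out to roughly 1TB of footage per day, which is why the Archive DB retention figure dominates the storage budget.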
For the workstation, it's really different, as you won't be able to display 16x Full HD H.264 cameras at 25fps on one piece of hardware. In Milestone you can choose to send a low-res stream when you are looking in "grid" mode (4x4, for example) and then the full-resolution stream when going full screen on a single camera. In this case I think you would be able to drive 2x screens on an i7 (perhaps a 4770) with 8GB RAM. As for the GPU, a powerful graphics card is not needed, since Milestone display is CPU-only; just choose one with dual-screen output.
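A rough way to see why the low-res grid stream helps: client decode load scales approximately with decoded pixels per second (a simplification; real H.264 decode cost also depends on bitrate and profile). The low-res resolution below is an assumed figure, not something specified in the thread:

```python
# Rough decode-load comparison: 16 Full HD streams vs. 16 low-res
# grid streams. Pixel throughput is a crude proxy for CPU decode cost.

FPS = 25
full_hd = 1920 * 1080    # pixels per frame, full-resolution stream
grid_res = 480 * 270     # assumed low-res "grid" stream (1/16 the area)

full_load = 16 * full_hd * FPS    # all 16 cams decoded at full resolution
grid_load = 16 * grid_res * FPS   # 4x4 grid view using low-res streams

print(f"Full-res decode: {full_load / 1e6:.0f} Mpixels/s")
print(f"Low-res grid:    {grid_load / 1e6:.0f} Mpixels/s")
print(f"Reduction:       {full_load / grid_load:.0f}x")
```

With these assumed numbers the grid view decodes 16x fewer pixels, which is the headroom that makes a single i7 workstation plausible.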
My 2 cents...Hope this helps ;)
IPVMU Certified | 08/04/14 05:04pm
The folks that have already commented are pretty much spot on. CPU is king for Client viewing with most VMSs...especially Milestone. Philippe has a nice summary from a performance perspective.
Where I work, I regularly run a variety of VMS Clients in my lab to see what system resources they consume. I also observe what effects the Client has on the server (i.e., things that cause a server to transcode streams).
At 32 x 1MP cams, you state that you are allowing 300Mbit/s for the Recording Server at night, which will need a minimum LiveDB size of approximately 350GB with archiving every hour and the Live data also expiring at 1 hour. The Archive DB itself can last as long as it was sized for.
What was not mentioned is whether the question is for a standalone viewstation or whether the Client was going to run on the Recording Server itself. The latter is not a recommended practice due to the CPU resources needed to decode the streams (assuming H.264).
For CPU selection, the Passmark CPU benchmarks are a reasonable comparison between different CPUs, where the higher-scoring ones will indeed process more client streams. The new Haswell CPUs of the i7 variety are nice to use.
This is true for single-CPU systems. For dual-Xeon Client systems, the result is not double (the actual gain is approximately 50-60%, depending on the VMS), so you have to weigh the expense of the extra CPU against the performance increase.
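That scaling penalty is easy to quantify. The single-CPU stream count below is a made-up placeholder; only the 50-60% dual-CPU scaling figure comes from the thread:

```python
# Illustrative cost/benefit check for a second Xeon, using the
# "appx 50-60%" scaling figure from the thread. The single-CPU
# stream count is an assumed placeholder, not measured data.

single_cpu_streams = 20    # assumed: client streams one CPU can decode
dual_scaling = 0.55        # midpoint of the thread's 50-60% gain

dual_cpu_streams = single_cpu_streams * (1 + dual_scaling)
print(f"Single CPU: {single_cpu_streams} streams")
print(f"Dual CPU:   {dual_cpu_streams:.0f} streams (not {single_cpu_streams * 2})")
```

So the second CPU buys roughly 11 extra streams here, not 20, which is the gap you weigh against its price.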
Setting up a dual-stream capability as suggested is a very good idea. The recording stream can be the H.264 stream and the viewing stream can be an MJPEG stream, provided the local LAN can handle the extra bandwidth.
Dropping the viewing FPS down to 10-15 FPS is also a very good suggestion.
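To get a feel for the "extra bandwidth" the MJPEG viewing stream costs, here is a rough estimate. The JPEG frame size is an assumed average; real sizes vary widely with resolution, scene complexity, and quality setting:

```python
# Rough extra-LAN-bandwidth estimate for the MJPEG viewing streams
# suggested above, at the reduced viewing frame rate.

jpeg_kb_per_frame = 60    # assumed average JPEG frame size (KB)
viewing_fps = 12          # within the suggested 10-15 FPS range
cameras_on_screen = 16

mbps_per_cam = jpeg_kb_per_frame * 8 * viewing_fps / 1000
total_mbps = mbps_per_cam * cameras_on_screen

print(f"Per camera:  {mbps_per_cam:.1f} Mbps")
print(f"16-cam view: {total_mbps:.0f} Mbps extra on the LAN")
```

Even at reduced FPS, MJPEG for a 16-camera view can approach 100Mbps with these assumptions, which is why the "provided the LAN can handle it" caveat matters.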
For Milestone, things that cause Server side transcoding are the 'Mobile Server' and changing the Quality of the Client streams to anything other than 'Full'.
IPVMU Certified | 08/05/14 12:57pm
Philippe is correct.
Here is a statement right from the SmartClient manual....
"While using a reduced image quality helps limit bandwidth use, it will—due to the need for re-encoding images—use additional resources on the surveillance system server."
This has a very large effect on the server, so you must be careful when using it to save bandwidth.
On the Mobile Server... the recommendation is to use a standalone server if you are serving more than 10 cams to the web. I agree, based on my lab testing. The 2013 code versions would start a new web client connection by transcoding ALL defined cameras and views, which could literally peg the server CPU for several minutes (and lose frames as a result) until the user selected the view they wanted. After the selection, the transcoding would continue only for the cams in that view.
Other VMSs do similar things regarding transcoding, so understanding which parts of the VMS will start a transcoding operation is key to designing a good solution for your customers.