[Note: Originally posted as a comment in the Genetec and Milestone 500 Camera Recorders Announced report, moved here]
Evaluating the throughput ratings of NVRs has been a big question of mine for some time. I am currently working on a design that has over 700 physical cameras and over 1,000 camera streams (some are multi-imager cameras). Total throughput is around 12 Gbps, and storage is approximately 3 to 4 PB for 30 days of continuous (24x7) recording.
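For anyone checking the math, here is a quick sketch of how the storage figure falls out of the throughput (assumptions: a constant 12 Gbps aggregate bitrate, 24x7 recording, decimal units, and no RAID or filesystem overhead included):

```python
# Storage estimate from aggregate recording throughput.
# Assumes constant bitrate, continuous recording, decimal (SI) units.
GBPS = 12                       # total recording throughput
seconds_per_day = 24 * 60 * 60
days = 30

bytes_per_second = GBPS * 1e9 / 8               # 1.5 GB/s
total_bytes = bytes_per_second * seconds_per_day * days
total_pb = total_bytes / 1e15

print(f"{total_pb:.2f} PB for {days} days")     # ~3.89 PB
```

That lands right in the middle of the 3 to 4 PB range above; real-world overhead (RAID parity, filesystem, motion-based retention) moves it toward the high end.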
The manufacturer of the VMS we are evaluating claims a throughput of 400 Mbps using their NVR (a customized Dell server) and 300 Mbps on a 3rd-party server. These numbers are recording throughput; they claim the same rate can be achieved for video retrieval.
The client requested that the project follow their current data center guidelines and that we add to their existing vSphere environment using Cisco UCS servers and a Dell storage area network.
For this project, we evaluated Cisco UCS B-Series blade servers (dual-CPU Xeon E5-2660 B420 M4 blades) with one NVR instance per CPU. The Cisco UCS chassis supports VIC (Virtual Interface Card) network adapters that allow a high-speed fabric connection directly from the virtual machine to the core network over multiple 10Gb interfaces. The servers would connect to a Cisco Fabric Interconnect switch, which in turn connects to a Nexus 7004 core via multiple 10GbE interfaces.
For the storage solution, we are looking at Dell Compellent FS8600 storage controllers connected to the Nexus 7004 via multiple 10GbE interfaces. This solution drops off the fabric at the Nexus 7004, using 10Gb iSCSI to the Dell SAN. For the storage backend we are using Dell S5000 switches and SCv2080 storage arrays.
I would think the server configuration above would meet the 400 Mbps throughput rating of the manufacturer's server, or even exceed it. But without a way of testing or confirming the actual bottlenecks in the NVR platform, it's hard to make any assumptions.
The other issue is cost: the virtualized server and storage area network in this solution is more expensive than using the manufacturer's NVRs, and managing it without a qualified IT department would be a nightmare. The manufacturer offers 90TB standalone NVRs, which would provide all storage within the NVR without any external or attached storage. From a keep-it-simple standpoint, the standalone NVR would be the best option.
12,000 Mbps total throughput / 300 Mbps per NVR (400 minus some breathing room) = 40 physical NVRs for this project, with 40 x 90 TB = 3,600 TB of storage. Using 200 Mbps per instance (the manufacturer's recommendation for virtualized deployments), we would need ~60 virtual machines.
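The server-count math, spelled out (assumptions: throughput is the only sizing constraint, and the per-NVR ratings are the manufacturer's quoted numbers):

```python
import math

# Server count driven purely by recording throughput.
total_mbps = 12_000

physical_rating = 300   # 400 Mbps rated, derated for breathing room
virtual_rating = 200    # manufacturer's recommendation for VM instances

physical_nvrs = math.ceil(total_mbps / physical_rating)    # 40 NVRs
virtual_machines = math.ceil(total_mbps / virtual_rating)  # 60 VMs

storage_tb = physical_nvrs * 90    # 90 TB per standalone NVR -> 3600 TB
print(physical_nvrs, virtual_machines, storage_tb)
```

Note the virtualized option needs 50% more instances for the same load, which is part of what makes the cost comparison lopsided.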
Other issues that come up are server-side video analysis and transcoding. What is the CPU cost of storage alone, and at what point does additional CPU utilization start to degrade storage throughput? The manufacturer being evaluated does have a tech paper showing different camera/server configurations and results, which helps make some assumptions, but it is based on their NVR solution.
It would be nice if IPVM could start a section on NVR throughput testing and bottleneck evaluation. I would love to hear from other users here on their experience with NVR throughput, especially in a virtualized environment.
EDIT: One point I forgot to make, and a reason for my comments above: we are now using high-resolution, high-bandwidth cameras in all designs. As such, we need high-throughput NVRs rather than high-camera-count NVRs. The solution I described above would only put 15 to 25 cameras on each NVR.
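The 15-to-25 figure follows from the derated throughput and the per-camera bitrates involved (assumption: cameras evenly loaded against a 300 Mbps NVR ceiling):

```python
# Implied per-camera bitrate at the stated camera-per-NVR range,
# assuming a 300 Mbps derated NVR throughput ceiling.
nvr_mbps = 300
for cameras in (15, 25):
    per_camera = nvr_mbps / cameras
    print(f"{cameras} cameras -> {per_camera:.0f} Mbps per camera")
# 15 cameras -> 20 Mbps per camera
# 25 cameras -> 12 Mbps per camera
```

That 12 to 20 Mbps per camera is typical of multi-imager and 4K streams, which is exactly why camera-count ratings on NVR spec sheets are misleading for this kind of design.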