Hi John, a question: what resolution / FPS do they mention for the 700 cameras?
There are standalone units with 80 Mbps data throughput, meant for up to 32 channels. The more powerful standalone ones run at 380 Mbps throughput; that is a 32-channel NVR for high-megapixel cameras. So let's say it is using about 10 Mbps of data throughput per camera (at 1080p Full HD). It will be interesting to check for what resolution and FPS conditions they recommend the 2000 Mbps data throughput for 700 cameras.
Going by camera quantity alone might be tricky, since in the past we have seen an 8 Mbps throughput standalone NVR for 16 channels that could only record D1 resolution per channel at low FPS. In that case, 2 cameras are sharing 1 Mbps.
Basically, I think it comes down very much to that Mbps figure: if it is real, is it suitable for 700 cameras at 720p, or with 15 fps shared between them? But it is still a great-spec, powerful NVR appliance.
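The back-of-the-envelope division above can be sketched in a few lines (all figures are the ones quoted in this thread, not vendor specs):

```python
# Rough per-camera bandwidth math for the NVR specs discussed in this thread.

def per_camera_mbps(total_mbps, cameras):
    """Average throughput available per camera, in Mbps."""
    return total_mbps / cameras

# 16-channel, 8 Mbps NVR: D1-only, low FPS
print(per_camera_mbps(8, 16))      # 0.5 Mbps -> 2 cameras share 1 Mbps

# 32-channel, 380 Mbps NVR at ~10 Mbps per 1080p camera
print(per_camera_mbps(380, 32))    # ~11.9 Mbps, enough for 1080p Full HD

# The claimed 2000 Mbps appliance with 700 cameras
print(per_camera_mbps(2000, 700))  # ~2.86 Mbps, i.e. 720p-class streams
```

At ~2.86 Mbps per camera, the 700-camera claim only works for modest-bitrate streams, which is what makes the resolution/FPS question matter.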
I have personally squeezed 100 cameras onto 2 GB of RAM (as used by the VMS processes, excluding base Windows usage; 3 GB total in the system), a single 5400 RPM HDD, and 25% CPU usage on an i7. So the claim should be achievable on an efficient VMS system.
Remember that the IP cameras themselves do all the video encoding - the VMS server really does not have to do much, just dump data onto a RAID5/6 array, on which I have seen sustained throughput of 10 Gbps with just 6 SATA drives on an entry-level hardware RAID card with 512MB RAM.
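As a rough sanity check on that disk-side figure, assuming ~200 MB/s sequential per SATA drive (my ballpark, not from any spec sheet) and one drive's worth of capacity lost to RAID5 parity:

```python
# Rough sequential-throughput sanity check for a 6-drive SATA RAID5 array.
# ~200 MB/s per drive is an assumed ballpark for modern 7200 RPM SATA disks.

DRIVE_MB_S = 200          # MB/s sequential per drive (assumption)
DRIVES = 6
DATA_DRIVES = DRIVES - 1  # RAID5 stripes data across n-1 drives' worth

total_mb_s = DATA_DRIVES * DRIVE_MB_S   # 1000 MB/s aggregate
total_gbps = total_mb_s * 8 / 1000      # 8.0 Gbps
print(total_gbps)
```

That lands in the same ballpark as the 10 Gbps sustained figure, so the array, not the CPU, sets the ceiling for sequential video writes.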
Video Insight was making a big deal out of their ability to record 2,000 cameras on a single server not that long ago. This is apparently happening in production, from what they told me, but I've not seen it.
They say it's due to re-writing the entire platform to be 64-bit only. They make claims about it here:
I can believe it. That is basically the system we have installed, although we're not pushing it nearly that hard. We are recording up to 200 cameras per server at ~3 Mbps per camera. The servers are averaging around 3% CPU load and about 25% utilization on half of the NIC team. The network capacity could be improved in a number of ways to raise the throughput, including eliminating the redundant core switches, adding ports, and teaming the entire server to one core switch.
The limitations in our case aren't the servers themselves but what we specified in terms of redundancy, system overhead and storage requirements. And yes, we are running 64-bit O/S (Server 2008 R2 Standard). Our servers use Xeon E5-2420 CPUs with 16GB of RAM.
Sure, there are slow near-line SAS disk storage subsystems that can perform at this level, as Bohan boasts, and the processor can keep up, as long as it is little more than the coordinator of the bulk data transfer between the bus and RAM, per Carl's comment. And naturally, a 64-bit memory architecture that allows a single process space to access the full complement of volatile memory is a must, as Ethan effuses...
But one last question, not listed in John's spec 'hints', would be: how the hell does it acquire 2000 Mbps in the first place?? Before you simply multiply 2 x 1 Gbps, understand that real-world performance of 600 Mbps (at the application level) on a single GbE connection is considered good.
How about 2 x 10 GbE single-mode fiber ports? And 2 x 1 GbE plain-Jane ports (out-of-band signaling?), like so:
Dual Port Redundant 10Gb SFP+ (Video), Dual Port Redundant 1Gbps (Video), TCP, UDP, ICMP, IGMP, SNMP, HTTP, NTP, Telnet, FTP
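If that port list is right, the ingest math works out easily; a quick sketch, assuming the same ~60% application-level efficiency implied by the 600-Mbps-per-GbE figure above (my assumption, not a vendor number):

```python
# Back-of-the-envelope ingest capacity for the port layout quoted above.
# The 60% application-level efficiency factor is an assumption, extrapolated
# from the ~600 Mbps usable on a 1 Gbps link mentioned in this thread.

EFFICIENCY = 0.6  # fraction of line rate usable at the application level

def usable_mbps(ports, line_rate_mbps, efficiency=EFFICIENCY):
    """Aggregate application-level throughput across teamed ports, in Mbps."""
    return ports * line_rate_mbps * efficiency

# 2 x 10 GbE SFP+ for video; the 2 x 1 GbE ports are left out here,
# assuming they carry out-of-band traffic rather than video.
video_ingest = usable_mbps(2, 10_000)   # 12,000 Mbps usable
print(video_ingest >= 2000)             # comfortably above the 2000 Mbps claim
```

Even if the "redundant" wording means only one 10 GbE port is active at a time, a single port still clears 2000 Mbps with headroom to spare.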
IMHO, that's the secret SAS sauce in this box...
I don't think the claim is improbable, seeing as we regularly put 250 1080p cameras streaming at 10 fps on our HP DL380p Gen8s with WD RE drives using Video Insight 5.5. The servers show an average CPU load of 40% with server-side motion detection and 20% with camera-side motion detection.
Needless to say you need to make sure your fundamentals are rock solid and steps have been taken to prevent any issues that would cause loss of video.