Migrate From Intransa To Dell Servers

Is anyone looking to migrate from their Intransa servers to something like Dell EqualLogic units? We have been using the Dell iSCSI units for years with no problems. Also wondering if anyone is running their setup in vCenter, or at least on VMware hosts?

You might also want to consider Oracle Sun servers. Oracle is making a strategic move into storage for surveillance. I would expect to see more focus from them on VMS integration with DB/OS management tools.

If you are going to run a virtualized machine environment, you definitely want to do it using a bare-metal hypervisor like Xen, VMware vSphere/ESXi, or KVM. A hosted hypervisor is going to be less efficient, less secure, and lower performing. I like VirtualBox, but it runs on top of a host OS, which always poses a security risk.

We have run Milestone, Exacq, our MVE system, and a few others on Xen with a Windows 7 Pro 64-bit guest OS with good results, both on a local server and on a remote hosted server. These were simple servers that were just writing H.264-encoded streams straight to disk and playing back from disk, so a server for archiving is easy to run in almost any VM environment.

If you want to serve video out to clients, then that is a different thing.

If you are going to transcode for the purpose of serving up rate-adapted/device-adapted playback, à la DASH or something like the HauteSpot MVE system, then you will want to use something like the NVIDIA VGX or Intel Quick Sync drivers for your host system. NVIDIA has H.264 encoding in hardware on all of their CUDA GPUs, and Intel has Quick Sync in all Ivy Bridge processors. You will need your VM hypervisor and guest drivers to support the GPU too. Citrix Xen and VMware both support NVIDIA. I don't know of any VM that supports Intel Quick Sync yet. Support here means letting virtual guest OSes have access to the GPU for acceleration. So today, if there were a VMS that was optimized to use the NVIDIA GPU under virtualization, then everything would be awesome. Adobe has already built this into their software. Right now I don't know of any VMS that has drivers to support this. Network Optix supports Intel Quick Sync hardware acceleration, but not under virtualization.

With NVIDIA VGX acceleration you should be able to transcode up to eight 30fps 1080p (2MP) H.264 streams. But I don't know of any VMS client viewing architecture built around this.
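As a rough sanity check on that "eight 1080p30 streams" figure, here is a back-of-the-envelope session budget. It assumes (purely as an approximation, not anything from NVIDIA's specs) that transcode cost scales linearly with pixel rate:

```python
# Pixel-rate budget implied by "8 x 1080p @ 30 fps" (an assumption, not a spec).
BUDGET = 8 * 1920 * 1080 * 30  # pixels per second the GPU is said to sustain

def max_sessions(width: int, height: int, fps: int) -> int:
    """Estimate concurrent transcode sessions at a given resolution/frame rate."""
    return BUDGET // (width * height * fps)

print(max_sessions(1920, 1080, 30))  # 8, by construction
print(max_sessions(1280, 720, 30))   # ~18 streams of 720p30
```

Real cards cap concurrent sessions in the driver as well, so treat this as an upper bound, not a promise.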

Most of the development of GPUs in virtualization has been for graphics workstations, not for streaming/transcoding. If you want to serve many client systems (like in a Security Operations Center), folks like Citrix have built optimized VM clients that have graphics support. They use X Windows- or RDP-style remote graphics calls to display the guest OS on a remote client. Another application of virtualization would be remote desktops like this. It does not take care of transcoding or rate adaptation, but it would allow you to have many desktops sharing one server. Of course, then you have a single point of failure, but the trade-off is lower-cost workstations and network security.

It would be interesting to know if anyone is working on optimizing their VMS to use GPU acceleration like NVIDIA VGX for transcoding streams out to remote clients. If anyone knows more about off-loading encoding to the GPU in virtualized environments, I would love to hear about it.

Wow, thanks for the reply Bob! I just finished the OnSSI class last week and am looking for the most efficient way to manage the data. Since the vendor only supports the Base as a VM, I guess it would be pointless. You can export the configs at any time and could always do a P2V, maybe monthly, for DR purposes. We were told anything higher than 15 FPS would be a waste, since the human eye cannot discern between 15 and, say, 30 FPS. Since we are taking on Intransa units, I am looking to get the equipment changed out ASAP to new supported gear. Again, thanks for the great reply.

Thumbs up to you, Bob.

I recently converted an OnSSI system running on Intransa to Dell. We used R720xds as the hosts and an MD3220i for the SAN. VMs run on a high-spindle-count RAID 10 set of 15K SAS drives, live video goes to a high-spindle-count RAID 10 set of 10K SAS drives, and archived video goes to a RAID 5 set of 3 TB Near-Line SAS drives. Runs like a champ.

Good to know. So do you have vCenter running with three hosts for failover, or just ESXi running on a single server with VMs for the Base and recorders? Other than support for VMs from OnSSI being a concern, I am talking with Dell about storage concerns related to the streaming. Archives are already pushed to SATA units every hour, but I figured we would need some high-end SAS drives for the recorders. We have about 200 cameras, so it is a small shop, but I would hate having to manage a new recorder server every time we need to add another 40-50 cameras. Money seems to be the issue initially, so we may only be able to purchase an EqualLogic unit for now. I would be interested in breaking the unit into LUNs based on drive speed, which seems like what you did. Thanks for the reply.

1 VM = 1 server = 2 GB RAM and 4 CPUs = 45 cameras, 25 fps, SD

1 physical server = 12 VMs = 540 cameras
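For anyone planning capacity, those figures reduce to a trivial host-count estimate. This is just a sketch using the per-VM density quoted above; your own density will vary with resolution, frame rate, and codec:

```python
CAMS_PER_VM = 45   # 45 SD cameras at 25 fps per VM (figure quoted above)
VMS_PER_HOST = 12  # 12 VMs per physical server (figure quoted above)

def hosts_needed(cameras: int) -> int:
    """Ceiling of cameras / (45 * 12) physical servers."""
    per_host = CAMS_PER_VM * VMS_PER_HOST  # 540 cameras per host
    return -(-cameras // per_host)         # ceiling division

print(hosts_needed(200))   # 1 host covers a 200-camera shop
print(hosts_needed(2000))  # 4 hosts, before adding a spare for HA
```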

I have 60 cameras on one recorder; the others I kept at 50. The 60-camera server was having issues when it was on Intransa, but now that it's a VM it's fine. We do not have vCenter running yet; they're going to go full VMware in the next phase of implementation.

I'm only giving each VM 4 GB of RAM and 1 CPU with four cores.

I was concerned about the streaming as well, but this customer has multiple clients running all over campus and we have not experienced any lag. Each server has eight GbE NICs: two for the iSCSI network, two for the VMs, two for future HA, one for management, and one spare.

This customer has 160-ish cameras but is growing fast. When I was researching, I thought the EqualLogic was sweet but way overpowered for what was needed. The PowerVault is better priced and meets the needs.

Great info, thank you very much. I can't believe you are getting away with a recorder at 4 GB of memory and 4 cores. That is pretty amazing. Sounds like you have iSCSI on its own SAN switch, which is what we also do for our regular production environment. While the current slew of Intransa servers is no joy, I am a little worried about having all the VMs on a single host in the meantime. I guess I could potentially P2V the Base and recorders for now to set aside, and then set up the storage for at least archiving. I would also like to stick with iSCSI rather than chaining a mess of SAS arrays together as the years go on.

1 VM = 1 server = 2 GB RAM and 4 CPUs = 45 cameras, 25 fps, SD

1 physical server = 12 VMs = 540 cameras = dual Xeon, 32 GB RAM

1 physical server is great, but what is the SAN on the back end that's providing all that?

I second the recommendation for PowerVault. For systems under 1000 cameras, you are going to pay for features you don't need if you go beyond entry-level storage. DAS and iSCSI are great fits for surveillance workloads, and so is NAS if you can find a good unit in the price range and it's supported by the VMS.

Video surveillance is very light in terms of IOPS, since each transaction can be handled with large block sizes (1 MB and larger).
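To put rough numbers on that, assume something like 4 Mbit/s per H.264 camera (an assumed bitrate, substitute your own) and 1 MB write blocks:

```python
def write_load(cameras: int, mbit_per_cam: float = 4.0, block_mb: float = 1.0):
    """Return (throughput in MB/s, write IOPS) for a pool of cameras.

    Bitrate and block size are illustrative assumptions, not measurements.
    """
    mb_per_s = cameras * mbit_per_cam / 8  # Mbit/s -> MB/s
    return mb_per_s, mb_per_s / block_mb

mbps, iops = write_load(1000)
print(mbps, iops)  # 500.0 MB/s of writes, but only ~500 large-block IOPS
```

So even at 1000 cameras, the workload is sequential throughput, not the random small-block IOPS that expensive arrays are built for.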

1000 cameras and less can be handled easily by 1-3 servers, each attached redundantly to storage.

By the way, VMware is a great platform if your VMS supports it. Hyper-V is *STILL* not enterprise-grade, and VirtualBox, KVM, Citrix, etc. are good solutions but don't run at the hardware level the way VMware does. And VMware still has Transparent Page Sharing, which is proprietary and the most efficient memory technology in the hypervisor world. We run our Genetec systems on VMware in all cases, HA or not.

And Bob, the issue with offloading H.264 to GPUs has always been stream concurrency. Usually the GPUs themselves only support a set number of hardware decode streams, since they're targeting games. I'd love to see a card that can decode 64 streams... of course, it may be possible if someone wrote the decoder in CUDA or another native GPU-level language.

EqualLogic SAN, iSCSI

We have 5 physical servers, more than 2000 cameras.

1 for HA

VMware vCenter and ESXi

Did you mix the drives in a single EQ unit, or do you have multiple units for recording and archiving? I know you can mix drives in a single EQ, so I thought a 15K SAS drive LUN for recording with another 7K SATA drive LUN for the archives. Does anyone see a problem with this idea? The EQ units are pricey, but when it comes time to move LUNs around or do other tasks, they are pretty awesome.
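One thing worth checking before splitting the LUNs is how many days of retention the archive set actually buys you. A quick sketch, where the camera count, per-camera bitrate, and drive sizes are all assumptions for illustration:

```python
def retention_days(drives: int, tb_per_drive: float = 3.0,
                   cameras: int = 200, mbit_per_cam: float = 4.0) -> float:
    """Days of archive a RAID 5 set can hold (single parity: n-1 usable drives).

    All defaults are illustrative assumptions, not measured figures.
    """
    usable_tb = (drives - 1) * tb_per_drive
    tb_per_day = cameras * mbit_per_cam / 8 * 86400 / 1e6  # MB/s -> TB/day
    return usable_tb / tb_per_day

# 12 x 3 TB NL-SAS in RAID 5 at 200 cameras -> roughly 3.8 days of archive
print(round(retention_days(12), 1))
```

Running the number both ways (fast LUN for a day or two of recording, slow LUN sized for your retention policy) makes the drive-mix decision much easier to defend.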

Yes, we have the iSCSI connections on their own switch. In regards to the VMs, yeah, I am pretty happy with the performance so far with just four cores. We have two physical hosts right now. I have four VMs running: three recording servers and one Ocularis X server. The next phase will be adding one more host and a VMware Essentials bundle for vCenter and HA to accommodate future growth and provide redundancy.