Getting Into Server Virtualization - What Should I Be Aware Of?

I was hoping to get the viewpoints from others in our industry as a good starting point.

I have a project right now that, when finished, will have multiple servers (4-5) running separate databases that make up the security/wireless network infrastructure for the building. I see this as the perfect opportunity to get into virtualizing my machines. I've always wanted/thought it necessary to learn this, but realistically, I've typically only supplied up to two servers at a time (access control/video management), so it just didn't make sense before now.

While I'm eager to learn and present this to the client, I've been trying to do some research so I don't overanalyze or over/undersell the various systems' needs. With all the information available, I found VMware's free vSphere Hypervisor may be the place to start. My needs are pretty simple:

1. One rackmount in-house server for virtualization.

2. Ability to have 4-5 separate virtual machines running simultaneously (Linux and Windows).

3. Ability to set each virtual machine to a separate VLAN.

4. Video management server will not be included as I've left this on a separate machine.

I do plan on using a PC to set up a test environment and give me a better understanding, but if any of you utilize this on a regular basis, I'm hoping you can answer a few questions:

1. Is the vmware vSphere Hypervisor a good starting point (free being the driver)?

2. Are there any hidden costs associated based on my general needs?

3. Will the free version of this product perform all basic operation functions?

4. Are there any issues to be concerned with?

5. What is the best way to calculate physical hardware needs (like an online calculator)?

Application hardware requirements within each machine (besides the OS) are small, separation is what is most valued. Thoughts on the questions above or your general opinions are appreciated. Thank you.
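To make question 5 concrete, here's the kind of back-of-envelope math I have in mind. Every per-VM figure and server name below is an illustrative placeholder, not a vendor spec:

```python
# Back-of-envelope host sizing.  Every per-VM figure below is an
# illustrative placeholder; substitute each vendor's published specs.
import math

vms = [
    # (name, vCPUs, RAM GB, disk GB)
    ("access-control", 2, 8, 200),
    ("intrusion-db",   2, 8, 100),
    ("wireless-mgmt",  2, 4, 100),
    ("badge-db",       2, 8, 200),
    ("utility",        1, 4,  50),
]

total_vcpus = sum(v[1] for v in vms)
total_ram   = sum(v[2] for v in vms)
total_disk  = sum(v[3] for v in vms)

cpu_overcommit    = 2.0   # vCPU:pCPU ratio, tolerable for light loads
hypervisor_ram_gb = 4     # headroom for the hypervisor itself

phys_cores = math.ceil(total_vcpus / cpu_overcommit)
print(f"{total_vcpus} vCPUs -> ~{phys_cores} physical cores at {cpu_overcommit}:1")
print(f"RAM: {total_ram + hypervisor_ram_gb} GB (no RAM overcommit for always-on servers)")
print(f"Disk: {total_disk} GB before RAID and snapshot overhead")
```

The overcommit ratio is the main judgment call; lightly loaded database servers tolerate sharing cores, but RAM is best not oversubscribed for always-on infrastructure.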


If you are eager to learn, go out and set it up in your test environment. From our experience, I would say be very careful as you might be introducing yourself to painful and time-consuming bugs / troubleshooting.

The biggest issue is whether the applications you plan to run have been verified and validated to run on a given hypervisor / VM. I'd start there, as you might find at least one that does not support the vSphere Hypervisor, etc.

Related: VMS On Virtual Servers?

Unless you have a backup host server to run the VM's should the host server fail, you'll be down 3 or 4 servers (however many virtual machines you are running), instead of just one. So you'll still need more than one server.

Non-VMS computers may be fine. But VMS servers, depending on how much they are doing (server-side motion or analytics), tend to have higher, consistent CPU usage than other types of systems. Part of the concept of running VMs is that instead of having 2 or more servers that only occasionally peak their own physical CPUs, you run multiple VMs that take turns peaking the same physical CPU and hopefully don't often peak it at the same time. It's like 2 people taking turns using a stove instead of each one using their own - a sharing of resources. (I was going to use something else as an analogy, but then deemed it wouldn't be appropriate.) But if a VMS server is consistently using 30% or more of the CPU, even if that isn't very much by itself, it doesn't leave much for other systems.

Unless of course you get host servers with multiple physical CPUs with 6 or more cores each - then you have plenty. But wait, don't forget you need a backup host server. Now that cost savings doesn't seem so great.
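The stove analogy can be put in numbers. A quick sketch, with entirely made-up utilization figures, of why one steady 30% VMS load changes the math:

```python
# Putting numbers to the "shared stove": sum each server's typical CPU
# load (as a fraction of the whole host) and see what's left for peaks.
# All utilization figures below are hypothetical.

bursty_servers = {"file": 0.05, "print": 0.02, "email": 0.08}
steady_vms     = {"vms-recorder": 0.30}   # server-side motion/analytics

without_vms = sum(bursty_servers.values())            # bursty-only baseline
with_vms    = without_vms + sum(steady_vms.values())  # add the steady recorder
peak        = 0.60   # one bursty server briefly spiking

print(f"Bursty-only: baseline {without_vms:.0%}, peak demand {without_vms + peak:.0%}")
print(f"With steady VMS: baseline {with_vms:.0%}, peak demand {with_vms + peak:.0%}")
# The same spike fits easily without the recorder, but pushes the host
# past 100% once the steady 30% VMS load is on board.
```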

Other gotchas: free versions, whether Microsoft or VMware, tend to be limited in the number of VMs they will host until you upgrade the license. And maybe they won't take snapshot backups of the VMs unless you upgrade the license or use a 3rd-party product. And sometimes they are limited in the throughput they can use unless you use virtual SCSI drives, but that requires a purchased license. Want emergency phone support because a customer's access, VMS, and intercom system is down? Guess what...

Virtual servers are like wireless cameras, high-megapixel cameras, and video analytics. They have their uses and sound appealing, but they have their limitations and special considerations. They are not for every job. The driver in the IT world for VMs is usually cost savings on space, hardware, energy, and cooling, but that doesn't really come into play until you start scaling up to a LOT of virtual servers - a lot more than what your small group of security-related servers can usually justify.

That's my 2.71 cents.

Been using the vSphere Hypervisor for our office demo/test system. Picked up a 1U dual-CPU Supermicro server for the host and am currently running one 8-core Xeon CPU with 32GB of RAM and multiple SSDs for the host and VMs. Right now I am using a Synology NAS for iSCSI storage, but I will be installing a Fibre Channel storage unit to replace the slow Synology storage. I am running 6 VMs with different VMS platforms, with between 30 and 50 cameras recording to this host. I plan on adding another CPU and 32GB more RAM so we can have every major VMS running and testing at the same time.

It's a lot of fun to play with and great for testing, and it has been rock solid for the last 9 months, but I wouldn't deploy this at a customer's site unless we have multiple redundant hosts, like Luis said.

Mike, that sounds great! If you don't mind me asking, is that 30 to 50 cameras across all your VMs, or per VM? What tends to be the consistent CPU % used?

Luis, some of the cameras record to multiple VMs and others to a single VM. Total CPU usage was about 50% of capacity last time I checked. I am by no means a VM expert, but I love to learn and play with them.

How many cameras are on each VMS, and what resolution and frame rate are the cameras in your office?

2MP to 16MP resolution cameras, most running at full frame rate. It changes every day as I add, move, and change things around. Right now I am migrating the storage from the iSCSI targets to the Fibre Channel unit.

You run into 2 things here: one, as was said by others, having a redundant backup, and two, getting it all to work together.

Something else to consider: if it breaks, or you get software that bricks the system, you are in for some real fun. Worst case, the server dies with an unfortunate motherboard malfunction. Not saying all of this will happen, but it's something to consider when putting it together. The last thing you want is one physical "server" to go down and take 5 systems with it, with no backup plan.

On a personal note, I have had two VMs running with two different VMSes just to prove it works. It does work; just pay extra attention to the network side of the beast.

Has anybody used Virtual Box from Oracle?

I just chose it at random, but it seems okay. I had a problem where a process was corrupting SQL. With the VM I could just endlessly restore snapshots until I figured it out.

Haven't put any load on it though.

VirtualBox is what I use normally. It is barebones virtualization software: good for testing, light production, and backups. As much as I am a fan of VBox, it would not be my first go-to in large-scale production.
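For the curious, the restore-a-snapshot-until-you-find-the-bug loop described above can be scripted against VirtualBox's CLI (VBoxManage). This is just a sketch; the VM and snapshot names are made up:

```python
# Sketch of a snapshot-restore loop driving VirtualBox's VBoxManage CLI.
# VM and snapshot names are hypothetical placeholders.
import subprocess

VM = "sql-test"          # hypothetical VM name
SNAP = "clean-install"   # snapshot taken before the suspect process runs

def restore_and_retry(dry_run=True):
    """Roll the VM back to a known-good snapshot and boot it again."""
    cmds = [
        ["VBoxManage", "controlvm", VM, "poweroff"],
        ["VBoxManage", "snapshot", VM, "restore", SNAP],
        ["VBoxManage", "startvm", VM, "--type", "headless"],
    ]
    if dry_run:
        return cmds   # inspect the commands without touching a real VM
    for cmd in cmds:
        subprocess.run(cmd, check=True)

# After each failed experiment: restore_and_retry(dry_run=False), test again.
```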

Some companies, like Scale Computing, attempt to make it easy for you by offering a minimum of 3 servers that are essentially clustered together for full fault tolerance. So no matter how many VMs you are running, you can have a full hardware failure of a complete server and everything keeps running. (The system is a customized Linux flavor.)

With a good support plan and a relatively easy-to-use interface, it should be minimal overhead to administer. But it starts at, I think, around $20K to $30K, so again the cost-benefit is going to be hard to justify.

If you're going to VM and are unsure of exactly what you are doing, I may recommend Hyper-V from Microsoft. It's basic, it's free with Server 2012, and we run a lot of our customers on it. It doesn't have nearly the bells and whistles that VMware does, but it's a good starting point, and it will give you practice with the virtual switching and routing aspect. You can save your VHD to a SAN or something off-host, and if your host fails, you can fire it up on a different host. I think the next version of Server will have the ability to fail over to a different host. And you can run a VM inside of a guest OS, so run your CCTV on the host, then run your access control inside a VM, or vice versa.

The thing to look out for is to make sure you build your system right, with plenty of RAM and plenty of CPU, and ensure whatever you're going to use is licensed for the number of CPU sockets you are implementing. Avoid VirtualBox and Xen Server; you will be crucified in front of any self-respecting IT guy. If you want to be fancy, go VMware, it's pretty cool. If you want to just virtualize and be done with it, go Hyper-V: it's already built into Windows, and most of your OS licensing already includes a VM license.

I can't agree more with Luis about the fact that a VMS rarely has CPU and RAM peaks and valleys. My personal experience is that VMS systems generally have very even, stable usage. This is the epitome of what VMs aren't designed for. VMs really save money when you have bigger peaks and valleys in usage. For example, you have a file server, print server, email server, etc. None of those are likely to have constant, stable, even usage. They are going to have on-and-off usage, which is great for a VM system. It can handle multiple hosts and spread out the resources to a point where all the peaks and valleys complement each other, resulting in a constant usage profile when summed.

And I go back to my advice given in another thread...

K (eep)

I (t)

S (imple)

S (tupid)

Don't MacGyver a solution when it isn't needed. If you have space for a standalone server, how do you justify the expense, complexity, and vulnerability of VMs? Space is really the only factor where VMs can be justified.