What Is More Important - RAM Or CPU?

In the IPVMU class, we were discussing specification of VMS servers. One attendee asked, "If you had to choose only one parameter to upgrade - dual to quad or more RAM - which would you typically choose?"

I think this is a worthwhile thought experiment. Obviously 'it depends on the application' but for your applications or scenarios you commonly deal with, what do you typically find more important to have more of - RAM or CPU?

It would depend on which you were lacking the most, but in general, all things being equal, upgrading RAM will usually be a much cheaper and more noticeable upgrade.

Some determining factors would be how expensive each upgrade would be, how much additional RAM/CPU the system will allow, and whether or not the OS supports these upgrades as well.

For example, if you have a system that currently has 3GB of RAM and a dual core 2.0GHz CPU, running 32-bit Windows, a CPU upgrade to a quad 3.0GHz would make more sense than upgrading the RAM to 8GB.

On the other hand, if you had a server with a quad core 3.0GHz CPU, with 4GB of RAM, running 64-bit Windows, a RAM upgrade may be very cheap (under $50) and would give you your best bang for the buck.

It may depend on the application, but it will also largely depend on the server, how its software works, and the type of capture hardware or cameras you use.

If you're running a DVR (hybrid or otherwise) with a hardware compression card, the effects of a CPU upgrade may not be as noticeable; with a software compression card, it may make a huge difference.

I know of NVRs being built on Atom-based machines, running systems like Exacq, in which the server portion doesn't do anything but receive, index, and store the video streams, requiring very little processor. Client/VMS systems that need to decode those stored video streams, however, are much more processor-intensive.

3xLOGIC's "Vigil MVR" micro-recorder units are also Atom-based, using hardware compression cards for the hybrids (note: the Vigil server is a combined recorder/VMS system, so playback can be adversely affected on a heavily-loaded unit with minimal CPU). Playback and viewing via the client on a beefier machine is generally recommended with these.

From my experience with Vigil, it's more memory-hungry than CPU-hungry, although less so as of version 6. V5 and prior used a single MSSQL database for its search index, and with a substantial amount of stored video, the database could easily outgrow the allocated RAM, leading to excessive pagefile use and extremely slow searches as the system tried to load the entire database into RAM (V6 and later maintain a number of smaller rotating database files so this isn't an issue anymore).

As Gary notes, the OS is also a factor in the "value" of a RAM vs. CPU upgrade, since 32-bit systems can only make use of around 3.5GB RAM.
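As a quick sanity check on that 3.5GB figure, here is a minimal sketch; the ~0.5GB reserved for device address space (MMIO) is an assumed, illustrative value that varies by system:

```python
# Back-of-envelope: why a 32-bit OS tops out around 3.5GB of usable RAM.
# 4GB is the hard architectural ceiling of a 32-bit address space; the
# remaining gap is address space reserved for devices (MMIO), not RAM.

ADDRESS_BITS = 32
addressable_bytes = 2 ** ADDRESS_BITS          # total address space
addressable_gb = addressable_bytes / 1024**3   # exactly 4.0 GB

# Assume ~0.5GB carved out for PCI/MMIO reservations (illustrative value)
mmio_reserved_gb = 0.5
usable_gb = addressable_gb - mmio_reserved_gb

print(addressable_gb)  # 4.0
print(usable_gb)       # 3.5
```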

One thing I've found recently that really boosts the performance of Vigil machines is the use of an SSD for the system drive - not only is overall booting and loading time faster, but since the database resides on the system drive, it's also a lot faster to read/write, greatly helping search times. (Use of SSDs for video storage is a whole other topic of discussion!)

Anyway, those are just some examples I'm familiar with/aware of...

If we're talking current-gen gear (a processor from the last 3 years or so), then I'd say RAM. Most PCs haven't really been very processor-bound in quite a while.

It really depends on the VMS application. However, as you listed above, with Exacq we use cheap computers for multiple clients, but for the actual engine we use quad-core i7 or better server-class machines with lots of RAM and high-output video cards with lots of video RAM.

I like CPU power, but love RAM for handling. You really have to upgrade both. And then it depends on peripherals; a lot to think about here.

As BK noted, just from that general question, it has to be adding more RAM all day long.

If it is a 32 bit application then everything over 4GB of RAM will go unused. Exacq has 32-bit only binaries. Can't speak to others.

The Intel Xeon E5 series of processors introduced a number of features I think could be very beneficial to VMS platforms. The AVX extensions should help speed video transcoding. More importantly, the "Direct I/O" feature should have a fairly significant impact on latency. I don't know if these enhancements are automatic or if vendors would need to do something during coding/compiling stages to take advantage. All this is speculation on my part. I've got no specific knowledge to back it up.

Assuming that the area of concern is archiving video (and sending it back to clients) then processing power plays a much less significant role on the server than memory does. This leaves the server's role as receiving data from the network and pushing it to disk. Nothing about this is CPU intensive on modern hardware.

Exceptions to this are when video streams are being transcoded from one format or quality level to another. Some servers that do this have dedicated hardware of some form or another and don't rely on the server's primary CPU(s) to do the heavy lifting.

Another area where processing power is actually relevant is when watermarking video. This can add some minor processing overhead for each camera.

I would take CPU over RAM, but in saying that, both must be present to make your CCTV system effective.

I would spend more on CPU as it is harder to replace than adding more RAM. You can always add RAM as needed.

Prices of RAM, like other components, depend on market forces and mother nature, as we learned with drive storage due to the floods in Thailand. You can also run out of CPU sockets and RAM sockets. So there are a lot of variables, but at the end of the day, if you cannot replace the CPU but your system supports a RAM upgrade, then it's a no-brainer.

Matt, I meant to say it's easier/cheaper to change RAM than CPUs.

However, you are absolutely right about the RAM pricing. Makes me wonder how often a typical RAM technology change happens (e.g., from DDR to DDR2)!

Pushkar, I know you were comparing to CPUs... my point was, while CPUs (and most other components) continue to fall in price as newer technology comes out, RAM often goes back up as it nears obsolesence. So if you're expecting to upgrade your RAM in a year or so, fine... but if you're thinking you might upgrade it 5 years down the road, don't expect it to be as cost-effective.

Granted, things aren't as bad today, but I remember once, many years ago, a friend asked me to upgrade the RAM in her computer (I think it was a 486, when Pentium III was the current technology)... if memory serves, just doubling her RAM to the max her system would support would have cost more than simply upgrading the entire thing (and ending up with four times the RAM of her old setup).

I purposely didn't read the prior posts so I wouldn't be biased one way or another when I answered this for myself.

I would say that RAM is the most important hardware factor to upgrade, given that you have a 64-bit system that can utilize the RAM. I see RAM as being the constraint much more often than I see the processor being bound up with processes. An important aspect to consider with the RAM, however, is whether you are using ECC or non-ECC RAM. For example, when I build AutoCAD workstations, using ECC RAM is a must, because it helps avoid those inexplicable errors and lockups that can't be explained any other way. I haven't had as much hands-on experience with VMSes, but I imagine they are extremely RAM-intensive in how they display all the video feeds, depending on the amount of desktop real estate (resolution) used.

My take was that we were discussing VMS servers, not clients. In my humble opinion, if your server has a monitor attached, you are doing it wrong.

For clients, I would think focus shifts primarily towards graphics cards, followed by CPU, then RAM.

Non-ECC RAM still creates VERY stable systems (I've got non-ECC servers with uptime measured in years). However, for larger systems, I usually adopt a "Better Safe Than Sorry" approach. A server crashing (especially one that's not actively monitored) can cause missed evidence, and embarrassment for the integrator. So once I'm beyond a dozen cameras I usually spring for ECC RAM (it's not THAT expensive).

I realize this was an A/B question, but, IMO, an option C plays into this as well: HDD interface/speeds...

HDD speed can be a big factor too (I think I already mentioned that 'way above).

I'd reiterate here, in deciding which of the THREE factors is the first to upgrade, it's important to have at least a working idea of how your specific software works and how it makes use of system resources FIRST.

Throwing a bunch of RAM at a machine won't help at all if your software already isn't using much of the RAM you've got.

Similarly, if it uses lots of RAM but not much CPU, then sticking in a new CPU probably won't be a very cost-effective upgrade - sure, it might make the system a little faster overall, but you likely won't get the kind of improvement you're looking for.

I would say RAM and processor speed are both important, but if I had to choose one, I would upgrade the RAM. It seems to be at a greater risk of being maxed out or pushed to its limits than a CPU would be. At the same time, if you are not running a 64-bit OS, then too much RAM is pointless.

To throw in one other element that I don't think has been cited yet, where motion detection is performed likely impacts the choice. To the best of my understanding, motion detection is typically more CPU intensive than RAM. Agree/disagree?

John, I'd think that would be somewhat dependent on the specific system as well. I know I've seen the Avigilongelists go on about how server-side MD is SO much more processor-intensive than Avigilon's camera-side method... however, my Vigil installs have used server-side MD on 99.99% of channels (there are maybe 20 channels between many dozens of sites that use constant recording; the rest are all MD) and I've never noticed anything that I would consider to be "heavy" CPU usage beyond what one might expect otherwise.

I've also been told by a couple of 3xLOGIC engineers, specifically when I asked about this, that their MD takes very little additional CPU. I haven't had the chance to actually test this myself, but as I say, I've never seen any systems that seemed to be particularly loaded down from MD. When I have found a Vigil running high on the CPU meter, it's always been traceable to something else.

That said, I wouldn't be surprised if other systems have less efficient MD - obviously there's no one "standard" for implementing it, and others may not do it as well or code it as cleanly.

If I get the chance, I'll try some comparisons on my bench machine and see what I come up with. Nevertheless, my point stands: knowing where to upgrade first should rely on knowing how your specific system uses resources.

Matt, I suspect there's a wide range of VMS motion based performance / load. Even within a system, sometimes there are options. A while back we discussed how Milestone offered detection on all frames vs detection on key frames only, which resulted in significant load differences at the tradeoff of less frequent checks for the key frame only version.

Agreed, John, and that's exactly my point: if you have to prioritize upgrades, it helps to know which will aid your SPECIFIC software. And that goes for ANY kind of software, not just VMSes and NVRs.

If I am using a VMS and setting up motion detection in it, does it know that camera X has this capability and that it should use camera X's API and offload this process to the camera?

If the install is small to medium, perhaps going virtual may solve this whole debate. You can allocate cpu and ram as you need it.

Paresh, it will depend on the VMS. I know Vigil doesn't use camera-based MD at all. Others like Exacq don't have their own and will ONLY work with camera-based MD, which could be a drawback with cameras that don't do it. I'm not aware of any that will do either/or, although I'm sure someone has thought of it.

There is not really a correct answer for this. In every case it completely depends on the system and setup. It is determined by what you already have hardware-wise, by the VMS system you have installed, and by the specific way you use it. There is always a bottleneck in a system somewhere, and by addressing that specific target you will get the most bang for the buck and just move the bottleneck to the next system resource. Adding RAM that will not be used effectively is just as wasteful as upgrading a CPU you will not need. In a world of future unknowns, I would generally agree with Paresh in that the CPU is easy to choose now and add RAM later, but if your system will never use it, what is the point?

However, there is NO way to know the correct answer without doing the homework for your specific scenario. Determining the RAM, CPU, and I/O requirements for any system design/upgrade is a long, multi-page process. Being a VMS does not change that.

More is never a bad thing (RAM, CPU, SSD/DISKs, NICs, etc...) unless you are the one writing the check... :-)

I have been testing the performance of several VMSes on our XNVR server systems for a couple of years now, and it is very true that you have to include HDD and NIC performance as Corey mentioned. The OS is either Win7 or Server2008-R2 in my testing since most VMSes are not Linux ready...yet.

CPU is important to be able to handle the data traffic through the system, any analysis that is done on the server side, as well as preparing client streams from the old recordings (ie playback vs live view) if that is part of the mix. A lesser CPU will show its stress at a lighter overall load.

For VMSes that are built to be BOTH Server and Viewer on the same machine.... get BIG CPU and BIG Video RAM as these are the primary elements stressed doing client operations.

RAM is a mixed bag. For sure, one does not want to run a VMS with only 2GB RAM... 4GB is a minimum. ECC RAM actually slows performance down by a small amount, but you do get good reliability from it.

HDD performance is a key factor in all VMSes. At a minimum, RPM is an indicator of potential performance, with 5400 RPM at the low end and 15K RPM at the high end... 10MB/s to over 70MB/s depending on the VMS. The HDD interface speed also plays a role, with 6Gb/s SATA III offering an advantage. A good RAID controller card helps here as well.

NIC performance is also important, as you want to use 1Gb ones. The easiest thing is to have several NICs available that you can allocate to different subnets in order to divide the load across more than one hardware element. The teaming feature does not work as expected with VMSes and only seems to serve as a redundant data path in case of a failure, thus the reason I mentioned using multiple subnets.

Certain NIC hardware also cooperates with certain OS features to minimize the overhead involved with servicing the data movement. Both of these are very important if your long term storage is located on a NAS system someplace instead of in the Head box (the All in one server).

It is a balancing act for every integrator to be able to estimate what sort of hardware and OS resources will be needed for a particular installation, without having to design a CYA system. Each VMS stresses different ones.

Having a good definition of the installation camera loads is key to be able to pick the system pieces you spend your $$ on. More of the 'right stuff' is what you are looking for.

Mike, great feedback! Thanks.

For hard drives, how do you know how many RPMs is enough? In other words, do RPMs map to throughput? I.e., does a 5400 RPM hard drive support X Mb/s throughput while a 7200 RPM one supports Y Mb/s?

Btw, I wouldn't hold your breath about most VMSes not being Linux yet :) Unless there is some new generation of VMS competitors, most of the incumbents are deeply locked into Windows.

A review of Hitachi Ultrastar drive specs shows only a loose correlation between throughput and RPM. Seek time is much more closely related to RPM. Seek latency is a key factor for databases, but not very important for VMS purposes. I would think the added expense of 10K or 15K drives would not be worth the expenditure.

6Gb/s SATA III is also overrated (few spinning drives will saturate a 3Gb/s SATA II connection). Once SSDs become practical for video storage, that should change. However, SATA III doesn't seem to really add cost to the drive, so there is no harm choosing it. SATA III support for your RAID controller does seem like a good idea, though.

As for Linux support. The lack of it drives me nuts.

One thing to note is that SATA is generally a point-to-point connection. So either a SATA drive can saturate a link, or it can't. Other drives do not matter really as their data does not share the same link. (SATA expanders and/or RAID enclosures/controllers are not included in this though as they have differing design issues.)

RPMs don't map directly... Sort of indirectly. The first 1/3rd of the HD storage has about 200% better sequential performance than the last 1/3rd. The speed and quantity of the HDs required for a VMS depends on the storage allocation technology being used. VMS systems vary quite a bit here. In most cases a single 1TB 7200RPM HD can support at least 5-8 2MP HD 5-15FPS cameras under Windows 7 without much trouble and without using any sort of fancy storage technology. With average tuning this usually has 25-40Mbps throughput.
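That 25-40Mbps figure is easy to sanity-check with back-of-envelope arithmetic. A sketch, where the per-camera bitrate and the 25% headroom factor are assumptions, not vendor specs:

```python
# Rough budget check: can one 7200RPM drive keep up with N cameras?
# All numbers here are illustrative assumptions.

def aggregate_mbps(cameras, mbps_per_camera):
    """Total recording bitrate in megabits per second."""
    return cameras * mbps_per_camera

# Assume ~4Mbps per 2MP H.264 stream at a moderate frame rate
per_cam = 4.0
total = aggregate_mbps(8, per_cam)        # 32 Mbps for 8 cameras

# A 7200RPM drive can sustain well over 50MB/s (400Mbps) sequentially,
# but a VMS workload is not purely sequential, so leave generous headroom.
disk_budget_mbps = 400 * 0.25             # only count 25% as usable
print(total, total <= disk_budget_mbps)   # 32.0 True
```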

If you watch the Windows 7 PerfMon and the disk queue for that disk is not getting above 1.0, then you do not generally need more RPMs, or more drive spindles to handle the I/O.

The keys to storage are IOPS (I/Os Per Second) which is the number of disk transactions (seeks, read, writes) that can happen on the disk and Throughput. Cameras will not be able to outrun modern disk throughput generally. It is IOPS that is usually the limit and that is both the seek time of the disk and the indirect relation to RPM. A 15K RPM drive can handle ~2X the IOPS that a 5400RPM drive can, but the cost of a 15K drive is more than 2X higher. This is not apples-to-apples though as 5400RPM drives are usually desktop/consumer grade, while 15K drives are Enterprise grade and have more features than just RPM improvements.
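The ~2X IOPS difference can be approximated from drive mechanics alone: random IOPS is roughly the inverse of average seek time plus average rotational latency (half a revolution). A sketch using typical published seek times as assumed inputs:

```python
# Estimating random IOPS from drive mechanics.
# IOPS ~= 1 / (avg seek + avg rotational latency); the seek figures
# below are typical published values, not measurements.

def est_iops(rpm, avg_seek_ms):
    rot_latency_ms = 0.5 * (60_000 / rpm)   # half a revolution, on average
    return 1000 / (avg_seek_ms + rot_latency_ms)

iops_5400 = est_iops(5400, 9.0)     # ~69 IOPS for a desktop-class drive
iops_15k = est_iops(15_000, 3.5)    # ~182 IOPS for an enterprise drive
print(round(iops_5400), round(iops_15k))   # 69 182
```

The ratio works out to roughly 2.5X, which lines up with the "about 2X" rule of thumb once you allow for the extra enterprise features bundled into 15K drives.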

This focus on IOPS reduction is mostly responsible for the various efforts at VMS storage models that have been discussed here several times before.

Great detail Corey. My lab findings support your observations.

You mentioned Disk Queue and I have not quite decided what metric to assign to it. Yes, lower is better, but VMS application engineers tell me they have a rule of thumb that is 2x the spindle count on a RAID set.

I think that really means that if you look at the Disk Queue average, that it should not be above that rule count.

Personally, I lean towards a 1 to 1 for a RAID array (I use RAID5 as my reference)... so if I have 4 drives in the array, I do not want to see a queue depth average above 4.

What have you observed?

Yes. The details are a little more complicated if you get into what is really going on. My point was that if the numbers are below 1, disk performance is not an issue by itself and most likely you can invest in something else. I was trying to distinguish the relative importance of the original question of "RAM or CPU" and the answer is not that simple. If the queue is between 1-2 or higher it needs a good bit more understanding. Lots of questions suddenly become much more important.

When are you measuring (day or night and in your world which is more data intensive)?

What applications are running during the measurement? Video reviews, burning DVDs for export, analytics searching, maintenance/backups, etc... The non-recording apps, which are sequential in nature, can change the I/O workload from a predictable sequential load to a much more random I/O workload and reduce overall performance.

What kind of local storage/SAN or NAS? How does the storage system handle its write cache, and how much is there to adapt to changing workloads? How does the storage scale as workloads increase? (Some systems attempt to autotune and move hotspot disks/file blocks around.)

With most RAID setups the 2X number is not a bad place to target from 10K feet, but it is usually 2X the number of DATA drives. RAID5 of 4 drives has only 3 DATA + 1 PARITY, so that would be 6, not 8. RAID6 has DATA + 2 PARITY and RAID1/10 has 50% PARITY. Also, after it goes above 1, the disk transaction time becomes a key parameter to watch. Once that number gets above the average seek time of the drives installed (usually between ~3-12msec for hard drives depending on RPM and SAS/SATA technology), you are basically saturated and will have a difficult time sustaining any more workload on most storage systems. Also, in general, RAID performs better when the number of DATA disks are a power of 2 (2, 4, 8, etc..) or at least evenly numbered. There are exceptions to these, but it requires very specific knowledge of the exact storage system design details to delve much further.
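The parity accounting above can be captured in a tiny helper, following the "2X the number of DATA drives" rule of thumb (the function names are hypothetical):

```python
# Data-drive count and a queue-depth target for common RAID levels,
# using the "2x data drives" rule of thumb from the discussion above.

def data_drives(total, level):
    if level == "raid5":
        return total - 1        # one drive's worth of parity
    if level == "raid6":
        return total - 2        # two drives' worth of parity
    if level in ("raid1", "raid10"):
        return total // 2       # mirrored: half the spindles hold data
    raise ValueError(f"unknown RAID level: {level}")

def queue_target(total, level):
    return 2 * data_drives(total, level)

print(queue_target(4, "raid5"))    # 6  (3 data + 1 parity)
print(queue_target(4, "raid6"))    # 4  (2 data + 2 parity)
print(queue_target(4, "raid10"))   # 4  (2 mirrored pairs)
```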

There is no one parameter that is the key, nor will there be. You can't tell someone that asks what the best truck to buy is until you know what they want to do with it. They might not even need a truck, but really a 4WD mini-van, or a hybrid EV... You need to know the details of the application to be able to offer real guidance.

Sorry if this got a bit long, but that was my point in the beginning.


Interesting stuff. A few more questions.

Are you saying that going from a RAID 5 with 9 disks (8 DATA + 1 Parity), to a RAID 5 with 10 disks would cause a decrease in performance? That's not my understanding, nor do I see it in the documentation for any RAID controller I've looked at. I have heard of the "power of 2" rule before, but my understanding was that an additional spindle nearly always provides more benefit.

Also, when evaluating queue depth, what is your recommendation with regard to load conditions during the test? For example, upon experimenting with a few Exacq servers, it appears searching archived video will take disk utilization to near 100%. This is regardless of the disk system's IOPS performance, or the load when it's only recording. Upon completion of the search, I/O loads return to normal, with no apparent interruption of recording or live view functions. These servers all seem to be performing well. I am guessing Exacq is somehow throttling the reads to ensure recording is not impacted. I was surprised to see 100% utilization during searches, but it is happening no matter how much "headroom" the disk system has (though those with more certainly do complete searches faster - but all ramp up to 100%).


Here is the queue depth answer.

Disk priority for I/O can be set by applications with a file system API call when the program opens each file handle.

It is worth noting that the Windows Disk % utilization is sort of a bogus number. It is really just the queue depth x 100. A single threaded application running around searching and reading things one at a time is only going to create a queue depth of ~1. This would = 100% peak, plus the underlying workload for record/playback. Most I/O systems can handle that just fine. This is sort of an implied throttle.

I can generate a sustained load of about 300% by running a defrag, a chkdsk, and a file copy simultaneously on my laptop. Each one can effect about 100% on its own, since its read operations are handled one at a time in sequence, so each queue depth hovers around 1...
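To make the queue-depth-times-100 relationship concrete, a trivial sketch:

```python
# Windows' "% Disk Time" counter is effectively the average disk queue
# depth multiplied by 100, which is why it can exceed 100%.

def pct_disk_time(avg_queue_depth):
    return avg_queue_depth * 100

print(pct_disk_time(1.0))   # 100 -> one outstanding I/O on average
print(pct_disk_time(3.0))   # 300 -> e.g. defrag + chkdsk + copy at once
```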

Exacq could be doing it on purpose or by happenstance. I do not know which in this case. A little experimenting should show the answer though...

Windows will let you turn off the ability to prioritize on a specific volume. You can try it and see whether they are asking Windows to maintain control, or not.

Problem with I/O priority management

If you are curious...


As a generalized question, more RAM is mostly desired: for the OS, remote control and health monitoring functions, and large database indexing. (Even if the VMS software is 32-bit, if the OS is 64-bit and you have 8GB RAM, the VMS can utilize that extra RAM for database indexing. This comes from field experience.)

If you throw server side motion detection into it, it has to be CPU priority then because it's the CPU that gets utilized the most.

Quad and six core CPUs are the standard for servers these days. RAM is cheap. I don't build a server with less than a quad core 2.0 GHz processor and no less than 4GB of RAM. If I had to choose one to upgrade it would be the processor.

More importantly if I had a chance to upgrade anything on my server it would be the hard drives/RAID Controller. Move up from SATA to SAS or at least Near Line SAS and have a nice high performance RAID card in there.

About throughput, the selected RAID system affects performance. Also, for workstations RAM would be better, although I've seen that doing playback requires more CPU.


Two parts here… First your answer..

There is no hard and fast rule that will always predict the "best" RAID. YMWV (Your Mileage Will Vary)... RAID5/6 with parity is a controller specific/vendor specific tunable system. Nearly always, "it depends" on both how you have it set up and how you use it.

For example, a full stripe write of say 1MB chunks (8D+1P RAID5 -> 128KB x 8 = 1MB ) allows the controller to write 128KB to each disk (assuming this is the segment size or a multiple of it). This means that the parity ASIC in the controller does not need to read any disk blocks, only calculate a new parity value on the fly and just write away. That means you have 8X the sequential throughput of a single disk drive for both large sequential reads and large block writes. A RAID 10 array with 10 disks has only 5 unique storage locations and so typically has a peak of <5X for sequential writes as both disk have to keep up. Reads can be either 5X or 10X or somewhere in between depending.
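The stripe arithmetic in that example can be written out explicitly. A sketch with the same assumed geometry (8 data + 1 parity, 128KB segments); the per-disk sequential rate is an illustrative number:

```python
# Full-stripe write arithmetic for an 8D+1P RAID5 with 128KB segments.
# Writing in full-stripe multiples lets the controller compute parity on
# the fly (no read-modify-write), so sequential throughput scales
# roughly with the number of data drives.

SEGMENT_KB = 128
DATA_DRIVES = 8

stripe_kb = SEGMENT_KB * DATA_DRIVES    # one full stripe of data
print(stripe_kb)                        # 1024 (i.e., 1MB)

single_disk_mbs = 120                   # assumed sequential MB/s per disk
peak_mbs = single_disk_mbs * DATA_DRIVES
print(peak_mbs)                         # 960 MB/s theoretical peak
```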

Some RAID vendors even license bottlenecks... The storage system is capable of 3X performance, but they sell it for a lower price with a governor installed providing 1X but still good numbers. Installing a new license key, uncorks it and the system performs MUCH better...

Now the philosophy…

We could go over a bunch of examples, but my purpose is more to illustrate how to fish than just put fish in your basket if possible. Kind of like memorizing a multiplication table, instead of learning how to multiply... The table is very fast to recall, but severely limits your options. It also assumes you memorized every possible scenario. I can’t predict the future that well… The skill of multiplication is much more valuable.

By learning to watch some very basic Windows Perfmon counters (Linux has them as well), you can nearly always make a good choice about the effective use of money and technology to solve or avoid performance problems.

Remember that Windows7-x64 really is Server 2008R2 under the covers. Download one of the Microsoft Server tuning guides. Performance Tuning Guidelines for previous versions of Windows Server - Windows 10 hardware dev | Microsoft Docs They are surprisingly readable. You are really designing a server with a VMS/DVR…. Just on a smaller budget…

Spend an hour or two with a few cameras recording to a disk and play with the camera encoding parameters. Crank up the frame rate and the resolution and turn down the compression. Switch to MJPEG even and run it at night during the snow with flood lamps on or in the dark with the gain WAY up. Have the cameras watch an action movie, a TV channel of noise/static/snow, or point them at a spinning fan which fills the frame…

Just the opposite of what you would normally do. :-)

Use Perfmon to watch the NIC throughput, the disk queue, the disk latency, disk transaction rate, the CPU loads, Memory, etc.. Learn how what you change affects the overall system workload. Try a single disk instead of an array. Try turning on and off the RAID write cache to see what happens… Maybe download one of the many free benchmark utilities and see how your tuning affects sequential versus random I/O numbers.

Modern Nearline SAS or SATA RAID5 volumes are HUGE. What do you do with a 16TB or 24TB volume? How long does it take to rebuild from a failed disk? Days? What if a bad block is found during the 20TB 3-day rebuild (it happens fairly often)?
In this sort of scenario, maybe RAID6 is better? Test it and see… While a RAID volume is rebuilding, how are the Perfmon numbers compared to before? Can you even keep the cameras running? Is a 4-drive RAID6 better than a 4-drive RAID10? How many IOPS does each camera add to the disk system? How many can your design handle with the disk transaction times staying below 5-10msec?
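A floor on rebuild time is simple to estimate: the replacement drive must be rewritten end to end, so the rebuild takes at least capacity divided by sustained rebuild rate. A sketch with assumed numbers (real rebuilds run slower while the array keeps serving cameras):

```python
# Lower bound on RAID rebuild time: capacity / sustained rebuild rate.
# Both inputs below are assumptions for illustration.

def rebuild_days(capacity_tb, rebuild_mb_per_s):
    capacity_mb = capacity_tb * 1_000_000    # decimal TB -> MB
    seconds = capacity_mb / rebuild_mb_per_s
    return seconds / 86_400                  # seconds per day

# A 6TB drive rebuilt at 50MB/s (typical while the array stays in service):
print(round(rebuild_days(6, 50), 1))   # 1.4 (days), before any slowdowns
```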

Just tinker with it for even a ½ a day… This sort of time and information will pay you back immensely with your better systems design and overall understanding.

Very few real performance problems exist today that cannot be solved or avoided quite readily. Nearly always, the problem lies in the design choices. Sometimes there is a cost to resolve it (a good design is not free). Learn to identify the root cause of the problem, and this will greatly reduce the waste of your time and/or money regardless of the source...

Listen and learn from the vendors, but remember they are selling something, so pay attention and do not blindly follow them. Put more trust in independent studies/reviews than any single vendor. (That is why you are here, right?) A solution has to work for YOU and your customers, or it doesn’t matter how cool/fab/fast/etc… it is to someone else… Ultimately, we are all paid to solve a problem for someone else. They barter their time to earn money, for our time to solve their problem with a solution.

No matter how much someone knows about this stuff, they can’t tell you in advance whether you need more CPU cores, or fewer faster ones, or more RAM or more disks, or RAID Cache or NICs, or switches, etc.. without measuring what you have and how you use it first. But, with a little time spent learning this stuff, you will be able to do so while you build/install/support the systems you put together. You will learn that the A+B+C combination needs XYZ solution and more importantly, you will know why and how it should change for the next one.

(BTW, this works just as well for playback issues as well on the workstation side. Same song, different verse...)

Sorry for the long post, but you would be surprised how often this exact type of problem comes up...

"No matter how much someone knows about this stuff, they can’t tell you in advance whether you need more CPU cores, or fewer faster ones, or more RAM or more disks, or RAID Cache or NICs, or switches, etc.. without measuring what you have and how you use it first."

Bravo, Corey! Exactly my point as well, but much more succinct!

Hah! I have never been accused of being succinct before!


Well, we have thoroughly hijacked the original question. (I blame Brian). I've found it thoroughly educational.

The key takeaway for me has been further investigation into the system profiling and tuning tools available.

John, It might be worth doing a follow up IPVM article that teaches users how to identify where the bottleneck is in an underperforming system.

James, this has been an excellent educational thread.

I think we should follow Corey's advice and spend some time tinkering with different settings and then share our results of what patterns / issues / solutions we found.

That was a great post, Corey!

Thank you Luis.