Modern GPUs (and that includes Intel HD 2000+ integrated graphics - do not discount these, they are not toys anymore) have the following acceleration features:
1. Dedicated on-die ASIC-style circuitry to encode/decode H.264 (a.k.a. PureVideo, UVD, Intel Clear Video HD, ...), targeted at consumer media playback and conversion. These are usually useless for CCTV because they are normally designed to decode only a single or a few high-res streams, whereas in CCTV you need to decode 16+ streams for a grid view (and if you are only decoding a few streams, a CPU does the job, no worries).
2. GPGPU compute APIs like OpenCL and CUDA: these have a lot of potential for video analysis/processing, including advanced motion detection, object tracking, image enhancement, maybe LPR, etc. (see the sketch after this list). However, getting this working requires specialist development skills and $$$ to reimplement the algorithms in a highly parallel fashion that exploits the full potential of the GPU. This will eventually be an important differentiating factor for leading VMSes.
3. 3D APIs like OpenGL/Direct3D/Mantle: these can help with UI responsiveness, better-quality zoom, filtering, scaling and image enhancement - for an obvious example see Network Optix HD Witness.
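To make point 2 concrete, here is a minimal sketch (CUDA, purely illustrative - not taken from any shipping VMS) of the kind of embarrassingly parallel per-pixel work a GPU is good at: simple frame differencing as the first step of motion detection. Function and parameter names (frameDiffKernel, diffThreshold, etc.) are made up for the example; a real product would need much more on top (noise filtering, connected components, per-camera tuning), which is where the development $$$ go.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// One thread per pixel: compare the current frame's luma against the
// previous frame's and write a binary motion mask.
__global__ void frameDiffKernel(const uint8_t* prev, const uint8_t* curr,
                                uint8_t* motionMask, int numPixels,
                                uint8_t diffThreshold)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numPixels) return;

    int diff = abs((int)curr[idx] - (int)prev[idx]);
    motionMask[idx] = (diff > diffThreshold) ? 255 : 0;
}

// Host-side launch covering a whole frame. Buffers are assumed to already
// be in device memory (d_prev, d_curr, d_mask).
void detectMotion(const uint8_t* d_prev, const uint8_t* d_curr,
                  uint8_t* d_mask, int width, int height)
{
    int numPixels = width * height;
    int threads = 256;
    int blocks = (numPixels + threads - 1) / threads;
    frameDiffKernel<<<blocks, threads>>>(d_prev, d_curr, d_mask, numPixels, 25);
    cudaDeviceSynchronize();
}
```

Trivial as it looks, running this across 16+ HD streams at full frame rate is exactly the sort of load that swamps a CPU but barely registers on a GPU - the hard (and expensive) part is rewriting the smarter analytics algorithms in this style.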
I get really annoyed when people try to tell me how an nVidia GTX 580/560/680/670/660/780/770/760 or AMD R9 2xx or some other 150W+ monster 3D/compute GPU is somehow supposed to make current VMSes run 'better', when there is no scientific explanation for it at all.
If you want to future-proof a machine, get an nVidia GTX 750: under 60W, 4 display outputs, and lots of compute power for future apps when they become mainstream.