John,
At AxxonSoft we do use the OpenVINO toolkit. Our reasons:
1) It is not easy to convince a partner to buy expensive NVIDIA Tesla cards, which are intended for server platforms. At the same time, NVIDIA GeForce cards are not well suited to running 24/7 in a server. As a result, partners look for an alternative solution: less expensive, but stable.
2) Tesla cards are best used via the NVIDIA TensorRT package, but there is no Windows version of TensorRT, and Windows is the main choice of our customers.
3) OpenVINO serves as a hardware abstraction layer for us. It can target the CPU, Intel GPUs (Intel HD Graphics), Movidius devices, and FPGA cards. Currently Intel GPUs and Movidius (based on Myriad 2) cannot compete with a good NVIDIA card, but supporting OpenVINO makes us ready for the next generation of Intel devices, which (I hope) are coming.
4) On the CPU, OpenVINO is much more efficient than Caffe, which we currently use for inference on NVIDIA GPUs, and it can fully utilize multiprocessor servers. So in some projects OpenVINO may be a good alternative to a graphics card.
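The hardware-abstraction point can be sketched as a device-preference fallback: the same network is loaded onto whichever backend the machine actually has. This is an illustrative helper, not the OpenVINO API itself; the device names match OpenVINO's plugin names, but the preference order and function are my own assumptions.

```python
# Illustrative sketch only (not the OpenVINO API): OpenVINO loads the same
# network onto different device plugins by name ("FPGA", "MYRIAD", "GPU",
# "CPU"). A common pattern is to pick the first available device from a
# preference list and fall back to the CPU.

PREFERRED_DEVICES = ["FPGA", "MYRIAD", "GPU", "CPU"]  # hypothetical fastest-first order

def pick_device(available, preferred=PREFERRED_DEVICES):
    """Return the first preferred device present on this machine; default to CPU."""
    for device in preferred:
        if device in available:
            return device
    return "CPU"

# Example: a server with only an Intel iGPU and a CPU
print(pick_device(["GPU", "CPU"]))  # -> GPU
```

In the real toolkit the chosen name is then passed when loading the network onto the plugin, which is what lets us swap hardware without touching the model code.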
We are working on an upcoming OpenVINO post and have already begun our own testing of OpenVINO with the Movidius compute stick.
I am looking forward to the results, especially if you managed to get a Myriad X-based device.