I am knee-deep in storage internals and work for a storage and software company (EMC), so I will attempt to be user-centric and unbiased in this response. EMC specifically has a large investment in labs and people that perform testing across many VMSes and related security components, and I would like to comment on the details we are observing.
To summarize the long-winded discussion below: there is a place in surveillance systems for DAS, NAS, iSCSI, flash arrays, and even object storage, each for different functions and implementations. As an example of the myriad of options: Axis and Samsung both support NAS recording on their cameras, Bosch supports iSCSI, some camera companies support object storage, and VMS vendors have similar options.
There are unique capabilities to network-based storage systems: abstracting the disk location provides substantial flexibility to the application (VMS, evidence management, case management, analytics, etc.). One VMS provider wrote an article summarizing their specific advantages with network storage: "As Video Expands on the Network, Evaluating Proper Storage Options is Crucial". The nice thing is that this abstraction lets the storage/software provider implement new technologies in the storage subsystem while keeping the interface to the application standardized (SMB, NFS, SWIFT, HDFS, etc.).
For larger systems, typically above 100 cameras, NAS abstractions can substantially simplify the system design, provided the network storage subsystem can handle the bandwidth demands of multiple recording instances. I have seen simple NAS implementations handle even 100 cameras at 1080p@15fps without issues when using some of the more mature VMS software providers. Larger-scale systems may require scale-out technologies, such as those an EMC-specific document outlines here: Scale Out NAS for Video Surveillance. We also see substantial capex and opex savings when moving in this direction, which usually depends on the product portfolio's capabilities (aka best of breed) to be most efficient via various technologies.
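To put the 100-camera claim in perspective, here is a back-of-envelope aggregate write load. The ~4 Mbps per-camera figure is my assumption for H.264 at 1080p@15fps; actual bitrates vary widely with codec, scene activity, and quality settings.

```python
# Back-of-envelope aggregate bandwidth for a 100-camera system.
# MBPS_PER_CAMERA is an assumed H.264 1080p@15fps bitrate, not a spec value.
CAMERAS = 100
MBPS_PER_CAMERA = 4

aggregate_mbps = CAMERAS * MBPS_PER_CAMERA
aggregate_mbytes_per_sec = aggregate_mbps / 8  # megabits -> megabytes

print(f"Aggregate write load: {aggregate_mbps} Mbps "
      f"(~{aggregate_mbytes_per_sec:.0f} MB/s)")
```

At roughly 400 Mbps (~50 MB/s) sustained sequential write, this is well within what a single competent NAS head can sustain, which matches what we observe in the lab.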
For smaller systems, I see much of the value of abstraction in giving a customer/integrator freedom to plug and play multiple components of a system; for example, using direct-to-NAS recording from the cameras. For the majority of customers I have seen in the SMB and even retail space, DAS is most common and will remain that way unless backend analytics is relevant and leveraging sensor data is part of the approach. Then I see more of an edge-to-core approach being needed, where data must be moved to the core for analysis (imagery, metadata, 3rd party data, etc.).
We see the raw bandwidth capability of a video surveillance system using network-based storage, compared to block, vary as a function of the VMS implementation and the ability of the per-camera recording/playback process to accommodate higher-latency IO operations. This is very important: the same vendor that can handle 700 Mbps on a DAS-based implementation without a hypervisor can only handle 120 Mbps when using NAS and a hypervisor-based implementation.
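The latency sensitivity above can be sketched with simple arithmetic: if a recording thread issues one synchronous WRITE() at a time, its throughput ceiling is io_size / latency. The IO size and latency figures below are illustrative assumptions, not measurements of any particular VMS.

```python
# Illustrative throughput ceiling for a single synchronous (one IO in
# flight) recording thread: throughput <= io_size / latency.
def sync_throughput_mbps(io_size_kb: float, latency_ms: float) -> float:
    """Max Mbps for one writer that waits for each WRITE() to complete."""
    ios_per_sec = 1000.0 / latency_ms
    return ios_per_sec * io_size_kb * 8 / 1000.0  # KB/s -> Mbps

# Assumed 256 KB writes: DAS-like 0.3 ms vs NAS-like 5 ms latency
print(sync_throughput_mbps(256, 0.3))  # ceiling collapses as latency grows
print(sync_throughput_mbps(256, 5.0))
```

With these assumed numbers the ceiling drops from several Gbps to roughly 410 Mbps per writer, and smaller IOs or additional hypervisor latency shrink it further, which is consistent with the kind of 700-to-120 Mbps drop we observe when the VMS does not pipeline its IO.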
The industry seems to be shifting toward what is enabling cost savings in many datacenters: server consolidation (6 VMS instances per 1RU server versus 1 VMS per 1RU), scale-out systems (reducing the need for small LUNs, defrag, and data migrations), and even hybrid cloud architectures. This is forcing many VMS providers to examine their software architectures to ensure they can handle higher-latency IO operations.
More specifically on latency: block-based access to a local storage target, whether SCSI through the motherboard or a SAS, FC, or FCoE connection, will be <<1 ms for a WRITE() command (perhaps 0.1 ms for local and 0.3 ms for SAN).
With ANY network storage system, the network itself contributes ~1 ms RTT alone, and multiple higher-level protocol interactions are needed to accommodate the same WRITE(), increasing the latency the application experiences to >1 ms (an average of 2-5 ms is good). The other aspect is how well the VMS's IO implementation deals with latency variance (standard deviation): perhaps a 5 ms average with outlier maximum latencies of 50 ms for a very small number of operations. The latter often has much to do with operations like MKDIR() and other file system attribute operations. Cloud-based storage and object interfaces (SWIFT, for instance) will require building applications that can handle even higher latency (20-50 ms average RTT without a CDN) and more variation, due to the use of the public internet for transport.
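One common way a recording path tolerates this variance is to decouple capture from storage with a bounded buffer, so a 50 ms outlier stalls only a background writer thread, not frame capture. This is a minimal sketch of that pattern, not any vendor's implementation; `write_fn` and the class name are hypothetical stand-ins for the real storage call.

```python
import queue
import threading

class BufferedRecorder:
    """Sketch: absorb storage latency spikes behind a bounded frame queue."""

    def __init__(self, write_fn, max_buffered_frames=512):
        self._q = queue.Queue(maxsize=max_buffered_frames)
        self._write_fn = write_fn  # stand-in for the real WRITE() path
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def submit(self, frame: bytes) -> bool:
        """Capture path: never blocks on storage; drops if buffer is full."""
        try:
            self._q.put_nowait(frame)
            return True
        except queue.Full:
            return False  # buffer should be sized for worst-case outliers

    def close(self):
        self._q.put(None)  # sentinel tells the worker to finish
        self._worker.join()

    def _drain(self):
        while True:
            frame = self._q.get()
            if frame is None:
                break
            self._write_fn(frame)  # may take 5 ms average, 50 ms outliers
```

Sizing the queue is the key design choice: it must cover bitrate times worst-case latency excursion, which is why VMS architectures built for DAS-class latency often struggle when pointed at network or cloud storage.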