Subscriber Discussion

Should VMS Video Storage Be Defragmented?

Bohan Huang
Apr 13, 2013

Windows Vista/7/8 defrags disks automagically every week by default.

Should this be turned off on video storage drives to avoid IO contention, and gaps in stored video, during that IO-heavy defragmentation process?

Marc Pichaud
Apr 14, 2013

Hi

My experience is: the disk heads already struggle to keep up with normal read/write work on a typical Windows recording system (10, 12, or 16 cameras, often on single 7200 RPM disks with no RAID controller). Adding a defrag on top of that kills performance, much like when a RAID 5/6/50 array loses a disk and starts a parity rebuild while still trying to keep writing. It also adds latency and shortens disk MTBF (the drive heats up). Defragmenting really only works for a demo on a laptop, to boost recording and multi-camera playback synchronization, and even then you generally stop recording while you defrag.

Matt Ion
Apr 14, 2013

Besides the above, I'd say it depends on how the VMS handles storage as well. I remember the old Capture DVRs we worked with: when you set them up, they would pre-allocate their storage space as "block" files on the drive and put the video inside those... so they were never creating, writing, and deleting files directly on the file system. Something like this would never need to be defragged.
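
As a rough illustration of that pre-allocation approach (the file names, sizes, and helper below are hypothetical, not how Capture actually worked), the recorder creates its block files once at setup and afterwards only overwrites data inside them, so the file system layout never changes and there is nothing left to fragment:

```python
import os

BLOCK_SIZE = 200 * 1024 * 1024      # hypothetical 200 MB "block" files
BLOCK_COUNT = 4                     # tiny pool, just for illustration
STORAGE_DIR = "video_blocks"        # hypothetical storage folder

def preallocate_blocks():
    """Create the fixed-size block files once, at setup time."""
    os.makedirs(STORAGE_DIR, exist_ok=True)
    for i in range(BLOCK_COUNT):
        path = os.path.join(STORAGE_DIR, f"block_{i:04d}.bin")
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.truncate(BLOCK_SIZE)      # reserve the full size up front

def write_chunk(block_index, offset, data):
    """Overwrite video data inside an existing block; no files are ever
    created or deleted during recording, so nothing new gets fragmented."""
    path = os.path.join(STORAGE_DIR, f"block_{block_index:04d}.bin")
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(data)

preallocate_blocks()
write_chunk(0, 0, b"\x00" * 4096)   # e.g. one encoded video chunk
```

A real recorder would also keep an index of which camera and time range lives at which offset. Note that simply extending a file doesn't guarantee the blocks themselves land contiguously on disk; it only stops fragmentation from getting worse once they're allocated.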

That aside, I don't think I've ever had to defrag a Vigil's data drives, and never found it to be an issue. The disk map is solid red on these things, too.

Marc Pichaud
Apr 14, 2013

Yes, I imagine we are all talking about HD and Full HD, or 3 and 5 MP systems. DVRs based on 4CIF, and the first 4CIF NVRs, didn't have these issues, since the read/write throughput was so low...

Steven Killalea
Apr 15, 2013

Is there not a way to prevent hard drive fragmentation entirely? As Matt Ion says, storage can be pre-allocated in blocks. I have seen this before as well and it seems to make sense. I suggest that all video streams be written contiguously, track by track, on the hard drive. There would not be multiple folders for each camera; instead, each video stream would be written to the disk in one continuous file, with metadata inside each camera's stream to identify it. There would be no hunting for free space to write video data the way Windows does. When track one is full, it moves to track two, then three, and so on. Then, when the hard drive is full, it starts over by deleting track one and writing to it again in the same manner.
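
As a rough sketch of that idea (the format and names below are hypothetical, not any real product's layout), a recorder can pre-allocate one big file and treat it as a ring buffer: every frame is written with a small metadata header identifying its camera and timestamp, the write position only ever marches forward, and when the end is reached it wraps around and overwrites the oldest video - the software equivalent of going back to track one.

```python
import struct, time

RING_FILE = "video_ring.bin"        # hypothetical single recording file
RING_SIZE = 64 * 1024 * 1024        # small ring, just for illustration
HEADER = struct.Struct("<IdI")      # camera id, timestamp, payload length

class RingRecorder:
    """Circular recorder sketch: frames from all cameras are written in
    arrival order to one pre-allocated file; when the end is reached,
    writing wraps around and overwrites the oldest data."""
    def __init__(self, path=RING_FILE, size=RING_SIZE):
        self.size = size
        self.pos = 0
        self.f = open(path, "w+b")
        self.f.truncate(size)       # allocate once, never grow or shrink

    def write_frame(self, camera_id, payload):
        record = HEADER.pack(camera_id, time.time(), len(payload)) + payload
        if self.pos + len(record) > self.size:
            self.pos = 0            # wrap: start overwriting the oldest video
        self.f.seek(self.pos)
        self.f.write(record)
        self.pos += len(record)

rec = RingRecorder()
rec.write_frame(camera_id=1, payload=b"\x00" * 4096)   # one encoded frame
```

On a normal file system this only approximates "track by track" writing, since the OS still decides where the file physically sits; what it does guarantee is that, after the initial allocation, the recorder never asks the file system for new space, which is the part that causes fragmentation.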

Matt Ion
Apr 15, 2013

It would seem like a good idea, Steven... the problem is, your DVR/NVR/VMS software would then have to write directly to the disk rather than through the file system - in essence, you'd almost have to create your own custom file system (vs. NTFS, FAT32, ext3/ext4, and so on). You'd then have to format your video drives with that FS, which would make them inaccessible to any OS that doesn't support it, meaning it would be nearly impossible to back up or retrieve video on another machine without that FS support.

I believe some standalone DVRs do something like this, but they're either running a proprietary embedded OS or some form of Linux, which, to my understanding, is a LOT easier to add a custom FS to than Windows is.

There actually was a filesystem that operated somewhat like you describe, always trying to keep data in contiguous blocks whenever possible... defragmenting was as simple as moving files to another drive and back again (or just leaving them there, for that matter). Alas, HPFS, along with the OS/2 it was part of, was a casualty long ago of Microsoft's antitrust practices and IBM's indifference. </wistful>

In the case of the Capture software, it's Windows-based, so it works on any Windows-supported FS... instead of writing small video files as it went, though, it simply created its "bank" files during initial setup and then migrated the video data in and out of those. As with anything, there were advantages and disadvantages to the concept (for instance, there was no way to access the video directly on the disk).

Michael Peele
Apr 15, 2013

The built-in defragmenter isn't very aggressive about defragmenting large files in particular, which is what most VMSes produce. Fragments aren't inherently bad, particularly in large files, since each extra fragment costs only roughly one extra seek. If your VMS writes 200MB files and each is in 2 fragments, that's not bad.

If you see issues with the performance, then try turning it off. Or on.

I bet that different VMS will perform differently...

To help the OS, file system, and defragger do their jobs, try to keep at least 10%, preferably 15%, of the disk as free space. You'll get a lot fewer fragments.
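
As a quick way to keep an eye on that margin (the path and thresholds below are just the rule of thumb above, not anything a particular VMS requires), something like this works:

```python
import shutil

def check_free_space(path, minimum=0.10, preferred=0.15):
    """Warn when a video volume drops below the suggested free-space margin."""
    usage = shutil.disk_usage(path)
    free_ratio = usage.free / usage.total
    if free_ratio < minimum:
        print(f"{path}: only {free_ratio:.0%} free - expect heavy fragmentation")
    elif free_ratio < preferred:
        print(f"{path}: {free_ratio:.0%} free - below the preferred 15% margin")
    else:
        print(f"{path}: {free_ratio:.0%} free - OK")

check_free_space("D:\\")   # point this at the video storage volume
```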

Murat Altu
Apr 17, 2013
AxxonSoft
Steven, you are so right! We have developed our own file system for AxxonNext to prevent fragmentation. We record sector by sector, track by track, to the disk as if it were a tape. This increases not only performance, but also the hard disk's lifespan.