Does Anyone Do Recording With Each Camera Assigned to Its Own Drive(s)?

So let's say eight 1080p cameras each assigned to one of eight 2 TB drives. Talking real hard drives here, not SSD.

JBOD with 2 controllers.

Considering the way video is appended to files on a disk, I would think this would provide high performance. By using a dedicated drive per camera, you would largely eliminate the seek and rotational latency that naturally occurs when multiple streams are written/read concurrently on the same disk.


Pros:

  1. Excellent write performance
  2. Excellent synchronized read performance
  3. A single drive failure affects only a single camera
  4. A cold, chronological single-camera archive could be created by hot-swapping each drive when full


Cons:

  1. Footprint may be larger, depending on retention period
  2. Cost may be higher, depending on retention period
  3. Retention period is constrained by drive topology

Are people doing this?

Why or why not?

Which VMS's support granular mapping of cameras to drives like that?

Also, let's say your 8 cameras average 2 Mb/s each, or 16 Mb/s total. A typical SATA drive can write data at over 100 MB/s, even while having to read data back out for other operations. I don't think the HDD is really the bottleneck in this case.
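To put that argument in concrete terms, here's a quick back-of-the-envelope check (a sketch using the numbers above; the 100 MB/s figure is the typical sequential rate cited in this thread, not a measured value):

```python
# Back-of-the-envelope: aggregate camera write load vs. typical HDD throughput.
cameras = 8
stream_mbps = 2                     # megabits per second, per camera
total_mbps = cameras * stream_mbps  # 16 Mb/s aggregate

total_mb_per_s = total_mbps / 8     # convert megabits to megabytes: 2 MB/s
hdd_mb_per_s = 100                  # typical sequential write rate for a SATA HDD

utilization = total_mb_per_s / hdd_mb_per_s
print(f"Aggregate write load: {total_mb_per_s:.1f} MB/s")
print(f"Share of one drive's sequential bandwidth: {utilization:.0%}")  # 2%
```

By this math the write load alone uses a tiny fraction of one drive's sequential bandwidth, which is the basis of the "HDD isn't the bottleneck" claim. (The later posts explain why random reads change the picture.)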

BTW your Pro #4 above is essentially the concept of Veracity Coldstore.

Which VMS's support granular mapping of cameras to drives like that?

(old) ONSSI and Milestone at least, I just assumed others did as well.

I don't think the HDD is really the bottleneck in this case.

Yes, I would agree that many streams can be written to a single disk concurrently.

As for what the bottleneck is, it can and often is the disk, depending what you are doing.

For instance synchronized playback performance is, depending on the stream count, more dependent on the disks than the CPU capacity.

A simple, if extreme, proof of that would be to sync-play a 16-camera matrix and compare it to the same 16-camera live view for smoothness/dropped frames/latency. If the disk is not the bottleneck, then it should look as good and smooth as the live view, right?

Another example is during exporting of video, where higher disk performance can shorten export times.

Yes, drives can read and write at 100 MB/s, but that's assuming the best-case scenario of large-block sequential I/O. Every time the disk has to reseek, those speeds are drastically diminished, as it can take up to 10 ms (a computer eternity) for the right spot on the platter to move under the head.
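To show how badly seeks eat into that 100 MB/s, here's a rough model (the 64 KB block size is my own illustrative assumption; the 10 ms seek figure comes from the post above):

```python
# Effect of a seek before every block on effective HDD throughput.
seek_ms = 10.0        # seek + rotational latency per repositioning (from above)
seq_mb_per_s = 100.0  # sequential transfer rate once the head is in place
block_kb = 64.0       # data moved per seek -- an assumed, illustrative size

block_mb = block_kb / 1024
transfer_ms = block_mb / seq_mb_per_s * 1000          # ~0.6 ms to move the block
effective = block_mb / ((seek_ms + transfer_ms) / 1000)  # MB/s including the seek
print(f"Effective throughput with a seek every {block_kb:.0f} KB: "
      f"{effective:.1f} MB/s")
```

With those assumptions the drive spends almost all its time seeking rather than transferring, so throughput collapses from ~100 MB/s to single digits, which is the degenerate random-read case described in the next post.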

Now, on the writes, on-drive caches and RAID striping do help keep the writes long and sequential, but read performance is not going to be as good (in the sync playback scenario), because data from multiple cameras will end up on the same drives, causing the reads to degenerate into random reads.

Overall, I can't say it's necessarily worth it (though I do like the idea of per-camera archiving); I mainly wanted to see what the best knock-down argument against it was.


It's an interesting thought exercise.

This seems like the kind of thing that would pay off more as you got to larger camera counts, 50+, give or take. At that scale I don't know that you'd want to be dealing with 50+ individual drives, especially if you were concerned about any kind of RAID for resiliency.

The systems where it's manageable, 8-12 cameras, would not likely see a very noticeable improvement. If you are highly concerned about scrubbing video from 16 cameras at once, you might be better off finding the VMS platform that is most optimized for that task vs. trying to improve performance this way.

I LIKE your points on this, and agree. And I think people here don't understand just how crappy spinning drives are, no matter WHICH one it is!

When you're talking 1,000 IOPS at the very most...?? You're in trouble when a drive has to CONSTANTLY write multiple streams, handle occasional playback, and skim around looking for a specific section.

Switching 6 4K cameras from a single SSD to 4 spinning 7200 RPM drives was PAINFUL.

As far as Milestone, my next curiosity is the difference between "recording vs archiving" when you're setting up the server's storage.

What do these words truly mean in Milestone's nomenclature?

Personally, I was thinking about putting a bunch of 1-2 TB SSDs in my Milestone server, and then using iSCSI as the archive, if it means what I think it means. Do you have any opinions for me? I LOVE how easy it is to map cameras to drives. That there is a spectacular feature.


Truman, you should try a demo of Avigilon or NX Witness. You will quickly see how much faster reviewing/scrubbing video is on those VMS platforms, without having to spend a fortune on a super-fast storage system.

FYI, Video Insight has always supported mapping custom paths to any storage device for any camera.

Video Insight and Dahua are the ones that come to mind, though Dahua isn't exactly a VMS.

I wouldn't think losing a single camera's entire history would be acceptable. We use DW Spectrum (NX Witness), and the way it writes to drives is a round-robin schedule per hour. Say you have four drives: it writes to drive one for an hour, then switches to drive two, then an hour later to drive three, etc. This allows each drive to be used for one hour at a time, then rested (unless playback occurs). This is similar to Coldstore, but not exactly the same. I don't know how much energy savings, if any, is achieved by the Spectrum method, whereas Coldstore's is known and proven.
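The hourly rotation described above can be sketched in a few lines (my own simplification of the idea; this is not DW Spectrum's actual algorithm):

```python
# Hourly round-robin drive selection: hour N records to drive N mod D,
# so each drive works one hour out of every D and rests the remainder.
def drive_for_hour(hour: int, num_drives: int) -> int:
    """Return the index of the drive that records during the given hour."""
    return hour % num_drives

# With four drives, hours 0..7 cycle through drives 0, 1, 2, 3, 0, 1, 2, 3.
schedule = [drive_for_hour(h, 4) for h in range(8)]
print(schedule)  # [0, 1, 2, 3, 0, 1, 2, 3]
```

The trade-off the later posts argue about falls straight out of this: all cameras share the active drive each hour, so one failed drive removes those hours for every camera at once.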

If there was no backup I would much rather lose all of one camera than the same amount of video across multiple cameras, but that's me.

You could also mirror the drives if you needed to.

I hadn't heard of the "give it a rest" round-robin drive allocation; is that still a RAID config? Is it more for power savings or for extending the life of the drives?

Not for us it isn't. JBOD

Not for us it isn't.

I respect your opinion, but I still don't see the logic.

Let's say it's an 8-camera setup with the Spectrum round-robin algorithm. Since the system is only recording to one drive at a time, changing every hour, if a single drive fails you end up with multiple 1-hour gaps where you have NO cameras recorded at all. Probably the last hour is gone too, since the drives that are resting are less likely to fail (I'm guessing).
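To illustrate that gap pattern, here's a sketch (assuming a strict hour-modulo rotation, which is my simplification of the scheme described above):

```python
# Which recording hours disappear if one drive in an hourly round-robin fails?
def lost_hours(failed_drive: int, num_drives: int, total_hours: int) -> list:
    """Hours recorded on the failed drive -- gaps that affect ALL cameras."""
    return [h for h in range(total_hours) if h % num_drives == failed_drive]

# Four drives over a 24-hour day: losing drive 1 wipes out six separate hours,
# and every camera is dark during each of those hours.
print(lost_hours(1, 4, 24))  # [1, 5, 9, 13, 17, 21]
```

That recurring every-Nth-hour blackout across all cameras is the risk being weighed here against losing one camera's complete history in the drive-per-camera layout.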

Cameras normally provide some redundant information, even if their views do not overlap, so even without a critical view you may be able to glean information from other channels. In short, I think it's a greater risk to have periods of total blackout than a complete loss of one view.

Just my opinion. :)

Wait, what? What happens to the data each of the cameras would have for the hours they aren't prioritized? You just get one hour of video at random? O_O

We have done this for certain marijuana operations. Due to the varying bitrate of each camera despite identical settings, it's much nicer in theory than in reality. I prefer the ability to adjust which drives go where, with multiple drives and multiple cameras per drive (if using JBOD). That way you can even out the recording time to more closely match across the board.

Interesting idea. I think the drawback would be the extreme cost and space requirements. The cost of the drives themselves is not necessarily a concern, but rather the cost of SATA controllers for all of those drives. I think the drive quantity within a chassis will be a limiting factor as well. I am assuming we are talking a sizable site here, not a site with 8-16 cameras.

I'm about to install some HIK 9664s with 32 TB each (8 × 4 TB drives). From the HIK stock config menu, I can: 1. leave them raw, 2. create a "quick install" RAID, or 3. assign camera groups to specific drives.

The latter particularly intrigues me, as I've read HIK's RAID reduces bandwidth, and with separate drives per camera group (as was mentioned above) we might only lose 2-3 cameras vs. all of them if the RAID hiccuped.

From my Windows Server experience, rebuilding RAIDs is not fun.

Anyone have any war stories good or bad regarding these options?