How Many Camera Views Can Be Effectively Monitored?

Is anyone aware of studies that have been conducted to identify video operator effectiveness related to the number of views that can be effectively monitored in a command center setting?

It's a good question. The consensus answer seems to be 9 or 16 video feeds for no more than 20 minutes, but I am not sure what the original study is or when it's from. People just repeat it as if it's a 'fact'. Indeed, those numbers have been thrown about for years, which raises the question of whether technology changes (like higher quality images or better monitors) have improved or changed that.

The bigger focus today is on techniques to minimize how many cameras are viewed at once - things like using event triggers, whether from access control, intrusion detection, or video analytics, to focus operators more efficiently on the cameras most likely to be capturing real events / risks.

Our Surveillance Monitoring Station Best Practices guide may be of interest to this discussion.

Effectiveness is how quickly you see and react. During the day, an operator's attention decreases by 50% every 20 minutes according to some studies here, and this job is particularly boring. Just try to monitor at night when "all cats are grey" - it's even harder, probably 10% effectiveness.
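Taken literally, "attention decreases 50% every 20 minutes" is just exponential decay with a 20-minute half-life. A minimal sketch of that arithmetic (the half-life figure is this poster's claim, not an established constant):

```python
def attention(minutes: float, half_life: float = 20.0) -> float:
    """Fraction of initial vigilance remaining after `minutes` on task,
    assuming attention halves every `half_life` minutes (poster's claim)."""
    return 0.5 ** (minutes / half_life)

print(round(attention(20), 3))  # 0.5   - half gone after 20 minutes
print(round(attention(60), 3))  # 0.125 - only ~12.5% left after an hour
```

Which, if anywhere near true, is a strong argument for the 20-minute rotation figure that gets quoted.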

At nighttime, detection is almost impossible, recognition (by shape) is difficult, and identification is very difficult (especially without IR adaptation). There is a place for analytics in the upcoming years to try to filter events and decrease false positives from motion detection.

But what study? Does anyone have the name/author/year of a real study on this topic?

Zero, apparently.

Those are some cool videos. Are there any specific studies / videos from Dan Simons covering surveillance or multi-camera video monitoring?

The study does exist in some public security centers here in France. I just have to investigate to give you the references and a link, but I use the figures during trainings to justify, among other things, analytics.

Thanks. Wherever it is, it would be great to see it, because the 20 minute number gets thrown about so frequently but never with any actual supporting details explaining how the test was done, who did it, etc.

Whatever the 'truth' is, I am sure it is more complex than X number of cameras for Y number of minutes.

John, I am not aware of any studies he has done on surveillance monitoring of multiple screens.

I'm mostly familiar with his work at the University of Illinois Visual Cognition Lab.

I did not see the gorilla.

Monitored how? For what?

We have several central stations where operators are "monitoring" 1000+ of our cameras at a time (a camera:operator ratio of 1000:1). However, they are using video analytics to call operator attention to particular cameras, and the cameras are (relatively speaking) concentrating on a fairly narrow task.

If you're talking about live views, the numbers I've always seen are in the 20-30 range, and the various SOCs and central stations I've visited seem to confirm that 20-30 is the "typical" number. In those cases, operators are usually not staring at the cameras continuously; they are doing other tasks. This, IMO, *increases* the total amount of time they can spend monitoring the cameras, but also *decreases* the probability that they provide near 100% "coverage".

To be fair, relying on analytics (or any other non-human method) can also add the risk of a missed event.

So, I would say, 1 operator could manage 24 cameras with near 100% effectiveness for about 30 minutes in a live view scenario. Or, they could monitor 24-48 cameras at ~75% effectiveness for 2 hours (basically by time-slicing among multiple tasks), or 48+ cameras at 50% effectiveness for 8 hours by time-slicing and camera-slicing (e.g., viewing different groups of cameras in between other tasks). This is assuming you're looking for macro-level things:

1) Obvious movement in the scene

2) Obvious scene failures (out of focus, poor lighting, etc.)

3) Presence/non-presence of key items (trucks, inventory, gate open/closed).

If you're looking for minor things (counting to ensure all 28 forklifts are parked properly, ensuring locks are in place, ensuring that a door is not slightly propped open), then it would take more resources to cover the same number of cameras, or fewer cameras per operator.
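The rule-of-thumb tiers above (24 cameras at ~100% for 30 minutes, 24-48 at ~75% for 2 hours, 48+ at ~50% for 8 hours) can be captured as a small lookup. To be clear, this is just a sketch of one practitioner's numbers, not a published standard:

```python
# (max_cameras, effectiveness, sustainable_minutes) - the poster's rules of
# thumb for macro-level monitoring tasks, not a published standard.
TIERS = [
    (24, 1.00, 30),             # dedicated live viewing
    (48, 0.75, 120),            # time-slicing among other tasks
    (float("inf"), 0.50, 480),  # time-slicing plus camera-slicing
]

def expected_coverage(cameras: int) -> tuple[float, int]:
    """Return (effectiveness, sustainable_minutes) for one operator."""
    for max_cams, effectiveness, minutes in TIERS:
        if cameras <= max_cams:
            return effectiveness, minutes
    raise ValueError("unreachable: last tier is unbounded")

print(expected_coverage(24))   # (1.0, 30)
print(expected_coverage(100))  # (0.5, 480)
```

As the following posts note, the tiers would shift downward for detail-level tasks and smaller monitors.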

One thing that likely impacts performance is monitor size. With 24 cameras on a 19" monitor, I think the miss rate would be very high even after just a few minutes - the video panes are simply too small. So if we are doing 24 cameras, how big a monitor?

Also, what type of motion / activity? Certainly performance will be impacted by the duration and relative size of an object in the scene - e.g., a truck parks in front of a camera: easy. A guy runs across that camera: hard.

I've seen places with ~30" monitors and 24 cameras on-screen. But again, much of this depends on *what* you're looking for. In many CCTV applications the operator is looking at what should be an inactive scene, so spotting intruders is fairly easy. You can use basic motion detection with a visual cue on the camera window to call operator attention to that particular window.

But as you point out, the task gets more complex as the things you are looking for become more nuanced. Given the availability of technology today from motion detection to bona-fide analytics, it seems impractical (to me) to expect operators to handle more than a couple of cameras unassisted by technology for even a task of medium criticality.

If I'm not mistaken, I saw a DoD or DHS study on this at some point. I'll try and find that.

And these aren't set standards, but:

Still waiting for my French refs...
Below is another one, from Paul Wilson & Helene Wells, "What do the watchers watch? An Australian case study of CCTV monitoring", Humanities and Social Sciences papers, Bond University, 2007.
The study is very critical about the REAL time spent on events and the REAL number of LIVE events detected by chance (the problem with PTZ cameras: when I'm watching here, I'm not watching another screen).

FYI - I looked at our stats for September, a one-month period. It involved 20 VideoIQ sites with 100 VideoIQ cameras. We processed about 3,000 events, taking about 48 hours of operator time. During that time the sites were each armed about 450 hours. No missed real events. 2 apprehensions.

Robert, thanks. To put that into perspective (for other members), what you are describing is using analytics to determine which cameras to review, whereas the OP (Adam) was most likely asking about non-analytics scenarios / just watching.

To the extent that you can get analytics to work and their output is useful for your situation (e.g., "I have an area where no one should be / cross during this time, send me an alert if someone is there"), then analytics are far, far better than even the best humans at watching monitors.