Subscriber Discussion

Save Bandwidth/Storage By Running Gain On Client / Server Side?

We all have had it beaten into us by IPVM that higher gain = higher noise = poorer codec efficiency = higher bit rate.

[IPVM note: Testing: Gain / AGC Impact on Surveillance Video]
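
As a quick stand-in for the video-codec case, you can watch encoded size grow as noise is added to a frame. A rough sketch (this uses Pillow and JPEG as a proxy for a real video encoder; the quality setting and noise levels are arbitrary choices of mine):

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

# A smooth synthetic "scene" (a horizontal gradient) that compresses well.
scene = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (480, 1))

def jpeg_bytes(img: np.ndarray) -> int:
    """Encode an 8-bit grayscale frame as JPEG and return its size in bytes."""
    buf = io.BytesIO()
    Image.fromarray(img, mode="L").save(buf, format="JPEG", quality=75)
    return buf.getbuffer().nbytes

# Gain amplifies sensor noise; model that as Gaussian noise of growing sigma.
for sigma in (0, 4, 8, 16):
    noise = rng.normal(0.0, sigma, scene.shape)
    noisy = np.clip(scene.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    print(f"noise sigma {sigma:2d}: {jpeg_bytes(noisy):7,d} bytes")
```

The encoded size climbs steeply with sigma because noise looks like unpredictable high-frequency detail; an H.264/H.265 encoder pays the same kind of penalty on every frame.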

It all makes sense, except one thing nags at me. Unlike other exposure-related controls, like shutter speed and iris aperture, gain is a DSP algorithm run after A/D conversion.

Wouldn't it therefore be prudent to delay applying gain until final rendering (instead of applying it on the camera), thereby saving network and storage capacity?
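
To make the idea concrete, "gain at render time" would just be a per-pixel multiply on the decoded frames at the client. A minimal sketch (the function name and the dB convention are mine, assuming 8-bit decoded frames):

```python
import numpy as np

def apply_client_gain(frame: np.ndarray, gain_db: float) -> np.ndarray:
    """Apply digital gain to a decoded 8-bit frame at render time.

    frame:   HxW (mono) or HxWx3 (color) uint8 array, as decoded
    gain_db: gain in decibels, using the voltage convention (20 * log10)
    """
    factor = 10.0 ** (gain_db / 20.0)
    # Multiply in float to avoid uint8 wraparound, then clip back to 8 bits.
    boosted = frame.astype(np.float32) * factor
    return np.clip(boosted, 0, 255).astype(np.uint8)

# Example: brighten a stand-in for a dark decoded frame by 12 dB (~4x).
frame = np.full((480, 640), 20, dtype=np.uint8)
print(apply_client_gain(frame, 12.0).max())  # -> 79, about 4x brighter
```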

It's so obvious that I'm sure that either a) someone is doing it already, b) the tradeoffs in doing it are too great, or c) I'm overlooking something basic.

The downsides that I see are:

1) increased processor load on client.

2) the camera has a higher bit depth available to work with before transmission, so applying gain there could give better quality than gaining an 8-bit stream at the client.

So which is it: a, b, or c?


Answer is c

Although digital gain exists in many forms (brightness adjustment, for example), sensor gain is actually applied before A/D conversion: it is done electronically, in the analog domain, not digitally.
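
A quick way to see why the order matters: with analog gain the signal is amplified before the A/D converter quantizes it, so a dark scene still lands on many distinct codes; apply the same gain digitally after quantization and you just spread a handful of coarse codes further apart. A toy simulation (idealized 8-bit ADC, read noise and clipping subtleties ignored):

```python
import numpy as np

rng = np.random.default_rng(0)

# A dark scene: the true signal only uses the bottom ~5% of sensor range.
signal = rng.uniform(0.0, 0.05, size=100_000)

def adc_8bit(x):
    """Idealized A/D converter: clip to [0, 1], quantize to 8-bit codes."""
    return np.round(np.clip(x, 0.0, 1.0) * 255.0)

gain = 16.0  # roughly 24 dB

# Analog gain: amplify BEFORE quantization, so the converter's full
# resolution is spent on the (boosted) dark signal.
analog_first = adc_8bit(signal * gain)

# Digital gain: quantize the dim signal first, then multiply the codes,
# which is all a client or VMS could ever do with the transmitted stream.
digital_after = np.clip(adc_8bit(signal) * gain, 0, 255)

print("distinct levels, gain before ADC:", len(np.unique(analog_first)))   # ~205
print("distinct levels, gain after ADC: ", len(np.unique(digital_after)))  # ~14
```

The detail the analog gain recovers is gone by the time the stream leaves the camera, so no client-side processing can get it back.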

Here is a superb article on sensors that I found.

Btw, VMSes that do processing / enhancement / display optimization are quite rare. One that does is Avigilon, and that capability has value. See: Testing Avigilon's Image Enhancement