Image Signal Processor On NVR Side Instead Of Camera?

I've been looking at ISP (Image Signal Processor) blocks and noticed that sometimes the ISP resides in the sensor itself, sometimes in the camera processor, and sometimes parts of it reside in both. To simplify the camera, does anybody just encode the image, ship it over to a Network Video Recorder (NVR), decode it there, and conduct the ISP functions at the NVR? Would/could that help make less expensive IP cameras? If I encode, ship, then decode a raw image (warts and all) and then try to conduct ISP functions on that decoded image, did I lose too much detail in the encode/decode process to do things like CFA crosstalk noise filtering, AE/AWB/AF, fish-eye dewarping, etc.? Also, would there be concern about the integrity of the resultant data after "post-processing" at a remote NVR location? Just curious if anybody is doing this and what the issues surrounding implementation might be.


"Does anybody just encode the image, ship it over to a Network Video Recorder (NVR), decode there and conduct the ISP functions at the NVR?"

I do not know of any IP camera manufacturer that conducts the ISP functions on the NVR side.

One major issue is that it becomes essentially a closed system: third-party recorders would be unlikely to support this, which blocks you from selling into those systems. I also suspect that if you encode first and then decode, you suffer significant quality/detail loss that would impact the 'far side' ISP processing. The latter I am guessing at.

The ISP is usually very tuned to a specific image sensor.
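To illustrate what "tuned to a specific image sensor" means in practice, here is a minimal sketch of per-sensor tuning data and one correction step. All numbers (black level, color correction matrix) are hypothetical, made up for illustration; a real ISP carries many such calibrated tables per sensor model, which is why there is no generic drop-in ISP.

```python
# Illustrative per-sensor ISP tuning data. Every value here is hypothetical;
# real tuning is calibrated per sensor model (and sometimes per lot).
SENSOR_TUNING = {
    "black_level": 64,     # raw code offset to subtract (12-bit sensor)
    "white_level": 4095,   # saturation code for 12-bit raw
    # 3x3 color correction matrix mapping this sensor's RGB response
    # toward display primaries; rows sum to 1 so gray stays neutral.
    "ccm": [[ 1.80, -0.60, -0.20],
            [-0.35,  1.55, -0.20],
            [ 0.05, -0.45,  1.40]],
}

def correct_pixel(r, g, b, tuning):
    """Apply black-level subtraction, normalization, and the CCM."""
    bl, wl = tuning["black_level"], tuning["white_level"]
    rgb = [max(c - bl, 0) / (wl - bl) for c in (r, g, b)]  # 0..1 range
    return tuple(sum(m * c for m, c in zip(row, rgb))
                 for row in tuning["ccm"])

# A mid-gray raw triplet comes out neutral only because the matrix
# matches this (hypothetical) sensor's response.
print(correct_pixel(2048, 2048, 2048, SENSOR_TUNING))
```

Swap in a different sensor and the black level and matrix are wrong, so the same code produces tinted, offset output. An NVR-side ISP would need this tuning data for every camera model it supports.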

Moving that function to an NVR would create a whole new compatibility layer, since there is really no such thing as a 'generic' ISP. I think you might also have a bandwidth problem, since the ISP can receive more data from the sensor than what the compression chip (h.264, etc.) ultimately spits out as a finished frame. ISP functions are usually something you'd want to do on as raw of an image as possible. Once you've encoded the image you've presumably lost and changed some data. Also, normalizing contrast and exposure first should tend to lead to lower overall bandwidth coming out of the encoder.
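To put rough numbers on that bandwidth gap, here is a back-of-envelope calculation assuming a 1080p/30 sensor with 12-bit raw Bayer output and a ~4 Mb/s H.264 stream; both figures are just typical illustrative values, not from the thread.

```python
# Back-of-envelope: raw Bayer data rate off the sensor vs. a typical
# encoded H.264 stream. All figures are illustrative assumptions.
WIDTH, HEIGHT = 1920, 1080   # assumed 1080p sensor
BIT_DEPTH = 12               # common raw Bayer bit depth
FPS = 30
ENCODED_MBPS = 4.0           # plausible H.264 bitrate for 1080p30

# One value per photosite (CFA mosaic, not yet demosaicked to RGB).
raw_bps = WIDTH * HEIGHT * BIT_DEPTH * FPS
raw_mbps = raw_bps / 1e6

print(f"raw off sensor: {raw_mbps:.0f} Mb/s")               # ~746 Mb/s
print(f"encoded stream: {ENCODED_MBPS:.0f} Mb/s")
print(f"ratio:          {raw_mbps / ENCODED_MBPS:.0f}x")    # ~187x
```

Under these assumptions the sensor emits on the order of 100x more data than the encoder ships, which is why pushing raw frames across the network to a remote ISP is impractical.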

So, we probably won't see remote ISP processing on NVRs, for reasons of cost and practicality.