While codecs like H.264, H.265, and MJPEG get a lot of attention, a camera's 'quality' or compression setting has a major impact on overall image quality. In this training, we explain what this setting is, what options you have, and how to optimize it.
To start, review these two images, (A) and (B):
And answer this question before continuing:
With the information presented, the best answer is that it cannot be determined. We used the same camera for each image and simply increased compression for the 'B' image (while keeping everything else the same, including resolution and codec).
The fact that two identical shots at the same resolution can look significantly different has a number of important implications. Inside, we explain why, covering:
- Quantization levels
- Bandwidth vs. quality loss
- Image quality examples
- Manufacturer differences
- MBR/VBR/CBR impact
- Smart codec impact
Regardless of the codec used (H.264, H.265, MJPEG, etc.), all IP cameras offer quality levels, often called 'compression' or 'quantization'.
H.264 and H.265 quantization is measured on a standard scale ranging from 0 to 51, with lower numbers meaning less compression, and thus higher quality. If this seems counterintuitive, that is understandable, but these are simply the measurements defined in the H.264 and H.265 standards.
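To see why a lower QP means higher quality: in H.264/H.265, the quantizer step size roughly doubles for every increase of 6 in QP, so small QP values discard very little detail while large ones discard a lot. The sketch below is illustrative (the function name is our own, not from any camera API):

```python
def relative_step_size(qp: int) -> float:
    """Approximate H.264/H.265 quantizer step size relative to QP 0.

    Step size roughly doubles for every +6 change in QP, i.e. 2^(QP/6).
    """
    if not 0 <= qp <= 51:
        raise ValueError("QP must be between 0 and 51")
    return 2 ** (qp / 6)

# Lower QP -> smaller step -> finer detail retained -> higher quality
for qp in (0, 6, 12, 28, 51):
    print(f"QP {qp:2d} -> ~{relative_step_size(qp):.1f}x coarser than QP 0")
```

This is why moving a camera from, say, QP 28 to QP 34 cuts bandwidth substantially but visibly softens the image.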
Key Tradeoff: Bandwidth Vs. Quality Loss
The key tradeoff in setting quantization is determining how much 'loss' you are willing to accept for a particular decrease in bandwidth. All production surveillance video compression is 'lossy', meaning that some information is discarded when video is compressed. How much loss to accept is a crucial configuration decision:
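The tradeoff can be sketched numerically. In this toy example (our own illustration, not a real encoder), quantizing values with a coarser step leaves fewer distinct values to transmit, a stand-in for lower bandwidth, but introduces more error, a stand-in for quality loss:

```python
def quantize(values, step):
    """Round each value to the nearest multiple of `step` (lossy)."""
    return [round(v / step) * step for v in values]

samples = [12.7, 13.1, 13.4, 14.9, 15.2, 40.3]

for step in (1, 4, 16):
    q = quantize(samples, step)
    distinct = len(set(q))  # fewer distinct values -> fewer bits needed
    error = sum(abs(a - b) for a, b in zip(samples, q)) / len(samples)
    print(f"step {step:2d}: {distinct} distinct values, mean error {error:.2f}")
```

As the step grows, the output becomes cheaper to represent but drifts further from the original, which is exactly the bandwidth-versus-quality-loss decision a camera's quantization setting controls.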