Same (Constant) Bit Rate, Different Scene Complexity

I've been asked about this, but I couldn't figure out the answer. Given two H.264 video streams with roughly the same (constant) bit rate but different scene complexity: one video stream of a white wall, and the other with a lot of motion. Which one (if any) is going to consume more computing resources to decode?


The big difference is that the scene with a lot of motion will be encoded at a far higher quantization level to compensate for the fixed amount of bandwidth. Unless the bit rate is set very high, you are likely to see artifacts / quality problems.
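The trade-off can be sketched with a toy rate-control step: under a fixed per-frame bit budget, a frame that would cost more bits at a given quantizer forces the encoder to raise QP. This is a simplified model for illustration, not the actual H.264 rate-control algorithm; the bit costs and the "+6 QP roughly halves the size" rule of thumb are assumptions, not measured values.

```python
def choose_qp(estimated_bits_at_base_qp, target_bits, base_qp=26):
    """Toy rate control: raise QP until the modeled frame size fits the
    budget. Model assumption: each +6 QP roughly halves the bit cost
    (a common rule of thumb, not real encoder behavior). QP caps at 51,
    the maximum allowed by H.264."""
    qp, bits = base_qp, estimated_bits_at_base_qp
    while bits > target_bits and qp < 51:
        qp += 6
        bits /= 2
    return qp

# Same 50 kbit budget per frame (hypothetical numbers):
print(choose_qp(40_000, 50_000))   # simple scene fits at the base QP -> 26
print(choose_qp(400_000, 50_000))  # complex scene forced to a high QP -> 44
```

The point is only that the same bit budget buys a much coarser quantizer for the busy scene, which is where the artifacts come from.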

As for computing resources, I'd bet they'd be similar.

p.s. - please try to convince people not to use CBR. VBR with a cap is better.

Background - CBR vs VBR: Surveillance Streaming, Video Quality / Compression Tutorial

"As for computing resources, I'd bet they'd be similar."

That would be my bet as well, but since I had never come across such a question, I didn't want to bet all my chips on this one :)

"p.s. - please try to convince people not to use CBR. VBR with a cap is better."

No worries here. The use of CBR is for internal testing purposes only. We are currently performing stress tests on our video viewer. We are using CBR instead of VBR because we want to find the average bit rate to be used as the cap. In other words, we want to find out how well our video viewer can handle H.264 video streams.
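For the cap, the average bit rate over a test run can be computed from per-frame sizes. A minimal sketch; the frame sizes below are hypothetical sample data, and in practice you would extract the real sizes with a tool such as ffprobe:

```python
def average_bit_rate(frame_sizes_bytes, fps):
    """Average bit rate in bits per second over a run of frames.

    frame_sizes_bytes: encoded frame sizes in bytes.
    fps: frame rate of the stream.
    """
    total_bits = 8 * sum(frame_sizes_bytes)
    duration_s = len(frame_sizes_bytes) / fps
    return total_bits / duration_s

# Hypothetical 30 fps second of video: one large I-frame, 29 smaller P-frames.
sizes = [40_000] + [5_000] * 29
print(average_bit_rate(sizes, 30))  # -> 1480000.0 (about 1.5 Mbit/s)
```

Averaging over whole GOPs like this smooths out the I-frame peaks, which is what you want when picking a VBR cap.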

I noticed, though, that even when CBR is used with a complex scene (e.g., a considerable amount of motion), the variance in the bit rate can be significant (many peaks, at the I-frames, by the way). On the other hand, when using CBR with a simple scene (e.g., a white wall), the bit rate is much smoother and nearly constant (i.e., there are still peaks at the I-frames, but their difference from the P-frames is negligible). And that is how the original question of this thread arose.
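That smooth-vs-peaky behaviour can be quantified from per-frame sizes with the coefficient of variation. A rough sketch; the frame sizes are made-up illustrative numbers, not measured data:

```python
from statistics import mean, pstdev

def smoothness(frames):
    """Coefficient of variation of frame sizes: higher means a peakier
    instantaneous bit rate. frames is a list of (frame_type, bytes) pairs."""
    sizes = [size for _type, size in frames]
    return pstdev(sizes) / mean(sizes)

# Hypothetical CBR capture of a complex scene: the I-frame dwarfs the
# P-frames, so the short-term bit rate spikes at every GOP start.
complex_scene = [("I", 60_000)] + [("P", 4_000)] * 29

# Hypothetical static scene (white wall): the I-frame is only slightly
# larger than the P-frames, so the bit rate stays nearly flat.
simple_scene = [("I", 9_000)] + [("P", 7_000)] * 29

print(smoothness(complex_scene))  # large spread: peaky bit rate
print(smoothness(simple_scene))   # small spread: nearly constant bit rate
```

With real streams you would feed this the actual per-frame packet sizes (e.g., from ffprobe) rather than these toy numbers.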