This explanation is accurate when considering a static scene. When objects are moving, motion artifacts will often be observed. These are often exacerbated, even by "smart" interframe compression and must be carefully evaluated in any particular situation.
John, I agree that the CODEC can increase bandwidth on motion if set for VBR (variable bit rate). If set for CBR (constant bit rate), motion forces increased compression, and can result in extra motion artifacts.
Additionally, I think motion artifacts also have other sources such as fringing on hard edges (also seen in JPGs) and things that happen when the interframe data is blended into the keyframe stream, particularly after a stack of interframes just before the next key frame.
Another source of artifacts is error correction, which is often running in the background in the playback DECODER.
My point is there is a lot of parameter tuning possible in an H.264 profile, and the playback DECODER also can introduce artifacts. Some expertise can eke out better results.
There's no doubt that any CODEC risks artifacts by the very nature of compression. My main point is that H.264 is not simply compressing video more than MJPEG.
I was stunned, and think you might be as well, to see how many surveillance professionals have that conception about the 2 CODECs - i.e., that H.264 only saves bandwidth by reducing quality. That is the motivation for this post.
I really really doubt many (or any) are using that in surveillance (including our Vancouver friends) given the massive bandwidth consumption of that variant, but it does exist.
I am not sure how to frame the question, but the challenge with many people is that they think H.264 is inherently lower quality (regardless of what you do) and that's what delivers its reduced bandwidth - not that there is actually something more sophisticated / smarter / intelligent about the CODEC that lets it reduce bandwidth while maintaining quality.
MJPEG - Motion JPEG is JPEG compression applied to each image in a sequence of images.
H.264 - Intra frames - spatial correlation is used to compress.
Inter frames - also called P and B frames, where previous frames (P) or both previous and future frames (B) are used to predict the current frame. Here temporal correlation is utilised to remove redundant data.
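To make the temporal-correlation point concrete, here is a toy Python sketch (all numbers made up, 1-D "frames" instead of real images) showing why a P frame in a near-static scene has almost nothing left to code once the previous frame is subtracted:

```python
# Toy illustration of temporal redundancy with made-up 1-D "frames".
# A P frame only has to code the difference from the previous frame,
# which is mostly zeros when little is moving.

prev_frame = [50, 50, 52, 80, 80, 51, 50, 50]
curr_frame = [50, 50, 52, 80, 81, 51, 50, 50]   # one pixel changed

residual = [c - p for c, p in zip(curr_frame, prev_frame)]
print(residual)  # [0, 0, 0, 0, 1, 0, 0, 0] - almost nothing left to code
```

Real encoders additionally search for motion vectors, so even moving objects leave a near-zero residual, not just static backgrounds.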
Both codecs do lossy compression by removing high-frequency components of each image, which are difficult for the human eye to see. Both use block transforms.
H.264 provides more flexibility in block size - 4x4, 4x8, etc. - whereas MJPEG uses fixed 8x8 blocks. This enables H.264 to compress with fewer blocking artifacts. Also, the frequency transform in H.264 is exact, meaning its inverse transform gives an exact match of the input to the forward transform, whereas the frequency transform used in JPEG is lossy in its nature - its inverse transform does not give back the exact original input data.
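To illustrate the "exact transform" claim, here is a small Python sketch. The matrix is the standard H.264 4x4 core transform; the input block is made up. Using exact rational arithmetic, the round trip through forward and inverse transform is bit-exact:

```python
from fractions import Fraction

# H.264 4x4 integer core transform (its scaling is folded into quantization)
Cf = [[1, 1, 1, 1],
      [2, 1, -1, -2],
      [1, -1, -1, 1],
      [1, -2, 2, -1]]

def mm(A, B):  # 4x4 matrix multiply in exact rational arithmetic
    return [[sum(Fraction(A[i][k]) * Fraction(B[k][j]) for k in range(4))
             for j in range(4)] for i in range(4)]

def t(A):      # transpose
    return [list(row) for row in zip(*A)]

# Cf @ Cf^T = diag(4, 10, 4, 10), so the exact inverse of Cf is
# Cf^T with its columns divided by (4, 10, 4, 10):
Ci = [[Fraction(Cf[j][i], (4, 10, 4, 10)[j]) for j in range(4)]
      for i in range(4)]

X = [[52, 55, 61, 66],   # arbitrary 4x4 residual block
     [70, 61, 64, 73],
     [63, 59, 55, 90],
     [67, 61, 68, 104]]

Y = mm(mm(Cf, X), t(Cf))        # forward transform
X_back = mm(mm(Ci, Y), t(Ci))   # inverse transform

print(X_back == X)  # True - bit-exact, unlike a floating-point DCT
```

The integer matrix is why encoder and decoder never drift apart: as long as quantization is skipped, the inverse reproduces the input exactly, with no rounding error to accumulate.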
With the same bandwidth, H.264 will always give better quality over a sequence of images, as it uses many more advanced algorithms than the JPEG standard - CAVLC, CABAC, the deblocking filter, half-pel motion estimation, to mention a few. That is why H.264 is so widely accepted and used.
Obviously, for any compression standard, a lower bit rate means a loss in quality. But among them all, H.264 will still do better than the others over a sequence of images.
H.264 also provides various profiles, where we can choose a set of tools to compress video for different use cases. These profiles are called baseline, main, extended...
These are a few things about H.264 which make it better than most.
How is MJPEG better?
It's better in the sense that each image can be independently decoded. Decoding the next frame does not depend on previous frames, so an error in one frame does not affect other frames.
Whereas in H.264, a noisy channel can lead to the loss of a whole group of pictures. H.264 takes care of this by using IDR frames.
But still, a sequence will be lost or incorrectly decoded.
It is not the same as JPEG - it uses more advanced compression tools, like the ones I explained in my earlier post.
Steps done in intra H.264 encoding:
Step 1. Predict each pixel of the 16x16 macroblock from neighbouring pixels, using various modes and various sub-block sizes - from 16x16 down to 4x4, with 4x8 and further combinations possible; the smallest sub-block is 4x4. (only in H.264)
Step 2. Subtract the predicted macroblock from the original to get the residual data (error).
Step 3. Transform the error with the exactly invertible 4x4 transform. (only in H.264)
Step 4. Quantize the high-frequency data (this is where the compression happens). The quantization value is fed back from the CBR or VBR algorithm, depending on the bandwidth.
Step 5. Use CAVLC or CABAC to make the bitstream. The bit rate of this stream can in turn affect the quantization parameters if CBR is being used.
Step 6. Reconstruct this I frame in the encoder.
Step 7. Apply the de-blocking filter.
Step 8. Use this reconstructed frame to predict the next P frame.
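Steps 1-4 and 6 above can be sketched in a few lines of Python for a single 4x4 sub-block (all numbers are made up; real H.264 folds the transform scaling into its quantizer tables and has many prediction modes, so this is only a toy). The point it demonstrates: the only lossy step is the quantization in step 4.

```python
Cf = [[1, 1, 1, 1], [2, 1, -1, -2], [1, -1, -1, 1], [1, -2, 2, -1]]
# Cf @ Cf^T = diag(4, 10, 4, 10), so Cf's exact inverse is Cf^T / diag:
Ci = [[Cf[j][i] / (4, 10, 4, 10)[j] for j in range(4)] for i in range(4)]

def mm(A, B):  # 4x4 matrix multiply
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def t(A):      # transpose
    return [list(r) for r in zip(*A)]

orig = [[58, 64, 51, 58], [52, 64, 56, 66], [62, 63, 61, 64], [59, 51, 63, 69]]
pred = [[60.0] * 4 for _ in range(4)]                           # step 1: toy flat prediction
resid = [[orig[i][j] - pred[i][j] for j in range(4)] for i in range(4)]  # step 2: residual
coeff = mm(mm(Cf, resid), t(Cf))                                # step 3: forward transform
qstep = 16.0                                                    # step 4: toy quantizer step
deq = [[round(c / qstep) * qstep for c in row] for row in coeff]  # quantize + dequantize
back = mm(mm(Ci, deq), t(Ci))                                   # inverse transform
recon = [[pred[i][j] + back[i][j] for j in range(4)] for i in range(4)]  # step 6: reconstruct

err = max(abs(orig[i][j] - recon[i][j]) for i in range(4) for j in range(4))
print(err)  # small but nonzero: all the loss comes from step 4
```

A larger qstep (what a starved CBR feed forces during motion) zeroes out more coefficients and makes the reconstruction error bigger - exactly the quality/bandwidth trade-off being discussed in this thread.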
Horace, I think you meant H.264 encoder in the first line, correct?
We have tested this and found that H.264 with only I frames still consumed 10-15% less bandwidth than MJPEG. Doing that would be, well, silly, but it would still have some bandwidth savings. Obviously, even doing an I frame every other frame would increase savings significantly over only H.264 I frames.
Maybe even bigger savings, from the article you linked:
30fps, 30 I frames per second, indoor daytime - In this scenario, H.264 bandwidth was 3.48 Mbps. By contrast, MJPEG bandwidth was 11.8 Mbps, roughly a 3x difference. Despite the maximal I frame ratio, this scenario did not show any visible quality gain, but still had a bandwidth savings of 71% over the MJPEG scenario.
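For what it's worth, the quoted numbers check out - a quick sanity calculation:

```python
mjpeg, h264 = 11.8, 3.48                 # Mbps, from the quoted test
print(round(mjpeg / h264, 1))            # 3.4 - roughly the "3x difference"
print(round((1 - h264 / mjpeg) * 100))   # 71 - the claimed % savings
```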
Isn't this the same scenario you are describing?
Also, John or Harpreet, is there a name for H.264 encoding of still frames only, without the inter-frame stuff, i.e. with all the enhancements that Harpreet mentions? How does it compare to JPEG 2000 for images?
Ok, after re-reading the original article and then this article which basically expands upon Harpreet's enumeration of the intra-frame compression advantages in H.264, all I can say is: Pass the Kool-aid!
It seems that aside from legacy compatibility constraints, there may be no good reason for using MJPEG. In a worst case scenario, you would simply replace MJPEG frames with I-frames 1:1. Then you could either pocket the bandwidth savings or splurge on some P-frames for smoother rendering.
In fact, the author of the second article is pushing for adoption of intra-frame H.264 as a replacement for JPEG for still images.
Have you done a 1:1 I frame shootout between JPEG 2000 and H.264? Of course the bandwidth savings would be obscene, but neglecting that, as well as the benefits of JPEG 2000's progressive compression, it would be interesting to see whether H.264 would be lower quality...
How do you measure bandwidth between two points in a parallel network video environment? For example, trying to send a batch of cameras over a Cat6 run from a mini rack over to the MDF room. Is there a tool/test you can use to see how much bandwidth you are consuming over that line, to determine whether fiber is needed or not?
Typically, a managed switch would show statistics about how much bandwidth is being used going in and out of each port. This way you could determine if a batch of cameras was overloading a link. Some examples include this test and this training report.
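If you're polling those switch counters yourself (e.g., via SNMP's ifInOctets), the math is simple. Here's a hedged Python sketch - the counter values and poll interval are made-up illustrative numbers, and note the standard ifInOctets is a 32-bit counter that wraps (use ifHCInOctets for 64-bit where available):

```python
def link_mbps(octets_t0, octets_t1, seconds, counter_bits=32):
    """Average Mbps between two readings of an interface octet counter,
    tolerating one counter wrap between polls."""
    wrap = 2 ** counter_bits
    delta = (octets_t1 - octets_t0) % wrap   # modulo handles one wrap-around
    return delta * 8 / seconds / 1e6         # octets -> bits -> megabits/second

# e.g., ten cameras at ~4 Mbps each, counters polled 60 seconds apart:
t0 = 123_456_789
t1 = t0 + 300_000_000                   # 300 MB transferred in the interval
print(round(link_mbps(t0, t1, 60), 1))  # 40.0 - well within gigabit over Cat6
```

Poll over a window that includes a high-motion period, since that's when VBR streams peak and the link is actually stressed.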
Try VLC using RTSP strings adapted to your favorite manufacturer, then open a network stream (stream1 or stream2...) and take a look at the Tools > Codec Information menu - you will get an idea of what you are receiving. (Bandwidth analyzers like Wireshark work too, but they are less visual and more technical.)
Some bandwidth sniffers also show you what your Ethernet card received when connected to a stream, but you can't isolate sources like VLC does.
Below is an example with a 180° camera at home tonight. (Especially useful when you want to measure a VBR stream's "real" bit rate before putting a "real life" CBR limit in place to save your hard disk from death.)
It's meant to see if people have strong preconceptions that MJPEG is better than H.264. That 38% of voters said the worse quality image was H.264, even though in the next paragraph we revealed it was the opposite, proves my point :) A lot of people immediately think "worse quality -> H.264" without looking into the details!
CBR is definitely a problem, because if you pick a bit rate too low, then quality will suffer during the motion period after the bell rings. If you pick a bit rate too high, quality will be OK during the motion period, but a huge amount of bandwidth will be wasted during the rest of the time.
H.264, using VBR, will automatically adjust its streaming / bit rate. When class is in session and the hallway is empty, the P frames will become very small, as there is not much motion / change. Then when students swarm the hallway, P frames will become much larger as they capture / reflect the continuous movement for that period. The only thing to make sure of with H.264 here is that the I frame interval is not too long (1 I frame per second should be sufficient, and that is the most common default setting for VMSes). See: Test: H.264 I vs P Frame Impact
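To put toy numbers on that (all frame sizes below are made up for illustration), the average bit rate with one I frame per second falls out of simple GOP arithmetic, and you can see how the P frame size drives the total:

```python
def avg_mbps(i_kbits, p_kbits, fps=30, gop=30):
    """Average bit rate for a stream with one I frame per GOP and
    P frames in between (frame sizes in kilobits -> result in Mbps)."""
    i_per_sec = fps / gop                  # 1 I frame per second here
    p_per_sec = fps - i_per_sec            # 29 P frames per second
    return (i_per_sec * i_kbits + p_per_sec * p_kbits) / 1000

empty_hall = avg_mbps(i_kbits=150, p_kbits=2)    # tiny P frames, no motion
rush_hour  = avg_mbps(i_kbits=150, p_kbits=60)   # big P frames, crowd moving
print(round(empty_hall, 2), round(rush_hour, 2)) # 0.21 1.89
```

With VBR the stream simply rides between those two levels; with CBR you would have to commit to roughly the rush-hour figure all day, or accept crushed quality when the hallway fills up.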