Is Avigilon Implementing H.264 SVC?

Since they implemented SVC in J2000, why does no one really think that the new HDSM is just their implementation of H.264 SVC? Have they, or their minions, denied this outright? Perhaps the only reason that AV has dragged its feet so long on deprecating J2000 is that they needed to have the H.264 SVC functionality in place first.

NOTICE: This comment was moved from an existing discussion: Milestone Vs Avigilon - Remote Viewing Bandwidth Usage


H.264 SVC has been available for a long time. The issue is implementing it inside of a camera and specifically at higher resolutions, where the processing power needed becomes immense.

I am sure Avigilon as well as many other manufacturers would like to do H.264 SVC for 4K / 8MP / 12MP / 16MP cameras. The problem is that no encoder supplier supports it at anywhere near those resolution / frame rate levels.

For example, look at the Ambarella S2, a 4K encoder chip popular with surveillance manufacturers. There is no mention of SVC support, but there is a note about "Up to 8 simultaneous stream encodes", which is the approach a number of people suspect Avigilon is using.
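To see why multi-stream ("simulcast") encoding is a plausible substitute for SVC, here's a back-of-the-envelope sketch. The resolution ladder and the 0.08 bits-per-pixel figure are my own assumptions, not anything from Ambarella or Avigilon:

```python
def stream_bitrate_mbps(width: int, height: int, fps: float, bpp: float = 0.08) -> float:
    """Rough H.264 bitrate estimate: pixels per second times an assumed bits-per-pixel."""
    return width * height * fps * bpp / 1e6

# Hypothetical simulcast ladder: full resolution plus three half-steps,
# each encoded as an independent stream (no SVC layering).
ladder = [(3840, 2160), (1920, 1080), (960, 540), (480, 270)]
rates = [stream_bitrate_mbps(w, h, 30) for w, h in ladder]
total = sum(rates)
```

The half-step ladder only adds about a third more total bandwidth on top of the full-resolution stream, since each lower tier carries a quarter of the pixels of the one above it.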

I don't think they are doing H.264 SVC and nothing that I have heard from them, or their minions, indicates that it is.

Minions, please confirm or deny.

Do you (or anyone) know if their SVC J2000 encoder chip design was done in-house or sourced elsewhere? That might indicate which partners they would use for an H.264 SVC design. The mystery is, though: if they did do it, why wouldn't they announce it? Maybe it's not (intentionally or not) fully compliant.

JPEG2000 is SVC by design so it's a little confusing to call it 'SVC J2000' or 'SVC in J2000'. I don't believe there is any special encoding needed to do JPEG2000, you just need to be brave / crazy enough to accept its massive bandwidth penalty.
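To illustrate what "SVC by design" means here: each JPEG2000 wavelet decomposition level halves both dimensions, so a decoder can pull a reduced-resolution image from a subset of the codestream without any re-encoding. A rough sketch (the 6576 x 4384 frame size, roughly a 29MP sensor, and the five-level decomposition are my assumptions):

```python
def j2k_resolutions(width: int, height: int, dwt_levels: int):
    """Resolutions a JPEG2000 decoder can extract: each DWT level halves each dimension."""
    return [(width >> i, height >> i) for i in range(dwt_levels + 1)]

# Assumed ~29MP frame with a five-level decomposition:
levels = j2k_resolutions(6576, 4384, 5)
```

So a client that only needs a small view can skip levels and pull roughly a quarter of the data per level skipped; the scalability comes free with the codec.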

Avigilon uses Ambarella for a lot of their cameras but I am not certain about the older Pro ones.

Here's a spec sheet from NXP on a chipset implementing H.264 High profile as well as SVC-T with up to a 12 megapixel sensor...

Although they only give

SVC-T support enabling H.264 (1080p @ 30 fps ...)

Maybe if it dropped to 5 FPS it could do 4K? Or is that unlikely to be the case?

SVC-T is only the temporal or frame rate component, not scaling of resolution/regions, which is what Avigilon is claiming.

Also, that spec sheet is claiming "SVC-T support enabling H.264 (1080p @ 30 fps + 1080p @ 15 fps + 1080p @ 7.5 fps + 1080p @ 3.75 fps)" It's not anywhere near 12MP streams.
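For what it's worth, those four rates (30 / 15 / 7.5 / 3.75 fps) are exactly what a dyadic four-layer temporal hierarchy produces. A sketch of how SVC-T typically assigns frames to layers; the layer-assignment scheme below is the common dyadic one, not anything NXP-specific:

```python
def temporal_layer(frame_idx: int, num_layers: int) -> int:
    """Dyadic temporal-layer assignment, as in typical SVC-T hierarchies."""
    period = 1 << (num_layers - 1)      # e.g. an 8-frame cycle for 4 layers
    pos = frame_idx % period
    if pos == 0:
        return 0                        # base layer: every 8th frame
    # Higher layers fill in the gaps: the lowest set bit of the position
    # determines how "deep" in the hierarchy the frame sits.
    return num_layers - (pos & -pos).bit_length()

def fps_at_layer(base_fps: float, num_layers: int, t: int) -> float:
    """Frame rate obtained by decoding temporal layers 0..t only."""
    return base_fps * (1 << t) / (1 << (num_layers - 1))
```

Decoding only the base layer of a 30 fps / 4-layer stream gives 3.75 fps; each extra layer doubles that, which matches the NXP spec line exactly. But all four layers stay at 1080p: frame rate scales, resolution doesn't.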

JPEG2000 is SVC by design...

Right, but "H.264 SVC" is also SVC by design. My point being that the "by design" qualifier doesn't somehow generate code for an implementation just because the requirement is on a sheet of paper. So Avigilon had to design, buy, build, and modify a scalable JPEG2000 codebase to meet their particular demanding requirements. And apparently they did it...

So I don't understand why it's not considered likely that they could do the same with H.264, either by implementing SVC by the book or hacking their own SVC-like extension. Why is this more fundamentally challenging in your opinion? They should be way ahead of everyone on a conceptual/practical level by now...

JPEG2000 is far less demanding than H.264 SVC for 8 to 12MP cameras. That's the difference.

I am not sure why you are pressing on the H.264 SVC angle. No one else is suggesting this, not even from the Avigilon camp.

Harald Lutz also suggested it. But really I guess more than anything else I'm having a hard time understanding the novelty of HDSM even in all its JPEG2000 glory!

So let's say I'm running ACC with nine 29MP JPEG2000 cameras, and my monitoring setup is a 1080p display with a 3x3 matrix live feed. The monitoring client is some PC on the local LAN. Now, HDSM is not going to try to send me nine 29MP streams; instead it's going to look at the resolution of my monitor and the current resolution/size of my view windows, and magically tailor a stream which only sends the pixels that I need to see at any given moment. Did I state that right? That's good of course, but conceptually how is that different than if I have a monitoring session on the server itself, with nine full-resolution streams going one to each view window, and I remote desktop into the server from the client and monitor like that?
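If HDSM really is leaning on JPEG2000's built-in resolution levels, the client-side logic could be as simple as this sketch. The frame size, the five-level decomposition, and the level-picking rule are all my guesses, not Avigilon's actual method:

```python
def pick_dwt_level(source_width: int, window_width: int, max_levels: int) -> int:
    """Choose the deepest resolution level that still covers the display window."""
    level = 0
    while level < max_levels and (source_width >> (level + 1)) >= window_width:
        level += 1
    return level

# A 3x3 grid on a 1920-wide monitor gives ~640-pixel-wide tiles;
# assume a ~29MP camera (6576 wide) with five decomposition levels.
tile_width = 1920 // 3
level = pick_dwt_level(6576, tile_width, 5)
delivered_width = 6576 >> level
```

For a 640-wide tile, the client would pull the 822-pixel-wide level (level 3), i.e. roughly 1/64 of the full-resolution pixels, and could step back up a level or two on digital zoom without the camera ever re-encoding.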

Is that technically transcoding? If I zoomed, I would get as much detail as available without incurring any more bandwidth. Remote Desktop is only gonna send the pixels I need to see, so it seems the same as HDSM in that regard. Now in the real world the performance would be terrible of course, but on one level (not performance) isn't it basically doing the same thing, just in a more optimized way?