Could A Manufacturer "Beat" An IPVM Test?

Reading about the VW emissions scandal, I had a thought regarding the possibility of video manufacturers beating independent tests by a similar method of recognizing that they were being tested and adapting their behavior.

I specified "an IPVM test" only because we are all familiar with them, but this could apply just as well to photography tests and the like.

And although I think it is probably far more work than it's worth, and I doubt it has even been seriously considered by a manufacturer, is it even possible?

I have a couple ideas in mind specifically, but here's a stupid one:

Using video analytic capability, recognize that you are in the IPVM lab, using the famous eye chart and license plate as clues.

Once you know the shot you are in (conference room, 20' field of view) and the lighting, adjust your image modestly to improve it based on your knowledge of the scene.

Remove some noise.

Make the letters clearer on the eye chart.

Adjust your IR strength optimally for the room you know you are in, etc.
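To make the stupid idea concrete, here's a toy sketch of the detection half (all names invented, plain Python, nothing like real camera firmware): fingerprint the scene by average brightness over a coarse grid, and flip into "test mode" when it matches a stored signature of the lab.

```python
# Hypothetical sketch only: crude scene fingerprinting that a camera
# could use to guess it is sitting in a known test room. The function
# names and the grid-of-averages approach are invented for illustration.

def frame_signature(frame, grid=4):
    """Reduce a grayscale frame (list of pixel rows) to a grid x grid
    signature of average brightness per cell."""
    h, w = len(frame), len(frame[0])
    sig = []
    for gy in range(grid):
        for gx in range(grid):
            total = count = 0
            for y in range(gy * h // grid, (gy + 1) * h // grid):
                for x in range(gx * w // grid, (gx + 1) * w // grid):
                    total += frame[y][x]
                    count += 1
            sig.append(total / count)
    return sig

def looks_like_lab(frame, lab_signature, tol=10.0):
    """True when every cell of the frame's signature is within tol
    of the stored signature of the known lab scene."""
    sig = frame_signature(frame)
    return all(abs(a - b) < tol for a, b in zip(sig, lab_signature))
```

A real implementation would key on the eye chart and plate directly, but even something this crude shows how little "analytics" the trigger actually needs.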

Maybe it's not worth it today, but with analytics on most IP cameras these days, I don't think it's out of the realm of possibility.

I'm sure the VW trick was infeasible at one time as well...

What do you think?


If manufacturers had the capability to do what you suggest, we would all already know about it because they would be selling that differentiating capability hard - before everyone else figured out a way to do it too.

You mean Analytics?

I mean analytics that are specifically designed to - and can actually - perform as you described.

My point is that analytics with such specific capabilities to perform the 'recognition' in your admittedly stupid scenario could also use those capabilities to perform much more practical duties that have a much higher value in the marketplace.

Do you think that recognizing that you are in the Indoor Conference Room is beyond the capabilities of mainstream analytics?

Serious question, I'm no expert.

Superficially at least, the challenge I saw was what to do, once you know you are in the test, to increase the image quality.

The one setting that might make a material difference is compression level, i.e. turn compression to the minimum possible. However, we also measure bandwidth, so that would be detected.

The one setting that might make a material difference is compression level...

That's a good one, I hadn't thought of that.

Playing Devil's Advocate though, how would you detect it, since q (quantization) and bandwidth are not highly correlated unless scene complexity is considered? Even then they are only correlated for a given model. Certainly you would admit we've seen bandwidth numbers all over the place.

You could of course make an unpublished control shot of a roughly similar scene to compare it to.
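As a sketch of what that control-shot comparison might look like (the threshold and numbers here are made up; this is not how IPVM actually tests):

```python
def suspicious_bitrate(test_scene_kbps, control_scene_kbps, tolerance=0.3):
    """Flag a camera whose bitrate in the published test scene differs
    sharply from its bitrate in a visually similar, unpublished control
    scene. A camera quietly cranking quality only when it recognizes the
    test scene would show a large mismatch; an honest camera should
    produce roughly similar bitrates for scenes of similar complexity.
    The 30% tolerance is an arbitrary illustrative choice."""
    ratio = test_scene_kbps / control_scene_kbps
    return abs(ratio - 1.0) > tolerance
```

The point is that the comparison is per-model, so the "bandwidth numbers all over the place" across models wouldn't matter.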

VW wrote a program to recognize the connection of an emissions test device and reduce the performance of the car during the test. That increased the mileage and reduced the emissions. Of course the car could have run in this mode all the time, but the result would have been a poor experience for the drivers. That reduced performance was hidden from the driver and only exposed to the test machine, which didn't care.

If you were to attempt the same thing, you would have to assume IPVM would only be testing for bandwidth and that the final image (performance) could be sacrificed during the test. Or, IPVM could test for image quality, whereby performance ("bandwidth") would be sacrificed. To pull this off the manufacturer would have to write code that would manipulate the test gear!

Or, IPVM could test for image quality, whereby performance ("bandwidth") would be sacrificed. To pull this off the manufacturer would have to write code that would manipulate the test gear!

What you say is true in that case, and in most other cases as well. But there are some parameters that would be undetectable. As a trivial example, consider IR in the conference room: you would know the distance to the target and could deliver better illumination without washing out the subject.

As a trivial example, consider IR in the conference room: you would know the distance to the target and could deliver better illumination without washing out the subject.

This is what Und.2 is getting at - if you had this capability, why would you use it to scam a bench test, rather than tout it as a feature?

Because this is not really a capability that anyone wants. It only would work in the IPVM test conference room, assuming it could detect it was there. It's cheating.

Consider: if you know your camera is under IPVM test (due to recognizing the conference room), you know the prop used for license plates is JDZ 3403. You certainly could make some good compression decisions to ensure that number remained legible. (Once in a blue moon you might think you were under test falsely, and spit out that plate number, but you'd probably get away with it.)
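For illustration only, a toy version of that compression decision (the function names and QP values are invented; lower QP means more bits and sharper detail in most codecs):

```python
KNOWN_PLATE = "JDZ 3403"  # the IPVM prop plate mentioned above

def quality_for_region(ocr_text, default_qp=32, boosted_qp=18):
    """Hypothetical sketch: pick the quantization parameter for a
    detected text region. If on-camera OCR reads the known test prop
    plate, suspect we are under test and spend extra bits on that
    region so the number stays legible."""
    if ocr_text.strip().upper() == KNOWN_PLATE:
        return boosted_qp
    return default_qp
```

The overall bitrate barely moves, since only one small region gets the boost, which is exactly why it would be hard to catch from bandwidth alone.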

A bit far-fetched, I agree. But let's now forget IPVM testing for a moment and move to general imaging fakery. Consider this well known chart:

A camera that recognized that it was being tested by this chart could certainly improve its image immensely, since it knows at the pixel level what it would look like.
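A toy sketch of the trick (hypothetical names; real firmware would do this in the ISP pipeline, not Python): once you know the reference chart pixel-for-pixel, just blend each observed pixel toward the known value.

```python
def cheat_toward_reference(observed, reference, alpha=0.5):
    """Blend each observed pixel toward the known reference chart
    value. The more confident the chart detection, the higher alpha
    can go; at alpha=1.0 you are simply emitting the chart itself.
    Both arguments are lists of pixel rows of the same shape."""
    return [[(1 - alpha) * o + alpha * r
             for o, r in zip(obs_row, ref_row)]
            for obs_row, ref_row in zip(observed, reference)]
```

At the extreme this isn't enhancement at all, it's substitution, which is what makes it fakery rather than clever processing.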

And yes, you could expose this type of fakery any number of ways, but as long as we consider the idea preposterous, we are unlikely to test specifically for it.

But it's not that infeasible really, and it's getting easier every day.

Related: Samsung Rigs Tests

"If the [Samsung tablet] system detects that one of a hard-coded list of apps is running, it turbo-charges the graphics processing unit (GPU), yielding that 20% boost. Then when the test is over, it scales it right back, because turbo-charging uses up the battery."

Slightly skeezy if it's looking for benchmark apps specifically, but not an unusual thing... hell, in the computer world, the ability to tune your performance for specific apps (especially GPU performance for games) is a selling point.
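The mechanism the article describes needs nothing more than this sketch (app names taken from common benchmarks, clock numbers illustrative: 540 MHz is the 20% boost over a 450 MHz base):

```python
# Hypothetical sketch of Samsung-style benchmark detection: a
# hard-coded app list gates a GPU clock boost. Names and numbers
# are illustrative, not taken from any actual firmware.
BENCHMARK_APPS = {"GFXBench", "AnTuTu", "Quadrant"}

def gpu_clock_mhz(foreground_app, base=450, turbo=540):
    """Return the GPU clock to run at: turbo only while a known
    benchmark is in the foreground, base otherwise (to save battery)."""
    return turbo if foreground_app in BENCHMARK_APPS else base
```

Which is also why the per-app tuning defense holds up: the exact same code path, with games in the list instead of benchmarks, is a legitimate feature.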

IMO it's simply not worth the effort, even if you have the capability to do it.

IPVM is very popular, but it is still read by a small fraction of the security industry overall.

IPVM also tests only a subset of the available products on the market.

Let's say you were Axis and had the resources to do something like this; you'd be better off just using those resources to make a better product overall. Why try to rig what would ultimately be a slight modification to your performance, for a test that would really only be seen by a small percentage of people? If you really wanted better results so badly, you could buy a journalist a cheap steak dinner, spend 1/100th of the money, and have the results immediately.

Unlike VW, which is selling a more mainstream product into a much larger market place, it's just not worth it.

IMO it's simply not worth the effort, even if you have the capability to do it.

Agreed. But it gets easier every day. Who knows, if anybody ever gets face recognition to work, the detection part might be as simple as registering a few well known faces: "Derek detected, confidence level 95%, and that must be Ethan. Bingo! Begin test mode."
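A toy version of that trigger (names from the thread, threshold and logic invented): require two known testers recognized with high confidence before flipping the switch, to cut down on false positives.

```python
KNOWN_TESTERS = {"Derek", "Ethan"}  # hypothetical watch list

def should_enter_test_mode(detections, threshold=0.9):
    """detections: list of (name, confidence) pairs from a face
    recognizer. Enter test mode only when at least two distinct
    known testers are seen with confidence at or above threshold."""
    seen = {name for name, conf in detections
            if name in KNOWN_TESTERS and conf >= threshold}
    return len(seen) >= 2
```

The blue-moon false positive from earlier gets even rarer once you require two matches at once.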

IPVM is very popular, but it is still read by a small fraction of the security industry overall.

Agreed, again. However, I don't know what the fraction is that read IPVM, but you only need reach the small fraction of industry people who make significant purchasing decisions of IP cameras. Put another way, while there may be a large fraction of people in the industry that don't subscribe to IPVM, any big volume buyer of IP cameras who isn't aware what IPVM thinks may be harder to find.

IPVM also tests only a subset of the available products on the market.

True. Once written though, the detection part would go into all your firmware as a library.

If you really wanted to get better results so badly you could buy a journalist a cheap steak dinner and spend 1/100th of the money and have the results immediately.

I have agreed with your prior three points, but here you are either intentionally exaggerating or blissfully unaware of the current journalistic preference for pricey Moose Filets ala Shenzhen.

Unlike VW, ... it's just not worth it.

Disagree, VW clearly wasn't "worth it". It would be hard to do worse. :)

The more devious side of me considers an alternate scenario to beating an IPVM test: horribly failing an IPVM test.

We all know that IPVM is heavily biased against most manufacturers. The tests performed are selective, poorly carried out, and done without manufacturer "assistance" to ensure that optimal results are achieved.

What if a camera could detect it was in an IPVM test scene and purposefully perform poorly? IPVM would surely post the results (maybe they'd reach out for comment first, but you could just ignore that), highlighting the poor performance of the latest FooCam.

The manufacturer could then come on IPVM and claim that the test methodology must be flawed or biased. They could ask others to replicate a similar test, which would likely yield much better performance, from multiple sources. Then you could call all of IPVM's tests and opinions into question, toppling the entire IPVM empire.

Or something like that.

Schmode, stop commenting on IPVM and enjoy your retirement, k?

The more devious side of me considers an alternate scenario to beating an IPVM test: horribly failing an IPVM test.

Now that's good stuff, Bryan. Or how about a little DoS attack on the competitor's cameras that may be on the same test rig!?

Why stop there?

Why not have your camera - when it knows it's in an IPVM test - generate targeted gamma rays that can bombard other cameras image sensors and make them perform badly in the IPVM test?

Why not have your camera - when it knows it's in an IPVM test - release invisible mind-control gas into the air and also open 2-way audio so you can dictate what test results you want published to the now mind-controlled Ethan and Derek?

...release invisible mind-control gas into the air and also open 2-way audio so you can dictate what test results you want published to the now mind-controlled Ethan and Derek?

Heh, heh. That's the spirit!

Your open mind shows that you will be ready for the deception, if and when, and however it comes.

I, for one, will welcome our lizard overlords.

Actually, considering how some cameras will boot up and attempt to connect to a TFTP server for a firmware update (Hikvision for example, as long as that TFTP server returns the magic byte), you could easily host a TFTP server and push malicious firmware to a lot of cameras these days. That would make those "cheap" cameras perform unreliably in a test, and on a network in general.

The beauty of this is that the cameras only attempt to upgrade when rebooted, so just plugging in a new unit with a malicious TFTP server and firmware wouldn't cause any immediate problems. But as cameras were rebooted (after a power loss, or another legit firmware update) they'd fall offline and appear unreliable.

Many other units will ultimately allow firmware updates from various less-than-verified sources if you know a little bit about their defaults.

I think for ultimate security, every device should be on its own VLAN ;)

But my life is meaningless without you in it.

--FakeBryanSchmode

Thought experiment of the day.

Forget face rec and gamma rays for a moment.

What if an engineer from Manufacturer A was able to sneak into the conference room and change any setting, and dial the image in, without any trace (except in the stream itself) being left in the camera, right before being tested?

Would that be fair?

I am not clear on the "improve the image quality" part of this. If you could improve the image quality, why wouldn't you ALWAYS improve the image quality?

Now the issue of the camera being at defaults vs. optimized settings is a valid point....

If you could improve the image quality, why wouldn't you ALWAYS improve the image quality?

Because you don't ALWAYS know what the picture is supposed to look like.

For instance, if you could detect that this standard chart was in use, which IPVM has used on occasion, you could (for a mild effect) remove the noise from all the known white areas.
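For example, a toy version of that white-area cleanup (hypothetical names; a real camera would do this in hardware): wherever the chart layout says a pixel should be pure white, snap it to white and erase the sensor noise there.

```python
def whiten_known_regions(frame, white_mask, white=255):
    """Snap every pixel the known chart layout marks as pure white
    to full white, wiping out any noise in those regions. frame and
    white_mask are same-shaped lists of rows; white_mask holds
    booleans. Pixels outside the mask pass through untouched."""
    return [[white if white_mask[y][x] else frame[y][x]
             for x in range(len(frame[0]))]
            for y in range(len(frame))]
```

It only works because the camera knows what the answer is supposed to be, which is the whole objection.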

And yes, a settings optimizer alone, one that would use the camera's built-in analytics to recognize being in the IPVM conference room wide-FOV shot and then adapt by changing parameters, is doable today.

Perhaps a formal challenge will be issued, in which case we might find out. :)