Should IPVM Assign Manufacturers A Credibility Rating?

Manufacturers have been known to stretch the truth. But do ALL of them? And to what degree?

Consider a hypothetical honest manufacturer: Marketing puts pressure on engineering to claim some specification which can differentiate their product from the rest. Engineering pushes back. Marketing sulks.

Furthermore, industry pundits like John, in an effort to protect the flock, steadfastly warn against trusting ANY manufacturer's word on anything not easily verified, advising that a skeptical attitude is best.

This must have a demoralizing effect on the honest manufacturer, who figures: what's the point of issuing real specs that are going to be automatically assumed to be inflated or distorted? Might as well keep up with everyone else's version of reality. There's no incentive to tell the truth.

IPVM could provide that incentive by assigning a manufacturer credibility rating, or a fudge factor, or Pinocchio noses, or whatever.

Maybe re-evaluated annually, based on IPVM research from the previous year concerning things like low-light performance, dynamic range, expected vs. actual ship dates, etc.

The primary aim would be to quantitatively reward manufacturers for dealing honestly, as well as to be an aid to members.

Let's give the manufacturers a reason to change!


The general idea is attractive to me, except for the ratings part.

I generally don't like scores like 82/100, or 9.5/10, or 4.5 stars, etc., because they tend to convey false levels of precision (e.g., is Arecont really a 59, or just a 61?).

But that said, an annual report or post on the specific claims / issues / manufacturers that are fallacious / misleading would be worthwhile. Thanks for the suggestion.

I think that in reality it is a little more nuanced than what you describe, though I like the overall idea.

It seems that more often than not, the marketing people believe that what they say is fundamentally correct, even if not technically so. To people with a "binary" engineering mindset, anything less than absolute statistical data tends to read as false or unreliable.

IPVM could assign a rating or ranking to individual marketing releases, statements, etc., and also provide an overall ranking. An interesting analysis would be on claims like "1 Camera X can replace Y of Camera Z". While potentially technically true, in practice it may have limited chance of applying in real life. Similarly for claims on range, accuracy, low-light performance, bandwidth reductions, and so on.

Personally, I think that what would be best is an attempt to train marketing people in what matters and what doesn't vs. trying to shame them into accuracy. Most of the marketing people I've met want to produce accurate info.

On a larger scale, a company that continually produces materials that are inaccurate or just generally FUD is an indication of a company that has no clear differentiator or value. If you did this right you could probably divine the companies that have long-term viability vs. the ones that are only out for short-term numbers.

"An interesting analysis would be on things like "1 Camera X can replace Y of Camera Z"

Interestingly enough, this has died down. I am not sure if it is because marketers got pushback on the veracity, or because they realized it's overblown, or because they recognize that everyone has multi-megapixel now...

"Most of the marketing people I've met want to produce accurate info."

I think that's a fair point; 'most' is certainly accurate. But I've seen more than enough sales and marketing leaders who like to push into the gray zone of accuracy.

That noted, I feel that we are at a relative low point in marketing spin, though unfortunately that seems to be tied to the low level of new products...

"Interestingly enough, this has died down. I am not sure if it is because marketers got pushback on the veracity, or because they realized it's overblown, or because they recognize that everyone has multi-megapixel now..."

IMO, it's because multi-megapixel is commonplace now. I don't think the push back ever got beyond a small vocal minority. I have heard the concept still tossed around, but with a more commonplace application.

"That noted, I feel that we are at a relative low point in marketing spin, though unfortunately that seems to be tied to the low level of new products..."

But I don't think that's a Bad Thing (the low level of new products). We need some time for a few bona fide de facto standards to emerge. IP is pretty much the transport of choice, H.264 is the preferred encoding scheme, and 1080p is a mainline resolution. Some of the software and access control stuff is still taking form (IMO). More than "new products", I think we need ++ versions of existing things: a solid 1080p camera with good low-light performance, WDR, motorized focus/zoom, IPv6, endpoint auth, an SD card slot, an IP66 rating, and a street price of $200 or less.

We don't need (IMO) a camera that runs Windows for its OS, or a camera that comes in a new form factor, or a camera with its own solar panel. We just need a camera that fits 80 or 90% of the common applications, then let all the manufacturers compete on price, or warranty, or support, or something other than trying to move things in a new direction. Or compete by keeping all those variables, except for one that moves ahead (5MP camera for $200, 1080p camera for $99, etc.).

Same thing for software, NVR appliances, etc. At this stage, we don't need "new" as much as we need "better".

Agreed about the overexacting precision; I was thinking more like A, B, C, D.

I also had a different idea: a real mathematical fudge factor that would work like this. Whenever IPVM measured the performance of a camera against a stated specification during normal testing, the difference between the two would affect the manufacturer's fudge factor for that spec. Sounds complicated, but a concrete example might help.

For instance, when measuring the dynamic range of Cocky Cam's U-WDRVII offering, if you find only 100 dB of actual dynamic range instead of the reported 125 dB, you adjust their DR fudge factor to -25 dB. You basically keep the biggest exaggeration of any camera as the fudge factor.

The cool thing is that the manufacturer can fix their abysmal FF without doing anything to the camera. They just change the spec sheet. Voila! From -25 to +5 instantly!
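To make the bookkeeping concrete, here's a minimal sketch of one way the fudge factor could work. All names and numbers (FudgeTracker, "CockyCam", "U-WDRVII", the dB values) are hypothetical illustrations, not real data or a real IPVM system; the key behavior is that the factor is computed against the *current* spec sheet, so a manufacturer can improve it just by correcting their claims.

```python
class FudgeTracker:
    """Hypothetical sketch: per-manufacturer, per-spec fudge factor,
    defined as the worst gap (measured - claimed) across that
    manufacturer's cameras, recomputed against current spec sheets."""

    def __init__(self):
        self.claims = {}    # (mfr, camera, spec) -> currently claimed value
        self.measured = {}  # (mfr, camera, spec) -> measured value

    def set_claim(self, mfr, camera, spec, value):
        # Manufacturers can update this at any time (e.g., fix the spec sheet).
        self.claims[(mfr, camera, spec)] = value

    def record_measurement(self, mfr, camera, spec, value):
        self.measured[(mfr, camera, spec)] = value

    def fudge(self, mfr, spec):
        # Most negative gap = biggest exaggeration; 0 if nothing measured yet.
        gaps = [self.measured[k] - self.claims[k]
                for k in self.measured
                if k in self.claims and k[0] == mfr and k[2] == spec]
        return min(gaps, default=0)


t = FudgeTracker()
t.set_claim("CockyCam", "U-WDRVII", "WDR_dB", 125)
t.record_measurement("CockyCam", "U-WDRVII", "WDR_dB", 100)
print(t.fudge("CockyCam", "WDR_dB"))  # -25

# Manufacturer corrects the spec sheet to a conservative 95 dB:
t.set_claim("CockyCam", "U-WDRVII", "WDR_dB", 95)
print(t.fudge("CockyCam", "WDR_dB"))  # 5
```

The design choice here is that the factor tracks current claims rather than a permanent history, which is what lets the spec-sheet fix pay off instantly.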

Mentally we all do this type of adjustment to some degree; this is just a way to get it out there clearly AND give manufacturers a real reward for bringing the hype down...

Btw, the reason I was thinking this was because of your rhetorical question in another thread:

"Or else do you really believe that Vivotek cameras' max WDR is 140dB vs just 130dB for Sony's best?"

I didn't know if you were saying Sony always has better technology, or that they never lie, or maybe that Vivotek was on a par with Speco?

You had an informal FF in your head for Vivotek and Sony that you used to compare them. So let's just make it a bit less mysterious and a bit more public.

Seems like a good idea, and I like the 'scorecard' concept with A,B,C,D,F, etc.

Possibly it could be based on integrator surveys, or a combination of integrator surveys and John's opinions... :D

Implicitly, this entire site is a big manufacturer's scorecard, it's just not always in an easy to digest central location.

To repeat, I don't like the scorecard because it encourages arguments about whether something is a B or a C, a B- or a C+, etc.

I'd rather just make a list of the most egregious that almost all reasonable people can agree on and highlight that.

Would you allow/promote the removal of an offending item as soon as the manufacturer would change the specification sheet item in question to be in line with measured performance?