Inaccurate accuracy claims: many companies said their face rec software had "98%" or "99%" accuracy but were unclear about how those figures were derived or what they would mean in real use. Given the vagueness, we would be skeptical.
And if anyone is looking to build their own functionality, I recommend taking a look at: https://github.com/ageitgey/face_recognition . face_recognition reports 99.38% accuracy on the Labeled Faces in the Wild benchmark, and is most likely within spitting distance of anything proprietary.
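For context on what that library's matching actually does: face_recognition reduces each face to a 128-dimension encoding and calls two faces a match when the Euclidean distance between encodings is within a tolerance (0.6 by default). A minimal stdlib sketch of that comparison step, using made-up toy encodings rather than real ones:

```python
import math

TOLERANCE = 0.6  # face_recognition's default match threshold

def euclidean(a, b):
    """Euclidean distance between two equal-length encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known, candidate, tolerance=TOLERANCE):
    """True when the candidate encoding is within tolerance of the known one."""
    return euclidean(known, candidate) <= tolerance

# Toy 4-dimension stand-ins for real 128-dimension encodings.
enrolled  = [0.10, 0.20, 0.30, 0.40]
same_face = [0.12, 0.18, 0.31, 0.42]   # small distance -> match
other     = [0.90, 0.10, 0.70, 0.05]   # large distance -> no match

print(is_match(enrolled, same_face))  # True
print(is_match(enrolled, other))      # False
```

Tightening the tolerance below 0.6 trades fewer false matches for more misses, which is exactly the knob behind the accuracy claims discussed above.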
Indeed, that is not a major omission, but I would have been pleased to see Suprema's new FaceLite reviewed as well. I am sure there are good reasons for that, on either IPVM's side or Suprema's.
Now, Charles (congrats on the article, btw), if you want a quick wrap-up on the Suprema FaceLite:
1- There are two main technologies for Face Recognition:
- Optical solutions (CCTV-based): these rely on algorithm/pixel performance only. They can be used for blacklisting (stadiums, retail, vandalism) but are not reliable enough for whitelisting (i.e., access control).
- Infrared solutions (Suprema and others): these combine light emission + IR sensors + algorithm + processing power. Advantages of IR are: a working distance of 15 cm to 1.5 m (it filters out the background and all related issues), operation in any lighting conditions (unlike CCTV, which can capture a face with sun from the side), tolerance of makeup/face paint, and fake face/photo detection (easier than with optical). These are safe enough to be used for whitelisting (= access control).
2- FaceLite works the same way as the Suprema FaceStation2, with infrared templates (the two are compatible).
Cool stuff: FaceLite is 43% smaller than FaceStation2, and the price follows the same 43%-off trend. That brings the FaceLite IR face recognition reader down to the price of a fingerprint reader (= BioLite Net: BLN2-OAB), while keeping the high performance/reliability/security. No sacrifice there!
Limitation: the face template is too big to be encoded on a card (>8 KB), and Suprema's face models are adaptive (machine learning: each time you check your face on a reader, the model is updated). The related drawback is that faces cannot be stored on RFID cards (EV1/EV2/Seos). Instead they are stored in a central database or in the reader itself (my preference). The number of face models is limited to 3,000 for 1:N (identification) and 30,000 for 1:1 (verification, in which case you need to swipe a card or input an ID before authentication). Compared to FaceStation2 (FS2), you also lose the second optical camera (which I like for the user interface and picture logs), the large touch screen, the Android OS, and the video intercom option. But that's in line with the 43% lower price point!
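The 1:1 vs 1:N distinction above is what drives those different capacity limits: in 1:1 verification the presented card/ID selects exactly one template to compare, while 1:N identification must scan the whole enrolled set. A hypothetical sketch of the two flows (the matcher, templates, and threshold are made up for illustration, not Suprema's API):

```python
THRESHOLD = 0.8  # hypothetical match threshold

def score(template_a, template_b):
    """Toy similarity score: 1.0 for identical toy templates, else 0.0."""
    return 1.0 if template_a == template_b else 0.0

enrolled = {  # user_id -> stored face template (toy strings)
    "alice": "template-A",
    "bob":   "template-B",
}

def verify(user_id, probe):
    """1:1 -- the presented ID selects a single template to compare."""
    return score(enrolled[user_id], probe) >= THRESHOLD

def identify(probe):
    """1:N -- compare against every enrolled template; cost grows with N."""
    for user_id, template in enrolled.items():
        if score(template, probe) >= THRESHOLD:
            return user_id
    return None

print(verify("alice", "template-A"))  # True
print(identify("template-B"))         # bob
```

Because 1:N does N comparisons per authentication (and false-match risk accumulates across the set), it is natural for a reader to cap 1:N at a much smaller population than 1:1.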
Privacy: face templates are stored on the central server (AES-256 encryption) or on readers (AES-128), with an optional "tamper secure" feature => if the reader is removed from the wall, it factory-resets and loses all memory (users, face models, logs, encryption keys, ...). Face models are transported between the central server and readers over TCP, using TLS 1.2 encryption/certificates.
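For the reader↔server transport described above, the client side of a TLS 1.2 channel can be sketched with Python's stdlib `ssl` module; this illustrates the general mechanism (certificate-verified, version-pinned TLS), not Suprema's actual implementation:

```python
import ssl

# Client-side context: with PROTOCOL_TLS_CLIENT, certificate and
# hostname verification are enabled by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Pin the protocol to TLS 1.2, matching the transport described above.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

A real deployment would additionally load the CA certificate that signed the server's certificate (e.g. via `ctx.load_verify_locations(...)`) before wrapping a socket.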
You are right. We use Cognitec, but not in Axxon Next. We have a second product, Axxon Intellect. This product is an integration platform and, for example, we also have the VisionLabs engine there. VeriLook has not been used in Intellect for at least the last several years.
And thank you very much for finding this. I'll pass this information to our marketing team to fix this.
Re: If there are specific companies, you would like to see covered, please ask....
What about INAXSYS? I visited them and saw a demo at ISC West that amazed me. Side-profile photos and old photos from when the person was younger could be enrolled and compared against captured faces, and they appeared in the query results as positive matches.
Anyvision's PR guy left Vegas early, so we were unable to conduct an in-person interview. And NEC was not present at the show at all. As for BriefCam and Vintra, I'll definitely keep an eye out for them next time.
"Excelled in “unfriendly, crowded conditions” and that it can facially recognize “35 moving people” in one frame. Also, the system doesn’t need a full face to work - up to 30 degree angles still allow for accurate recognition, Huntley said."
Some conditions and/or limitations of note based on the claims above:
1. Faces must be "enrolled" using actual video footage captured from a video surveillance camera that is part of the iOmniscient deployment to achieve the results stated above. While this is viable when you are looking to exclude someone from your property, or to be alerted when someone returns to it, it is not viable when you are trying to match faces against a "watch list" built from single still "2D" facial images (typical of law enforcement). In most deployments, that is the real use case. If you are matching from a single still image, the match success rate drops to 50% or less. The company will argue that this is still a much higher success rate than a human operator responsible for watching many cameras over 8-10 hour shifts. While that is true, the marketing materials and the presentations made by their own sales representatives omit this important detail, leaving end users who can never achieve the success rate in the use case they expected.
2. This company does not understand camera lenses. Plain and simple. To achieve the partial-face / "30 degree angle" result, the subject must walk directly toward the camera for nearly 10 full steps. To meet this requirement, cameras must be placed at an extremely long distance from the focal point or ideal target defined by iO. This creates a risk of obstructions before the subject completes the required number of steps, or "time" spent in front of the camera, due to the distance and height requirements of the cameras. Even when the near-impossible is achieved with camera placement and subject "walk time", the claim of match success without a full face / at 30-degree angles was still not achievable in test deployments. The only deployment in which this claim may hold true is a stadium environment, where the camera is placed across the stadium and watches the seats (still subjects) from a very long vantage point, but we did not test this theory.
3. Unfriendly, crowded situations. See #2 above. Crowded, maybe. Unfriendly, no. Conditions must be ideal, near perfect, for success. Anything that could be described as unfriendly or chaotic did not perform in our testing.
For instance, covering a set of doors where subjects cannot be forced down a specific path through environmental design will require a minimum of three cameras, but ideally four, to ensure that any walking path falls within the acceptable parameters. And remember #1 above? The subject must be enrolled from video from each of those three or four cameras in order to meet the success rates described by the company. Enrollment and matching from a still image will be met with much lower success rates, if matches are achieved at all.
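The camera-distance problem in #2 comes down to simple geometry: the angle between a subject's facing direction and the camera axis grows with lateral offset relative to camera distance, so staying under a 30-degree limit forces cameras far back. A back-of-the-envelope sketch (the offsets and distances below are illustrative, not iOmniscient's specifications):

```python
import math

def face_yaw_deg(lateral_offset_m, distance_m):
    """Angle between the subject's walking direction and the camera axis."""
    return math.degrees(math.atan2(lateral_offset_m, distance_m))

# Subject on a path offset 2 m to the side of the camera axis:
print(round(face_yaw_deg(2.0, 3.0), 1))   # 33.7 -> exceeds a 30-degree limit
print(round(face_yaw_deg(2.0, 10.0), 1))  # 11.3 -> within it
```

The same 2 m of sideways offset blows past 30 degrees at a 3 m camera distance but is comfortable at 10 m, which is why multiple distant cameras are needed to cover every plausible walking path.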
Pricing: While per-channel pricing is less expensive (but still in the thousands), facial databases are extremely expensive. Average database pricing runs $15,000 for 4,999 faces (this does not include SQL licensing or hardware). So if your use case is a college campus and you want to be alerted when an "unknown" person enters student housing after hours, and your database consists of students and staff, you will easily exceed the database capacity (think whitelisting vs. blacklisting).
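To make the capacity math concrete: at the quoted $15,000 per 4,999-face database tier, a campus-sized whitelist multiplies the licensing cost quickly (the 20,000-person campus below is a hypothetical population, not a figure from iOmniscient):

```python
import math

TIER_FACES = 4_999
TIER_PRICE = 15_000  # USD per tier; excludes SQL licensing and hardware

def database_cost(population):
    """Number of database tiers needed for a population, and their price."""
    tiers = math.ceil(population / TIER_FACES)
    return tiers, tiers * TIER_PRICE

tiers, cost = database_cost(20_000)  # hypothetical students + staff count
print(tiers, cost)  # 5 tiers, $75,000 before SQL licensing and hardware
```

A blacklist of a few hundred known faces fits in one tier; a whitelist of everyone authorized to be on site does not, which is the whitelisting-vs-blacklisting point above.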
Architecture: Each server can manage roughly 4 facial cameras. However, even when operating within that limit we saw significant video tearing. iO evaluated the issue and recommended that we "reduce frame rate and resolution". We did; however, a lower frame rate means the subject must spend longer within the ideal capture zone (a longer "walk time").
Each server is essentially its own entity from the global use and reporting perspective of an end user. When running a report from the iQ Client, reports can only be run against the server that is actively selected. Combine this with the fact that a single server supports only 4 facial cameras, and running global reports across as few as 12 cameras requires operators to run three separate reports, which then need to be exported to Excel and combined into a single report. Many features of the software are buggy, such as GUI resizing based on monitor resolution, a freezing client GUI, auto-login issues when accessing facial recognition servers, etc. None are deal-breakers, and they are reportedly going to be addressed in the next software release. A more difficult issue to address quickly is the outdated GUI look and feel; a refresh is needed.
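Because reports are per-server, operators end up exporting one CSV per server and stitching the files together by hand. A short script along these lines can at least automate the merge step (the column names and sample rows below are hypothetical, not the iQ Client's actual export format):

```python
import csv
import io

# Stand-ins for three per-server report exports (hypothetical columns).
server_reports = [
    "camera,timestamp,match\ncam1,09:00,alice\ncam2,09:05,bob\n",
    "camera,timestamp,match\ncam5,09:02,carol\n",
    "camera,timestamp,match\ncam9,09:10,dave\n",
]

def merge_reports(raw_csvs):
    """Concatenate per-server CSV exports into one combined row list."""
    combined = []
    for raw in raw_csvs:
        combined.extend(csv.DictReader(io.StringIO(raw)))
    return combined

rows = merge_reports(server_reports)
print(len(rows))         # 4 rows across the three servers
print(rows[0]["match"])  # alice
```

In practice the per-file strings would be replaced by `open(path)` handles on the exported files, with a sort by timestamp before writing the combined report back out.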
Great reviews, many thanks for that. We are questioning the GDPR side with a few facial pattern companies, and also talking to the UK government and councils. We agree with your statement on "GDPR 'Compliant' False Claims", though it is a bit of a sweeping statement. GDPR covers the collection of data (camera), the processing of data (facial pattern), the storage of data, etc. Legitimate use of AFPR is covered not just by GDPR but by lots of other regulations such as the DPA 2018, PERC2019, the ECHR, the Human Rights Act 1998, civil liberties law, etc. We post a fair bit on LinkedIn and have also written a 20-page document around data protection and facial pattern recognition, which we are just in the process of adding to.
Further discussion is needed around the use, I believe ;)