San Francisco passed the legislation 8-1 today. While the face recognition 'ban' has already received significant attention over the past few weeks, there is more to it for the surveillance industry.
What has been largely ignored is that, beyond the 'ban', the same legislation also increases regulation of other common surveillance technologies, including license plate recognition, CCTV, and gunshot detection.
Inside this note, we examine:
Why face recognition is banned
When face recognition is allowed
How surveillance cameras are included in this legislation
Why and how city surveillance camera users must apply for approval
Who is exempt
How enforcement is to be handled
How violations will be handled
San Francisco is not the first city to pass such a law; we examine who was
Dan, does this affect private companies with exterior public cameras, such as the Transbay Terminal, where multiple intersecting municipalities' transit systems (such as Caltrans, BART, and Muni) flow from outside SF jurisdiction? I also find it curious that a lot of banking businesses downtown have public exterior cameras surveilling ATMs and kiosks, both from a canopy camera FOV and a direct facial FOV. Yes, they are running FR as well as other analytics.
I raise an eyebrow that an area known for technology is willing to ban it (insert conspiracy theories here).
Good report. In regard to Santa Clara: there is FR at the 49ers stadium, so is it possible that certain entities can gain an exception to the rule by way of approval from the Santa Clara Office of the Sheriff?
Hi undisclosed #1. The law affects public departments, not private businesses or homes. City transit like Muni is affected. However, the primary sponsor of the bill mentioned specifically during the meeting that the intent was not to stop or block any safety measures already in place, like the Muni cameras. He said that when they submit their Surveillance Technology Policy, it will likely be accepted right away.
One way that the Santa Clara law differed from San Francisco's is that it did not outright ban anything, face recognition included.
We've added the following input from SIA into the story above:
Input From SIA
The Security Industry Association said the passage of the law represented "misconceptions" about technology and was an example of an "ill-informed policy":
Passage of the ordinance represents yet another recent example where misconceptions about technology can lead to ill-informed policies that weaken public safety. The proposal would impose extreme hurdles on city agencies, clearly intended to severely limit the use of even common technologies like video surveillance. Rather than accepting unproven assumptions, community leaders should examine actual current uses of these technologies in the U.S. and related laws, policies and common best practices that address many privacy and civil liberties concerns. Any effort to establish additional constructive rules should not unreasonably restrict use of modern technology tools that have become essential to public safety.
Painting all face recognition with the same brush by citing one study by one organization on one algorithm isn't a solid foundation for damning an entire class of technology as the ACLU did. While I have no knowledge of what the SIA may or may not be capable of, nor the ACLU for that matter, I don't think that testing capability per se is reason to throw out the observation that this is ill-informed policy.
Testing face recognition is an ongoing process at many federal, state and local agencies as well as private organizations. No single technology is going to be appropriate for all situations, and knowing the difference is part of the challenge facing (sorry for the pun) stakeholders. For example, face recognition systems used to de-duplicate motor vehicle databases to ensure individuals have a single identity (license credential) would probably be a poor choice for a real-time application like monitoring passenger entry or egress at an airport. Well-informed policy is only crafted by well-informed individuals. Obviously the key here is making objective and relevant testing data available to address the appropriateness of a technology. NIST does a great job of testing algorithms in a lab environment and takes great care in defining how the tests are constructed and the results tallied.
Unlike NIST's lab testing, the DHS Biometric Technology Rally is a testing program taking place at the Maryland Test Facility that takes an operational look at face, iris and fingerprint technologies for meeting the needs of Homeland Security stakeholders. My impression of the goal(s) (I obviously can't speak for DHS or the staff at MdTF) is to objectively evaluate biometric systems meant to quickly and accurately move people through queues at ports of entry.
Does exemplary performance in either venue mean that a face recognition technology does not pose a threat to "...civil rights and liberty in a manner that surpasses the good it might do"? Certainly not - any tool can be misused and abused (anyone ever use a screwdriver as a prybar?).
My point is carefully selecting the right tool for the job, and having the experience and data necessary to make the decision in the first place, are the best ways to moderate the use of technologies like face recognition in public spaces.
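The de-duplication use case mentioned above can be made concrete. In such systems, each enrolled photo is reduced to an embedding vector, and an offline 1:N sweep flags pairs of records whose embeddings are too similar. The sketch below is illustrative only, assuming a generic embedding model (the `find_duplicates` helper and the 0.9 threshold are hypothetical, not any agency's actual pipeline):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_duplicates(embeddings, threshold=0.9):
    """Offline 1:N de-duplication: flag every pair of enrolled records
    whose embeddings exceed the similarity threshold. Latency matters
    little here, which is exactly why the same design would be a poor
    fit for real-time entry/egress matching at an airport."""
    dupes = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine_similarity(embeddings[i], embeddings[j]) >= threshold:
                dupes.append((i, j))
    return dupes
```

The batch pairwise loop is fine overnight against a license database, but its O(N²) cost and lack of latency guarantees illustrate why the same tool does not transfer to a live queue.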
The legislation highlights/targets face recognition, but it is equally restrictive to other technologies that serve the public interest. For example, I truly fail to see how technology like gunshot detection invades an individual's civil rights, or how banning it would support public safety.
In this case I would defer to your experience with regard to the testing abilities of SIA. I think the real point, though (agreeing with you), is that the industry should object in the strongest terms possible. Personally, I think you are putting a lot of emphasis on the ACLU's test of Amazon's product and then taking the position that all FR is bad. The fact is that the ACLU results were spot on, and in that particular situation there is (IMHO) nothing to disprove.
Now, if the San Francisco Board of Supervisors wants to ban Rekognition based on the ACLU tests, I think they are on solid footing. However, the information that Dan put forward shows the BOS want to ban all face recognition, and does not itself stipulate how they arrived at the conclusion that face recognition "...has traditionally threatened civil rights and liberty in a manner that surpasses the good it might do." This certainly feels like the negative knee-jerk reaction you and I agree is misplaced.
Thank you for your comment! Just to be clear, the law does not ban gunshot detection. Only face recognition use by city departments was banned in this law.
Other technologies deemed "surveillance technologies," like gunshot detection, need to be approved by the Board of Supervisors based on their appraisal of the Surveillance Technology Policy provided to them by COIT, which they develop based on the requesting department's Surveillance Technology Impact Report.
I think what's important here is why gunshot detection is included as a surveillance technology. The law defines surveillance technology as:
Any software, electronic device, system utilizing an electronic device, or similar device used, designed, or primarily intended to collect, retain, process, or share audio, electronic, visual, location, thermal, biometric, olfactory or similar information specifically associated with, or capable of being associated with, any individual or group. (emphasis added)
While I believe it is typical for gunshot detection to use both audio and thermal detection, I don't think the gunshot it detects can be specifically associated with an individual or group.
Certain solutions claim to be able to identify the type of gun, but not a specific person wielding a gun.
There is certainly location information available for the gunshot, but that cannot be specifically associated with any individual or group.
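The point that a gunshot yields a location rather than an identity follows from how such systems typically work: they estimate the shot's position from the time difference of arrival (TDOA) of the sound at several microphones. The sketch below is a simplified illustration of that principle only (sensor positions, the brute-force grid search, and the `locate_shot` helper are assumptions for the example, not any vendor's actual algorithm):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate in air at 20 °C

def locate_shot(sensors, arrival_times, grid=200, extent=500.0):
    """Brute-force TDOA multilateration: scan a grid of candidate (x, y)
    points and return the one whose predicted arrival-time differences
    best match the observed ones. The output is a coordinate estimate
    only -- nothing in the measurement identifies who fired."""
    ref_t = arrival_times[0]
    ref_s = sensors[0]
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            x = -extent + 2 * extent * i / grid
            y = -extent + 2 * extent * j / grid
            d_ref = math.hypot(x - ref_s[0], y - ref_s[1])
            err = 0.0
            for (sx, sy), t in zip(sensors[1:], arrival_times[1:]):
                d = math.hypot(x - sx, y - sy)
                predicted_tdoa = (d - d_ref) / SPEED_OF_SOUND
                observed_tdoa = t - ref_t
                err += (predicted_tdoa - observed_tdoa) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

Note that the only inputs are microphone positions and timestamps, and the only output is a coordinate, which is why associating the event with a specific individual is not something the technique itself provides.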
We received a reply back from SIA in response to John's comments above:
I would refer you to our previous statement. One of the sources that SIA relies upon is NIST’s independent and highly respected research. I would encourage you to review their body of work on video security and facial recognition. These technology studies are ongoing and the published data is open to the public.
To which John replied back:
How are you interpreting NIST results? Yes, there is research but is that proof that it should not be regulated or banned? NIST is not taking a position on its use, just how accurate or not certain algorithms are.
We have not yet heard back from SIA with a response.
I read the SIA's response to be pointed more at the misconceptions of data farming, police state, and privacy issues than at testing of the actual technology. We all know the limits of the current FR software and the inherent biases many are working through. But I would tend to agree with the SIA that this ban has much more to do with people's fear of the government "spying" on them than anything else. San Francisco would have done much better to create comprehensive data collection and protection legislation instead of an outright ban.
Easy, John, I don't think SIA is out of line. The ACLU's "testing" of one facial recognition system, and its insinuation that all facial recognition systems are faulty and unreliable, is not what I would consider a thorough evaluation of the technology. SIA did not claim to have tested solutions, only that there are current use cases in the U.S. that could help moderate San Francisco's hard-line approach on surveillance. I would be interested in knowing what research the city pursued in developing that legislation.
There's a massive difference between a private company using facial recognition for creepy marketing and/or flagging when some undesirable person enters their property, and a government using it to spy on citizens and potentially take away their liberty. As far as I see this, it just makes government accountable to the people being governed, and it's going to force agencies to justify their use of the technology and make the existence of that technology public. If that's going to cause San Francisco to somehow become "unsafe", then they have far bigger problems than this law.
"As far as I see this, it just makes government accountable to the people being governed and it's going to force agencies to justify their use of the technology and make the existence of that technology public. "
Excluding the FR ban in this new law, I agree with your take on this 100% - and it is a long time coming.
We all want our LE to be able to do what they do to the best of their abilities. But when those same LE agencies give the appearance of 'hiding' the technology that they have found/developed (think Stingrays - which, not coincidentally, are covered in this legislation) we citizens can sometimes get worried.
If LE had understood the long-term benefits of 'accountability to those they serve' when they were trialing all this new tech, then we citizens would now be far less likely to be worried about how our govt and LE agencies operate - and Kindle sales of Animal Farm and Brave New World would fall off significantly.
My biggest concern and skepticism with this, specifically because it is in San Francisco, is related to this section:
In a city where dozens (hundreds?) of private, venture-funded companies happen to be developing face recognition solutions, it is simply creating a business environment for these private companies to ingest City/public safety cameras, perform the face recognition as a service, and provide that data back to the city. This would just be the Vigilant LPR data collection model for faces.
Perhaps I am misunderstanding this section of the law - as long as the City is not performing the face recognition itself, there is no issue?
From the reading I've done, Sean, I believe you're right. The law states that a city department can use face recognition data from a 3rd party as long as it didn't "solicit" said data and then documents its reception and use:
In the same week that San Francisco voted to become the first city in the United States to ban government use of face recognition surveillance, two New York State legislators introduced a bill to ban the use of facial recognition surveillance by residential landlords.