Facial Recognition's Controversial Rise in Brazil
Brazilian media, academics, and NGOs are voicing alarm at facial recognition's growing popularity in the country, where its use has expanded beyond public safety and concerns about its consequences for minority groups are intensifying.
In this post, IPVM analyzes recent news reports and shares exclusive commentary from Brazilian academics and privacy advocates in a follow-up to UK/US Media Report on Facial Recognition Problems.
- Background
- Brazil Demographics
- Black Brazilian Men Wrongfully Accused
- Recent News Stories Focus on Bias in Facial Recognition
- Academics: Technology Not Inherently Biased
- Technology Not Mature Enough for Public Sector?
- Government Not Transparent
- Disagreement on Efficacy of Police Facial Recognition Use
- Data Privacy Concerns
Background
Facial recognition software has been deployed in public safety implementations in Brazil since 2011, according to the Rio de Janeiro-based think tank Instituto Igarapé. However, although the oldest public sector implementations are already a decade old, the rollout of and investment in public sector implementations have surged since 2019, coinciding with the beginning of current Brazilian president Jair Bolsonaro's (pictured) term.
President Bolsonaro, who survived an assassination attempt during his campaign, made public safety a theme of his presidential run, including a promise of heavy technological investment for local police (in Portuguese, IPVM translated to English):
Heavily invest in equipment, technology, intelligence, and investigative capacity of the police forces[.]
Investir fortemente em equipamentos, tecnologia, inteligência e capacidade investigativa das forças policiais[.]
Following Mr. Bolsonaro's inauguration in 2019, then-justice minister Sergio Moro signed an order implementing a "financial incentive for actions aimed at countering violent crime." This program initially set aside up to R$ 200 million ($35 million USD) for distribution to state and local governments to help finance the costs of anti-crime programs.
The fact that Brazil and other regional Latin American governments have money to invest in surveillance technology, coupled with high crime rates, has provided an excellent breeding ground for the public sector's adoption of facial recognition, according to comments to IPVM from the Laboratório de Políticas Públicas e Internet (English: Public Policy and Internet Laboratory; LAPIN, for its abbreviation in Portuguese):
South American countries have some of the highest rates of violent crimes in the world, with Brazil leading the ranking. The intentional homicide rate is a good example of such a claim. Furthermore, the overall sensation of safety in the region is low. Therefore, there is a strong demand from society for crime fighting and authorities seek solutions that may at least recover people’s security sense—even at a high cost. [emphasis added]
Since the government's financial incentive came into force, Brazil's Justice Ministry alone has invested tens of millions of dollars in facial recognition-related projects, including R$ 40 million ($7 million USD) into ABIS, a not-yet-live national biometric database, according to reporting from Folha de S. Paulo, one of Brazil's two largest newspapers.
Brazil Demographics
Understanding Brazil's demographics is key to appreciating local and international media's criticism of facial recognition deployments locally (related, UK/US Media Report on Facial Recognition Problems).
Like the United States, Brazil is a multiracial society, with sizeable Asian, black, indigenous (i.e., Amerindian), and white communities. Brazil's most recent census, conducted in 2010 (the 2020 census was postponed due to COVID-19), used six racial categories: white, pardo, black, Asian, indigenous, and undeclared. Over 90% of Brazilians identified as either white or pardo.
Although less than 10 percent of Brazilians identify as black, over 40 percent identify as pardo, a term used locally to describe tens of millions of multiracial Brazilians like Paris Saint-Germain (ex-Barcelona) soccer star Neymar (pictured), for example.
As such, given that more than half of all Brazilians identify as either pardo or black, and given the research and news coverage of facial recognition's lower accuracy on darker-skinned populations, much of the recent criticism takes aim at how public sector use of facial recognition may aggravate existing racial prejudices.
Black Brazilian Men Wrongfully Accused
The Intercept Brasil recently reported on the rollout of facial recognition in the Brazilian state of Bahia by the state-level Secretariat of Public Safety (which has deployed Axis, Dahua, and Hikvision facial recognition solutions, according to LAPIN). The article opens with the account of an Afro-Brazilian man, referred to solely as 'Davi,' who was unknowingly monitored at 15 separate train stations during his commute home before being detained by police due to a facial recognition false positive.
While Davi's account is one of the few stories of a black Brazilian man being detained due to a facial recognition false positive, Brazil has a long history of black Brazilian men being wrongfully detained due to what is known locally as reconhecimento fotográfico or reconhecimento por foto, wherein the victim of a crime or the police 'recognize' a suspect on the basis of a visual (i.e., human eye) photo match (e.g., see CNN Brasil, Fantástico, Folha de S. Paulo, G1, Revista Piauí, UOL).
In addition, the cases of Nijeer Parks and Robert Williams, African American men who were erroneously arrested in the US based on facial recognition false positives, were covered in leading Brazilian media (e.g., CNN Brasil, Estadão, IstoÉ, UOL) and have informed local voices arguing in favor of curbing police use of the technology.
Recent News Stories Focus on Bias in Facial Recognition
In light of the above, a recent spate of reports and videos from media outlets Folha de S. Paulo (2), (3), (4), The Intercept Brasil, Rest of World, Revista Galileu, blog Alma Preta, the University of Brasília, and the University of São Paulo (2) have focused on, or at least noted, the risk of black Brazilians being arrested or detained due to facial recognition false positives, with the University of São Paulo warning that "facial recognition technology in public safety could deepen racism and misogyny." Eduarda Costa and Carolina Reis, who responded to IPVM on behalf of LAPIN, agreed with the university's warning, writing:
Yes, LAPIN agrees. As various studies demonstrate, facial recognition technologies have inferior and unreliable results when used to scan faces of non-white persons, the elderly, or females. The risk of false identification can be even greater in environments whose characteristics differ from those used to train the technology, as is the case in public safety deployments.
LAPIN's point above is reminiscent of recent remarks by EU parliamentarians, who similarly argued that "racialized people," women, and "LGBTI people, children, and the elderly" are commonly misidentified by facial recognition software.
Academics: Technology Not Inherently Biased
IPVM received comments from Dr. Luciano Oliveira, an associate professor in computer science at the Federal University of Bahia, and Dr. Flavio Vidal, an associate professor in computer science at the University of Brasília, for this report.
Both professors argued against facial recognition being inherently biased against people of color, with Dr. Oliveira (pictured) noting that human operator error plays an integral role in false detentions/arrests:
[I]t is technically very unlikely that the problem is with the technology or with some supposed racist bias built into it. [T]he technological bias is inherent to the method implemented and executed on a certain type of hardware (in this case, video surveillance cameras). Final errors in identifying suspects are very likely due to human operators, who discern the results of the technology. [emphasis added]
Dr. Oliveira further argued that the fact that facial recognition software tends to be less accurate on a "black face" is indicative of problems with human experts, or operators, and not with developers:
The problem of racism in facial recognition technology could exist in the face detection phase. But let's think about it: if a racist developer decided to 'program' this feature into a system, it would work better to more faithfully recognize the black face than the white face, don't you think? In the identification module, the bias could also occur by only identifying the faces of black people. This is very unlikely to happen in a company that develops this type of technology. This, from a purely technological point of view. But, as I said before, perhaps the problem lies with whoever operates this technology, who, ultimately, must know how to properly retrieve the results of the face recognition system, analyze the results and issue an opinion—basically the work of a human expert. Therefore, I believe that the problem is much greater and involves historical, cultural, religious, political, social issues, etc. [emphasis added]
Dr. Vidal (pictured) echoed Dr. Oliveira's comments in his statement to IPVM:
I disagree with the statements about the risks of deepening racism and misogyny.
Dr. Vidal further noted the importance of determining acceptable confidence scores:
[F]acial recognition technologies make probabilistic inferences, which we call predictions, from a data set that, for the most part, is incomplete data (e.g., low spatial resolution and/or blurry image). As a basic statistical principle of probabilistic inference, a prediction is never completely free of error. With technology, there will always be an aggregate error, and what one must evaluate is whether this error is acceptable or not. [emphasis added]
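Dr. Vidal's point can be illustrated with a toy sketch (hypothetical numbers and a simplified pipeline, not any vendor's actual algorithm): a face "match" is a similarity score between numeric embeddings, and the confidence threshold chosen by the deployer trades false positives against missed matches.

```python
import math
import random

random.seed(42)

def cosine_similarity(a, b):
    """Score two face embeddings; ~1.0 means near-identical, ~0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Simulate a watchlist embedding, a noisy re-capture of the same person
# (e.g., a blurry, low-resolution surveillance frame), and an unrelated passer-by.
watchlist = [random.gauss(0, 1) for _ in range(128)]
same_person = [x + random.gauss(0, 0.3) for x in watchlist]
passer_by = [random.gauss(0, 1) for _ in range(128)]

THRESHOLD = 0.8  # hypothetical operating point chosen by the deployer
for label, probe in [("same person", same_person), ("passer-by", passer_by)]:
    score = cosine_similarity(watchlist, probe)
    print(f"{label}: score={score:.2f}, flagged={score >= THRESHOLD}")

# Lowering THRESHOLD catches more degraded true matches but also flags more
# innocent passers-by; no threshold drives both error rates to zero, which is
# the "aggregate error" a deployer must judge acceptable or not.
```

The sketch shows why, as Dr. Vidal argues, the question is not whether error exists but whether the error at a chosen operating point is acceptable for the use case.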
Dr. Vidal further highlighted the poor quality of most surveillance photos, a point also noted in IPVM's recent Corsight AI's Aggressive Claims Examined report:
The vast majority of images, and I say this from personal experience, analyzed by experts and/or used in facial recognition technologies, come from systems with low quality of capture, storage, encoding, compression, and including, positioning.
Technology Not Mature Enough for Public Sector?
The professors and LAPIN did, however, agree that technology currently deployed in public sector facial recognition implementations in Brazil may not be mature enough for law enforcement use, particularly given the quality of surveillance cameras (a relevant point in Brazil, where, unlike the US, Canada, and Europe, analog cameras remain commonplace). According to Dr. Oliveira:
Relying only on video surveillance cameras certainly makes the system less robust in face of facial identification problems.
For his part, Dr. Vidal critiqued industry companies for not doing enough to publish the results of their technologies and submit them for peer review, making it difficult to assess whose software does what, and how well:
[C]ompanies generally do not work in a way to publish their results on peer-reviewed platforms (e.g., prestigious scientific journals) that ensure that the work developed is effective and methodologically plausible.
[What is needed is] an approximation with academia and adoption of a model that allows for peer-reviewed assessment, in order to analyze the data presented and its performance. And it is noteworthy that this type of data publication is possible, preserving patent and intellectual property of all involved.
The above point was discussed in Corsight AI's Aggressive Claims Examined, as the Israeli company declined to disclose data to IPVM on how adverse events (e.g., near darkness, headwear, etc.) impact the efficacy of its software.
LAPIN, which published a 60-page report on facial recognition in Brazil in June 2021, simply stated that current technologies in Brazil are "not mature enough":
[F]rom the information we were able to collect via the media and direct requests, the technology is not mature enough to be deployed in a law enforcement context.
Government Not Transparent
The university professors and LAPIN agreed that the Brazilian government's rollout of facial recognition has been fraught with shortcomings, with local public authorities failing to promote accountability and transparency, among other concerns.
Dr. Vidal noted that there are no performance standards in place locally and a general disregard for minimizing false positives:
[A]s far as I'm aware, there are laws that preserve the individual's privacy, but nothing specifically that indicates regulation for this technology. A simple example is that there is no definition of minimum performance levels for such technology, in order to fulfill the requirements for identification, with a degree of certainty, focused on minimizing a high amount of false positives. [emphasis added]
Dr. Vidal further noted that authorities do not publish reliable performance data:
[T]here are no clear details on how this process is used by the Brazilian authorities, including information on the performance of the technique (or product) used in the acquired systems in a transparent and accessible way. We are still taking baby steps on these regulatory and transparency issues. [emphasis added]
Dr. Oliveira seconded Dr. Vidal, calling current government regulation "weak" and noting that more pressing, basic health and security concerns contribute to the lack of an overall public debate on facial recognition within Brazil:
I think it's mainly because these governments have weak or little to no regulation. This type of technology is also prevalent in dictatorial regimes. I don't know of any regulation in Brazil about this technology. First, because for a regulation to exist, a public debate must take place, led both by experts in civil society and by representatives in Congress. I believe that the reason for this lack of transparency or regulation is, first of all, the attention given to other bigger problems such as basic sanitation, corruption, education, unemployment, etc. [emphasis added]
LAPIN echoed the point about the lack of local regulation, further noting that local populations are rarely consulted prior to the implementation of facial recognition deployments:
There is no legal regulation on the federal level and some sparse legislation on the state and local levels. Without any specific rules on how to deploy the technology—if it should be deployed at all—private and public sectors have been adopting facial recognition technology for largely different purposes and in diverse manners. In addition to the lack of law, there are no policies or guidelines that promote transparency in the development and use of the technology, so the risks involved and the harms of using facial recognition technology are not evident. Particularly in the public sector, there is no broad social discussion prior to the implementation decision. When it occurs, it normally happens after the decision-making process and includes very specific groups, such as companies, authorities involved, and eventually the media. In other cases, the whole process is totally unclear and civil society only becomes aware of the deployment after media reports. [emphasis added]
The lack of engagement with civil society was noted by Rest of World, which documented the case of a school in Bahia that implemented facial recognition to track student attendance without consulting parents/guardians beforehand.
Disagreement on Efficacy of Police Facial Recognition Use
In Europe, as reported last month by IPVM, the tide has turned decidedly against police/public safety use of facial recognition, with the European Parliament recommending restrictions and bans just weeks ago.
While Brazil lacks a similar public debate on the efficacy of police use of facial recognition, our interviewees expressed differing viewpoints on the matter, with Bahia-based Dr. Oliveira setting forth a nuanced view, weighing the technology's advantages and disadvantages:
Facial recognition is just a tool that, as they say here in Brazil, will improve the effectiveness of policing systems. The explicit advantage is to automate the search for faces, either to find suspects or to find missing people, with the help of hundreds of cameras scattered throughout the cities. The disadvantage, also evident, is the mistakes made by technology in the process of identifying people; if this automation is not scrutinized by a well-prepared human operator, we can obviously have unfair situations if the technology is used as judicial evidence, for example. [emphasis added]
Dr. Vidal opined in favor of police use of facial recognition while making note of the technology's ability to facilitate abuses of power:
Facial recognition systems should be part of a set of public security policies. This system can be an ally, but it is incapable of making Brazil safer on its own. The advantage, almost inherent to the use of this technology, is the facility it has to identify an individual for purposes of public service uses. Brazil is a continental country that, unfortunately, requires its citizens to prove that they are themselves. Often the bureaucracy required for a citizen to prove it is absurd. Now, with the use of a facial recognition system, this process can be fast and efficient. The biggest disadvantage is the fact that these systems allow, especially for government authorities who have controlling tendencies, the quick and effective monitoring of citizens, knowing what they do on a daily basis, including in their private lives[.] [emphasis added]
LAPIN argued against public sector facial recognition use, citing a lack of government transparency regarding the technology's acquisition, implementation, and efficacy:
The use of facial recognition technology by the public sector offers more disadvantages than benefits to citizens. Law enforcement authorities are deploying facial recognition technology from different manufacturers. In general, the acquisition and implementation processes are unclear and obstructed. The public, including civil society organizations such as LAPIN, faces various challenges to access the information related to the use of facial recognition technology by the police. [emphasis added]
For a more generalized discussion of the lack of transparency in Latin American video surveillance, see Latin America Video Surveillance Sales Not Transparent Says NGO (NGO report cover pictured).
Data Privacy Concerns
Brazil has a checkered record on data privacy: the Superior Court of Justice, the country's highest appellate court, suffered a hack in November 2020 that compromised the court's computer systems for over two weeks, and media reported in January 2021 that a 'megaleak' had compromised the CPF (analogous to US Social Security) numbers of over 200 million Brazilians.
In light of these known data privacy shortcomings, the academics and LAPIN advocated for Brazilian authorities to do more to protect citizens' data captured by facial recognition cameras. According to Dr. Oliveira, facial recognition data could allow a nefarious actor to track "the entire journey" of a person's life:
Data is a gold mine for companies and governments. Access to this data can mean unimaginable power. Recognizing someone by a camera on the street completes the entire journey of an individual's data: credit card purchases, social media, location and individual identification. This information can be used for both good and bad things. [emphasis added]
Dr. Vidal noted that other countries are interested in data on Brazilians given the country's ethnic diversity:
It is worth noting, due to my personal and professional experience, that researchers from other countries in the field of facial recognition have a great interest in having facial data on the Brazilian population, mainly due to this wealth of diversity. Despite the General Law for the Protection of Personal Data (LGPD), in force in Brazil since 2018, there is no efficient way to verify the privacy details in the collections that are made, and, primarily, there is no way to monitor its effective application. [emphasis added]
IPVM has previously discussed the possibility, as expressed by some data privacy advocates, that Latin America may be serving as a laboratory or testing ground for international technology companies in 'Donation Diplomacy' From Dahua, Hikvision, and Huawei Examined and Latin America Video Surveillance Sales Not Transparent Says NGO.
LAPIN reiterated the testing ground, or "living lab," point while warning that there is little oversight in place governing how and when government authorities can share facial recognition data:
As mentioned before, the acquisition and deployment processes of such technologies in the public sector are overwhelmingly untransparent. Therefore, little information is available regarding the agreements made between companies and governments, the data flow between authorities and other parties, and the use of the data collected after the deployment. Many places in Brazil may be a current living lab for facial recognition technology without citizens' knowledge and consent. [emphasis added]
Conclusion
As noted by LAPIN above and as reported in Mexico City Public Video Surveillance Faces Scrutiny and Mexico Video Surveillance Market Overview 2020, many Latin American countries, such as Brazil, are plagued by high rates of violent crime, especially homicides.
Regional politicians like Mr. Bolsonaro and São Paulo state governor João Doria (pictured in his previous capacity as São Paulo city mayor) in Brazil, as well as the mayors of Mexico City and Santiago de Chile, have advocated in favor of technology, including surveillance cameras and facial recognition, as a way to combat crime and improve safety.
Nonetheless, as IPVM has reported, there are huge financial and ethical considerations behind any public sector surveillance deployment. These considerations were reiterated by Dr. Oliveira:
[W]ithout training the people who operate these systems, without investment in police preparation, and without a transparent government policy with clear objectives, no city can be considered safe. Prior to all this, there must be an investment in education. The more educated the people are, certainly, the safer the environment around them tends to be.
While facial recognition, if deployed properly, can help solve crime, facial recognition or surveillance cameras alone cannot make Brazil, or any country, safer, with Dr. Vidal writing in closing:
[I]t is an absurd mistake if an authority or citizen believes that these systems alone are capable of making Brazil safer.