Alibaba Uyghur Recognition As A Service

Published Dec 16, 2020 5:17 PM
PUBLIC - This article does not require an IPVM subscription. Feel free to share.

Alibaba, the NYSE-listed 'Amazon of China' with a $700 billion+ market cap, openly offers Uyghur/'ethnic minority' recognition as a Cloud service, allowing customers to be alerted any time Alibaba detects a Uyghur.

Watch this 90-second video for an overview:

IPVM Image

This means Uyghur recognition extends beyond China police usage to the country's wider Internet as the PRC government cracks down on the repressed minority.

Alibaba Cloud quickly deleted mentions of Uyghurs and minority detection on its website after Alibaba was contacted for comment. Alibaba Cloud then claimed, without evidence or explanation, that these features were only used "within a testing environment".

IPVM collaborated on this report with The New York Times which has published its own investigation. Earlier this month, IPVM and The Washington Post revealed that PRC tech giants Huawei and Megvii tested and validated 'Uyghur alarms' in face rec software meant for police video surveillance projects.

Alibaba Cloud Background

Alibaba is China's largest e-commerce firm with ~$78 billion in sales for the latest fiscal year ended March 31st; it is often compared to Amazon and has a well-known founder, Jack Ma, who is currently China's richest person with a net worth of ~$61 billion.

IPVM Image

Similar to Amazon's AWS, Alibaba has a huge cloud division called Alibaba Cloud ('Aliyun' in Chinese) which touts "more than 3 million paying customers" worldwide and is China's largest cloud service. Alibaba Cloud reported ~$5.6 billion in sales for fiscal 2020.

Uyghur Detection As A Service

An Alibaba Cloud China API guide lists "is it Uyghur" (是否是维族) as one of several 'face attributes' it can detect (screenshot via Google Translate):

IPVM Image

("Weizu" is a transliteration of 维族 or 'Uyghur ethnicity' in Chinese.)

The API guide, last updated in May 2019, mentions Uyghurs a second time by clarifying that 'minority' detection refers not to minorities in general but specifically to Uyghurs:

IPVM Image

Part Of Alibaba 'Cloud Shield' Content Moderation Solution

The API guide says it is part of Alibaba's "Cloud Shield" solution. Alibaba Cloud explains that Cloud Shield "is a pioneer in the field of Content Security", which:

detects and recognizes text, pictures, videos, and voices containing pornography, politics, violent terrorism, advertisements, and spam, and provides verification, marking, custom configuration and other capabilities.

The PRC government strictly censors the domestic Internet, punishing violators, so there is a large and increasingly AI-driven industry of moderation and censorship solutions for private firms.

Alibaba Cloud Offers "Ethnic Minority" Detection

The API guide mentioning Uyghurs is not available when searching Alibaba Cloud's website but is indexed by Google.

Alibaba Cloud's Content Security website directly includes two API guides for "sensitive video facial recognition" that can detect "whether it is an ethnic minority" (是否少数民族); see Synchronous Detection (last updated June 2019) and Asynchronous Detection (last updated November 2020):

IPVM Image

While Uyghurs are not explicitly mentioned in these two guides, as noted, the Cloud Shield API guide clarifies that 'minority' here means "minority (Uyghur)":

IPVM Image

Minority Detection Explained

The PRC government has been cracking down on Uyghurs for years, and this technology aids that crackdown. For example, if a Uyghur decides to livestream a speech, Alibaba Cloud's AI can automatically recognize a Uyghur face, and the video can be flagged for review or removal before it finds a significant audience.

The technology works for any video or pictures with Uyghur faces, so even an anodyne prerecorded video of a Uyghur explaining her first day of university would be flagged if the Alibaba Cloud client had toggled on this feature and it worked as intended.

IPVM could not find any deployment examples of Alibaba Cloud's Uyghur recognition solution. Alibaba owns Youku, one of China's top video websites, while Alibaba's e-commerce platforms heavily use livestreaming. Weibo, one of China's largest social media apps, also uses Alibaba Cloud.

International Cloud Doesn't Offer Uyghur Detection

Uyghur or ethnic minority detection is not mentioned on Alibaba Cloud's English/global websites, indicating this function is likely only offered within China.

IPVM Image

Alibaba Deletes Uyghur & Minority Mentions

Soon after IPVM and The New York Times reached out for comment, Alibaba removed the API Guide mentioning Uyghurs, which now reads "sorry, this content is being updated":

IPVM Image

Alibaba also removed the "ethnic minority" face detection feature from the two pages mentioning it. For archived versions, see Synchronous Detection and Asynchronous Detection.

Alibaba Claims Only Used for Testing, Without Evidence

Alibaba Cloud claimed the Uyghur recognition was only used "within a testing environment" and was "never used outside" this context:

IPVM Image

The ethnicity mention refers to a feature/function that was used within a testing environment during an exploration of our technical capability. It was never used outside the testing environment

Alibaba did not provide any evidence this was a "test" and there are zero mentions of 'tests' in the Alibaba Cloud webpages which include Uyghur/ethnic minority detection. Alibaba also did not explain why it would test Uyghur/ethnic minority detection in the first place.

UPDATE: Alibaba Says "Dismayed" In New Response

Alibaba issued a new statement saying it is "dismayed" that Alibaba Cloud developed this "facial recognition technology" while maintaining it was used "in a testing environment":

Racial or ethnic discrimination or profiling in any form violates Alibaba’s policies and values. We are dismayed to learn that Alibaba Cloud developed a facial recognition technology in a testing environment that included ethnicity as an algorithm attribute for tagging video imagery. We never intended our technology to be used for and will not permit it to be used for targeting specific ethnic groups, and we have eliminated any ethnic tag in our product offering. This trial technology was not deployed by any customer. We do not and will not permit our technology to be used to target or identify specific ethnic groups.

Kingsoft Also Includes Uyghur Detection

Kingsoft is a separate China cloud services provider whose August 2020 API guide, available on its website, includes "Uyghur, non-Uyghur" face detection:

IPVM Image

Uyghurs are mentioned again in the API, which appears to detect with 48.7% confidence that a sample face is "Uyghur":

IPVM Image

Kingsoft went public on the NASDAQ this May and is currently worth ~$10 billion. The firm calls itself "a leading independent cloud service provider" in China and made ~$1.2 billion in sales for 2019.

Kingsoft: "Failed To Sufficiently Review" API

Kingsoft deleted this API from its website, saying it was only used to locate faces and "was not able to distinguish or identify individuals of Uyghur background". Kingsoft added "we have failed to sufficiently review the Subject API" which is "being withdrawn" as "labelling on the basis of any race is inappropriate and inconsistent with Kingsoft Cloud’s policies and values".

Below is Kingsoft's full statement:

Subject API was a package of code that only served to help locate a face in a frame of pictures for other developers to then use. It was not image recognition or authentication software. It has never been sold and the whole API program including the Subject API represents negligible revenues for the Company. The Subject API was not able to distinguish or identify individuals of Uyghur background. The labelling on the basis of any race is inappropriate and inconsistent with Kingsoft Cloud’s policies and values. We recognize we have failed to sufficiently review the Subject API. This misleading product is being withdrawn and we will conduct a full review of our API platform and ensure they have appropriate leadership oversight and control mechanisms in place. Our products will never include any attempt to identify and label specific ethnic groups.

IPVM Image

Kingsoft added that generally "the Company takes this issue very seriously, and has launched an internal investigation." Kingsoft also addressed its API appearing to detect with 48.7% confidence that a sample face is "Uyghur" (see below):

IPVM Image

Kingsoft said 48.7% is a "randomly generated number" not "based on any actual image":

It is inaccurate to say that the Subject API was able to detect with 48.7% confidence that a sample face was “Uyghur.” The 48.7% number derives from the sample output relating to a sample image on page 4 of the API documentation. At the top of that same page, however, it is clear that there is no sample image: the sample image is reflected as “image_url: xxxx”. The figure shown in the documentation, 0.4870010614395142, and referred to in the article as 48.7%, is a randomly generated number; it is an illustrative output to show the data format, and not an actual output of the underlying software based on any actual image. This dummy number based on a non-existent sample image is not a “confidence level” (i.e., the Subject API’s accuracy in prediction) in the correct meaning of that word and the context of the document.

Previous Race Analytics Example

While IPVM could not find any examples of firms using Alibaba Cloud's minority analytics, there is one reported example of 'race' detection being deployed by a separate Chinese tech firm. This summer, a foreigner living in China reported that his Chinese wife's Douyin (Chinese TikTok) livestream was shut down after his face appeared in her video due to regulations against foreigners appearing on livestreams "without permission":

IPVM Image

While it is unknown whether this was from Alibaba, Alibaba Cloud's 'Content Security' does offer "Asian" face detection, allowing a livestreaming firm to be automatically notified if a non-"Asian" face is detected in a video:

IPVM Image

Separately, Douyin (Chinese TikTok) is known to have deleted videos en masse of Uyghurs protesting PRC government repression, turning the video site into "an all-singing, all-dancing propaganda platform", Coda Story reported, although it is unknown if face analytics were involved.

PRC state news outlet Global Times has reported that some Chinese companies specialize in censoring videos made by ethnic minorities in China, with one such firm stating explicitly that "we usually don't use Uyghur people to moderate their own language content" in order "to avoid trouble".

Conclusion

It has been well documented that China police use Uyghur 'alerts' in their video surveillance systems. Alibaba's offering of this explicitly racist technology to its vast Cloud clientele shows the repression of Uyghurs goes well beyond law enforcement.

Comments (12)
Mark Palka
Dec 16, 2020
IPVMU Certified

This is beyond words and is truly criminal. Obviously the Chinese are flexing their muscles; it's a dangerous move and needs to be met with consequences.

Michael Gonzalez
Dec 16, 2020
Confidential

Testing the waters to see what they can get away with, or simply just a fail in terms of hiding their shady activities? Maybe a little from column a, and a little from column b?

Undisclosed Integrator #2
Dec 17, 2020

The West is far too reliant on China for cheap labor, products, and materials. There will be just as many consequences for China over this as there were for unleashing COVID on the world.

Undisclosed Distributor #1
Dec 16, 2020

Late 1930s anyone?

John Honovich
Dec 16, 2020
IPVM

Bill Bishop of Sinocism has commented on Alibaba's Uyghur recognition, saying:

When these features are discovered the playbook response seems to be that it was just a technical test and then to delete mentions of it from their website…The international ramifications of this for Alibaba and its top executives could be significant if they end up on any sanctions lists. And it may make things awkward for the NBA since Alibaba executive vice chairman Joe Tsai owns the Brooklyn Nets. Also, has Alibaba put as many technical resources into scanning its site for counterfeit products?

Matthew Eckman
Dec 17, 2020

Excellent research and reporting. If I had not joined IPVM, I doubt I would know any of this.

In other news, the inauguration of the president of the United States will take place in Beijing this year.

Morten Tor Nielsen
Dec 17, 2020
prescienta.com

Indignation is a powerful engagement tool. Indignation triggers are being used by social media giants to keep you coming back. You just have to check up on that guy who wrote something awful and who is clearly wrong. It's a good bonding agent too. The feeling of camaraderie is never more pronounced than when you're part of a gang fighting the baddies with the wrong thoughts.

(Good) politicians know that we can experience indignation fatigue. They intuitively understand that after a while, what triggered us last week no longer works as a catalyst for anger and outrage. Especially if the politician issues an apology. A likable person (nice smile, symmetrical face, good proportions) can burn children alive and still be loved and admired. I, myself, got a little annoyed with me for bringing up the burning of children. I should know that there was a perfectly good reason for setting those kids ablaze? It was long ago too. The other guy would have burned a lot more children. So why am I wallowing in that foul bog once again? Not only are we no longer offended by the idea, we're now offended by people who bring it up again and again.

I think IPVM is doing a great job reporting on this, but I am starting to feel the fatigue setting in.

To me, the video feels like blatant anti-China propaganda. Yet I know it isn't. To me at least, there just isn't a lot of new revelations in there. Same content, different packaging.

"Oh, but now it's a service. Totally different!".

Is it though? To me "something as a service" is not some new, magical, super powerful tech created by deities. It's just the same old software libraries; it just runs on someone else's computer, so I won't have to maintain the infrastructure around it.

The way my fatigue system works is that my mind starts cooking up whataboutisms. My id then discards that as lame, and I move on to apathy and having no faith in humanity anywhere. When I followed a link to Washington Post (owned by Jeff Bezos' mail order emporium, and thus competitor to Jack Ma's cheap knockoffs store), I saw this paragraph:

Protests on the scale of Black Lives Matter would be nearly impossible to organize in mainland China — partly because of these very surveillance technologies.

I immediately started thinking that a lot of people would actually love to have such a system installed. Obviously not to track Uyghurs (that would be immoral), but what about Antifa members or whomever we perceive as a threat just now (I am fairly confident that you can build a model to infer Antifa potential sympathies from facial images, whereas Proud Boys is probably harder).

Google recently fired Timnit Gebru, someone who was supposed to raise a flag when AI starts to become an ethical problem. There could be a bunch of reasons for why she got fired, but the response from the community was very strong. There's clearly a movement in the realm of AI where people who know this stuff are starting to understand how it will be used - not just in China - but everywhere, and they're worried.

The final phase is that I start looking for excuses. In recent years Europe has been plagued by gruesome crimes carried out by religious fanatics. So I am on the fence here. Would I support racial/ethnic tracking and preemptive incarceration of people if it could prevent these attacks? I feel a bit shameful admitting that I probably would. To me, it's polemic, as I feel that such a system would probably make things worse, but I'm confident that a very large percentage of the population would support it - and certainly if backed by official propaganda.

IPVM must continue the coverage.

It's a good thing. And I can't really offer any alternative approaches. Report and repeat is probably the best way to handle this. It's just that I am, slowly, and regrettably, starting to feel that maybe that cow's udders are getting a bit saggy...

That's all

John Honovich
Dec 17, 2020
IPVM

To me at least, there just isn't a lot of new revelations in there. Same content, different packaging.

Well, if one already believes this is commonplace among China tech companies then there is no new revelation. Many do not believe this or are not aware of it and, beyond that, we are showing direct proof from their own technical documentation.

Would I support racial/ethnic tracking and preemptive incarceration of people if it could prevent these attacks? I feel a bit shameful admitting that I probably would

You can move to the PRC then... That said, even by PRC / Chinese Communist Party standards, it is very hard to make a case that Uyghur tracking being required in so many China cities is actually stopping 'terrorism', even leaving aside the obvious and huge ethics issues.

Undisclosed Integrator #2
Dec 17, 2020

Disgusting.

IPVM Image

Charles Rollet
Dec 18, 2020

UPDATE: Alibaba issued a new statement saying it is "dismayed" that Alibaba Cloud developed this "facial recognition technology" while maintaining it was used "in a testing environment" (see the update in the article above for the full statement).

Once again, Alibaba did not provide proof that Uyghur/minority detection was only a 'test', with all evidence cited by IPVM clearly showing this was a standard Cloud feature.

Jerome Ellis
Dec 18, 2020

hmm that picture looks like George Michael..!

Charles Rollet
Dec 21, 2020

UPDATE 12/21: Kingsoft has addressed its API appearing to detect with 48.7% confidence that a sample face is "Uyghur", saying the 48.7% figure is a "randomly generated number" not "based on any actual image" (see the update in the article above for the full statement). IPVM will keep updating this article on the status of Kingsoft's investigation.
