MacArthur Justice Center vs ShotSpotter Commissioned Report

By IPVM, Published Aug 13, 2021, 10:22am EDT

Amidst bad press, ShotSpotter commissioned Edgeworth Analytics to review the report from Northwestern Law School’s MacArthur Justice Center (MJC), which found that ShotSpotter “drains millions of dollars of taxpayer resources every year without any clear benefits to public safety.”


IPVM spoke with MJC and Edgeworth to report both sides' claims and analyze who is correct.

Executive Summary

After MacArthur Justice Center (MJC) reported in May that 88.7% of Chicago ShotSpotter alerts from July 2019 to April 2021 were “unfounded”, ShotSpotter commissioned Edgeworth Analytics, which reported on July 28th that the MJC report was “misleading” and that, “based on client reports”, ShotSpotter was “highly accurate at detecting outdoor gunshots”.

While both firms agree that 88.7% of initial police responses to alerts found no incidents involving a gun, Edgeworth said the number is “meaningless” and that MJC’s conclusions based on its significance are not supported by data. Yet, despite calling ShotSpotter “highly accurate”, Edgeworth could not answer a key question about how ShotSpotter records customer errors and admitted that basing their analysis on customer feedback was a “limitation”. (90 percent of ShotSpotter's 2020 revenue came from US law enforcement agencies, with universities, corporate campuses, and others making up the rest.)

Edgeworth also told IPVM that ShotSpotter deployment areas in Chicago were chosen based on higher crime rates, not along racial lines. MJC told IPVM that ShotSpotter deployments lead to more reported gunshots, increasing the reported crime statistics of those neighborhoods.

ShotSpotter Commissions “Independent” Report

After MJC published its report, ShotSpotter commissioned Edgeworth Analytics to provide an “independent analysis” of the report and audit ShotSpotter’s accuracy claims.

Edgeworth Did Not Know ShotSpotter’s Full Error Reporting Procedures

When speaking with IPVM, Edgeworth did not know how ShotSpotter handled customer-reported errors or what occurred in the investigative process, a key point because ShotSpotter uses customer-reported errors to calculate accuracy:

I don't know exactly how they would respond, that would depend on their interaction with the client and their investigation of what happened.

ShotSpotter later told IPVM that customer-reported errors (from police and campus security) are recorded as errors, without mentioning any investigation:

If a customer reports a false positive, we treat and record it as a false positive. It is standard practice for our clients to notify us of false positives and false negatives, and we encourage our customers to provide this feedback so that ShotSpotter can continue to improve our technology.

Customer Feedback vs. Test of Accuracy

MJC told IPVM the only way to assess ShotSpotter’s accuracy is independent testing (note: ShotSpotter declined IPVM’s request to test its solution):

The only way to get at that question [ShotSpotter’s accuracy] is to do proper validation studies [...] where they test the system in a blind manner against sounds that are similar to gunfire that are likely [...] to confound the system […]. Then you see how frequently the system sends out an alert in response to those kinds of sounds—things like firecrackers or blown tires or construction noises. [...] That's how you test accuracy—by actually testing the system against known samples, under circumstances where ShotSpotter and its operators don't know they're being tested. That's what we demand from other police investigative tools. For example, with DNA evidence, we demand tests showing that the lab has been verified and the particular technician has been tested.

Responding to MJC’s view that the only way to assess ShotSpotter’s accuracy is testing, Edgeworth initially said that was not what they did:

No, I think [...] that's a scientific question. Yeah, that was a science question. And we don't have we didn't do a scientific study.

Edgeworth later said that they believe they did a scientific study, but one based on customer feedback:

Edgeworth did not do a scientific study of ShotSpotter’s acoustic technology, but it did do a scientific study of ShotSpotter’s accuracy based on customer feedback.

MJC told IPVM there is a “problem” with this method:

The problem with that is obvious. Police departments don't report all the errors, and actually they're not in a position to report false positives. When a police officer arrives at the scene and doesn't find anything, they have no idea what the source of the sound was. They are liable to assume it's gunfire because they’ve been trained--and ShotSpotter has told them over and over--that ShotSpotter is extremely accurate and effective. So why would they send in an error report just because they arrived on scene and found nothing?

Edgeworth admits there are limitations to this method, but contends that it is “scientifically valid”:

Customer feedback is a scientifically valid way to evaluate their accuracy. But obviously, a limitation [of the study] is that it’s [...] based on customer feedback

While this may be a good way for ShotSpotter to ensure customers are happy with their solution, it is not a reliable method of measuring accuracy, since false alerts that customers never notice or never report go uncounted.
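To illustrate the concern, below is a minimal, hypothetical sketch with invented numbers (not ShotSpotter or Chicago data) of how an accuracy figure computed only from customer-reported errors can overstate accuracy when customers under-report false positives:

```python
# Hypothetical illustration only: the numbers below are invented, not ShotSpotter data.
# Shows how an accuracy rate computed from customer-reported errors overstates
# accuracy when customers only notice and report a fraction of false positives.

total_alerts = 10_000            # alerts sent to the customer
true_false_positives = 2_000     # alerts that were not actually gunfire (unknown to the vendor)
report_rate = 0.10               # assumed fraction of false positives the customer reports

reported_false_positives = int(true_false_positives * report_rate)  # 200

# Accuracy as a vendor might compute it from customer feedback alone
feedback_accuracy = 1 - reported_false_positives / total_alerts

# Accuracy as an independent, ground-truth test would compute it
ground_truth_accuracy = 1 - true_false_positives / total_alerts

print(f"Accuracy from customer feedback: {feedback_accuracy:.1%}")      # 98.0%
print(f"Accuracy against ground truth:   {ground_truth_accuracy:.1%}")  # 80.0%
```

Under these assumed numbers, feedback-based accuracy looks like 98% even though only 80% of alerts corresponded to actual gunfire; the size of the gap depends entirely on how many errors customers notice and report, which is exactly what MJC says police are not positioned to do.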

“Dead-End Alerts” vs. Should Be Deployed

While MJC and Edgeworth agree that, from July 2019 to April 2021 in Chicago, 88.7% of ShotSpotter alerts initially “did not result in police recording any kind of incident involving a gun”, Edgeworth said the number is “meaningless” and that MJC’s conclusions based on its significance are not supported by data.

MJC told IPVM that, regardless of whether a gun was fired, these alerts are a waste of police resources and potentially dangerous:

We call them ‘dead-end alerts’ or ‘unfounded alerts’ because police arrive on the scene and find nothing. We stand by that nomenclature, which reflects what police actually end up finding at the scene. The point is that 8 times out of 9 in Chicago, a ShotSpotter alert leads police to report no evidence of a gun incident at the scene.

While it’s true that police don’t know in advance what they’re going to find before they get to the scene of a ShotSpotter alert, the data show that 88.7% of the time police can expect to find no gun-related incident. A key question is whether—given the high rate of dead-ends—it’s worth sending police out in response to ShotSpotter alerts, particularly when these are all high-intensity deployments more likely to lead to hostile encounters with the public, and particularly when these alerts are given high-priority and displace other things that officers could be doing with their time. [Emphasis added]

Edgeworth, meanwhile, told IPVM that the initial police report does not capture subsequent findings and investigations, and that the police should have been deployed in case there was a “gunshot victim” or “perpetrator”:

I would disagree. The characterization [of deployments] as dead-end suggests that the police should not have been deployed. [...] if there was in fact gunfire, I think it would make sense for the police to be deployed as you don't know ahead of time whether they're going to find a gunshot victim, gunshot perpetrator or something else at the scene. And so the fact that the police responded and found nothing doesn't tell you anything about whether that deployment was warranted. It only tells you what results were found.

Edgeworth makes two important points: that an initial police report is not a true metric of accuracy, and that saving a gunshot victim or finding a perpetrator may be worth many false deployments. However, they do not address the fact that these deployments are particularly dangerous for innocent civilians and stressful for law enforcement, because the police expect a shooter. Edgeworth also did not analyze how often alerts where a perpetrator or victim was found would not also have generated a 911 call, or the resources wasted on false alerts.

Racial Lines or High Crime Neighborhoods

Edgeworth responded to MJC’s note that “ShotSpotter is deployed overwhelmingly in Black and Latinx neighborhoods in Chicago” by writing that ShotSpotter is deployed “in the Chicago police districts where violent crime is disproportionately greater”. Edgeworth specifically noted that “CPD homicide data show that the 12 police districts where ShotSpotter is deployed are the 12 police districts with the highest number of homicides between 2012 and 2021.”


MJC told IPVM they did not have a “complete response” to this, but that Chicago has not made a statement explaining why they deployed ShotSpotter where they did, and that ShotSpotter’s deployment would lead to increases in gunshots detected in the neighborhoods where it is deployed:

We're still looking at that. ShotSpotter admits that it's the City of Chicago’s decision about what areas to cover, and I haven't seen anything from the City of Chicago itself explaining how it made these decisions. I don’t think ShotSpotter’s paid report is based on representation by the city about how they make their coverage decisions. What I can say, and it’s just true, is that ShotSpotter is only deployed on the South and the West Side, which are overwhelmingly Black and brown. Another aspect of our study that they don't really engage with is that when ShotSpotter is deployed in neighborhoods, it's generating statistics about supposed gunfire only in those neighborhoods. And because there's no reason to believe that ShotSpotter’s false positive rate is particularly good, the system is inflating the number of gunshots detected in those districts as opposed to elsewhere. That can skew the way that police deploy their resources: police have stats about the supposed number of gunshots in ShotSpotter neighborhoods that look much higher than other neighborhoods, but that's just because ShotSpotter is installed there. That can skew and infect decision making in other aspects of policing.
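MJC’s argument here is, at bottom, one about measurement bias: ShotSpotter detections are only recorded where sensors are installed, and false positives add to those counts. Below is a minimal, hypothetical sketch of that mechanism, using invented numbers rather than Chicago data:

```python
# Hypothetical illustration only: invented numbers, not Chicago data.
# Detections are only recorded in districts with sensors, so recorded "gunshot"
# statistics diverge between districts even when actual gunfire is identical,
# and false positives widen the gap further.

actual_gunfire = 100          # assume the same real gunfire in both districts
false_positive_alerts = 30    # non-gunfire sounds flagged as gunshots by the sensors

covered_district_recorded = actual_gunfire + false_positive_alerts  # 130 recorded detections
uncovered_district_recorded = 0                                     # no sensors, so none recorded

print("Covered district, recorded gunshot detections:  ", covered_district_recorded)
print("Uncovered district, recorded gunshot detections:", uncovered_district_recorded)
```

In this sketch, the covered district shows 130 gunshot detections in ShotSpotter-derived statistics while the uncovered district shows zero, even though actual gunfire was assumed identical, which is the skew in resource allocation MJC describes.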

Unconvincing Response

The Edgeworth report does not convincingly respond to MJC’s criticisms of ShotSpotter’s method for calculating accuracy, or to the finding that from July 2019 to April 2021 in Chicago 88.7% of ShotSpotter alerts “did not result in police recording any kind of incident involving a gun.” Edgeworth used ShotSpotter’s metrics and did not know all of ShotSpotter’s error reporting procedures, adding more uncertainty to the ShotSpotter accuracy debate.

[Editor's note: To ensure Edgeworth's views were presented, the following language was added to this article: Edgeworth said the number is “meaningless” and that MJC’s conclusions based on its significance are not supported by data.]

Comments (1)


excellent report Zach.

Edgeworth: "Customer feedback is a scientifically valid way to evaluate their accuracy."

what utter and complete bullshit. this isn't just spin...this is the opposite of truth.

customer feedback (and specifically those customers who love the bullshit statistics that Shotspotter quotes on their website that seek to justify their expenditures on this technology) are absolutely not a 'scientifically valid' way to evaluate anything at all.
