The Chicago Police Department (CPD) is pushing back on a report that questions the effectiveness of an algorithm-based, crime-prediction system officials there have been testing since 2013.
The predictive policing program, launched with the Illinois Institute of Technology (IIT), generates a heat list—or Strategic Subjects List (SSL)—of people the system believes are most likely to kill or be killed.
“The goal is to ensure the individual is not only informed of the law enforcement consequences for deciding to engage or continue in gun violence, but also the devastating impact of gun violence within their community,” the CPD wrote three years ago in its pilot program directive.
But according to a new report from the RAND Corporation, which was provided access to the system, it’s not exactly working as planned. Rather than helping the police locate at-risk residents, the system is being used more as a suspect list when officials are trying to solve shooting crimes, RAND finds.
“Individuals on the SSL are not more or less likely to become a victim of a homicide or shooting than the comparison group, and this is further supported by city-level analysis,” RAND writes. “The treated group is more likely to be arrested for a shooting.”
At the time of the RAND study, the department’s list contained 426 names. None, RAND says, are more or less likely to pull the trigger or be shot than a comparison group.
Chicago Police Department Superintendent Eddie Johnson and Director Anthony Guglielmi last week released a lengthy statement arguing, in part, that RAND evaluated an early version of the model.
“The paper does not evaluate the prediction model itself [and reviews] our earliest person-based predictive model,” the department says. “Since that time, the SSL model has undergone extensive refinement and repeated iterations. We are currently using SSL Version 5, which is more than 3 times as accurate as the version reviewed by RAND (Version 6 is in simultaneous development). Regarding this prediction model, repeated quantitative evaluations have shown that the model produces very accurate findings.”
According to CPD, “RAND only evaluated the first few months of the program, and the findings are no longer relevant.” The organization’s findings do “not support a conclusion that the tools and predictive models to support the strategy are somehow deficient,” the statement says.