Future Tense

ICE Wants to Use Predictive Policing Technology for Its “Extreme Vetting” Program

Photo: People wait in a security line at John F. Kennedy International Airport. Spencer Platt/Getty Images

Donald Trump is following through on his promise to crack down on immigration, and the Department of Homeland Security is asking American tech companies to lend a hand.

On Monday, the Intercept published a set of documents from a two-day event in July hosted by U.S. Immigration and Customs Enforcement’s Homeland Security Investigations division, where tech companies were invited to learn more about the kind of software ICE is looking to procure for its new “Extreme Vetting Initiative.” According to the documents, ICE is in the market for a tool it can use to predict the potential criminality of people who come into the country.

Specifically, ICE wants software that “automates, centralizes, and streamlines the manual vetting process” and can also “make determinations via automation if the data retrieved is actionable.” In plain terms, ICE wants a system that can analyze data in government databases to flag potential visitors to the U.S. who might be up to no good.

The Extreme Vetting Initiative also calls for a program that can automatically “determine and evaluate an applicant’s probability of becoming a positively contributing member of society, as well as their ability to contribute to national interests” and predict “whether an applicant intends to commit criminal or terrorist acts after entering the United States,” according to the documents.

If this sounds familiar, that’s because it essentially describes predictive policing software, which typically uses algorithms to estimate someone’s propensity to commit a crime or the likelihood that a crime will occur in a certain area. In recent years, predictive software has become a staple of police departments nationwide: at least 20 of the 50 largest police departments in the country use it. But there are plenty of reasons to be skeptical about the efficacy and fairness of such systems. Predictive policing software typically depends on data about previously reported crimes and records of police responses to them, which civil rights organizations like the American Civil Liberties Union and the NAACP say amounts to building predictions on top of a well-documented history of racism in U.S. policing. If police patrol certain neighborhoods more heavily, they’ll record more crime there, the system will conclude that more crime is actually happening in those areas, and it will direct even more patrols to them, reinforcing the cycle.
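To make that dynamic concrete, here is a minimal, purely hypothetical Python sketch of the feedback loop critics describe. It is not any vendor’s actual algorithm, and every number in it is an assumption: two neighborhoods have identical true crime rates, but patrols are allocated in proportion to previously recorded crime, so a small initial bias in the records compounds over time.

```python
import random

random.seed(0)

# Assumed, illustrative parameters -- not drawn from any real system.
TRUE_CRIME_RATE = 0.3        # identical underlying rate in both neighborhoods
DETECTION_PER_PATROL = 0.5   # chance a patrol records a crime that occurs
TOTAL_PATROLS = 10           # patrols available to split between the two

# Start with a biased history: neighborhood 0 has more recorded incidents.
recorded = [5, 1]

for day in range(100):
    total = sum(recorded)
    # "Predictive" step: allocate patrols in proportion to recorded history.
    patrols = [round(TOTAL_PATROLS * r / total) for r in recorded]
    for i, n_patrols in enumerate(patrols):
        for _ in range(n_patrols):
            # A crime only enters the data if it occurs AND a patrol sees it.
            if random.random() < TRUE_CRIME_RATE * DETECTION_PER_PATROL:
                recorded[i] += 1

print("Recorded crime after 100 days:", recorded)
# The neighborhood that started with more recorded incidents ends up with
# far more, even though the true crime rates were identical by construction.
```

Run it and the gap between the two records grows steadily: the model “confirms” the bias it was seeded with, which is the core objection to feeding historically skewed data into any automated vetting system.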

ICE hopes to build a system that draws on government agency and law enforcement databases, as well as other publicly available information online, like social media activity, and analyzes it all for “continuous vetting” of travelers during their time in the U.S. Just as local law enforcement’s use of predictive policing software can perpetuate existing prejudices by relying on historically biased data, so too could a national extreme-vetting system, because government law enforcement databases are packed with the same kind of biased information.

Take the Suspicious Activity Reporting program run by the Department of Homeland Security. The ACLU sued the federal government in 2014 after finding that the SAR program had collected reports on people who had done nothing wrong. One report the ACLU obtained through public records requests showed that the FBI had opened a file on a man who was deemed suspicious because he appeared to practice Islam and was looking at his computer. Another detailed how a well-known photographer was placed in a federal terrorism database and later questioned by the FBI because he had been taking pictures of public art. In short, federal law enforcement doesn’t have a solid track record of collecting reliable intelligence for its databases.

The Extreme Vetting Initiative also plans to collect information from social media to keep tabs on foreign visitors, and American law enforcement doesn’t always use sound judgment when combing through that data, either. In one of the most egregious and heartbreaking cases of social media being used to misread criminality, Jelani Henry, a teenager whose case was eventually dismissed, spent two years at Rikers in part because he had liked posts on Facebook.

Yet, according to the documents from the July meeting, these are the types of information sources that ICE wants a software system to scour in its vetting process. “We are open to anything right now,” ICE wrote in a vendor Q&A document. “We don’t want to be restrictive so we don’t want to strictly limit it to certain datasets.”

As for the companies that showed up to learn how they might take a crack at a lucrative government contract to build a new predictive policing database for ICE, they included IBM, LexisNexis, Booz Allen Hamilton, and Deloitte, among others. Other big tech firms like Microsoft and Google weren’t in attendance, but Microsoft, at least, agrees that predictive policing is “the future of law enforcement.”

The problem, though, is that this could be a future in which even more innocent people are treated like criminals and swept into federal surveillance databases. Technology isn’t necessarily any freer from bias than people are.