Thursday, September 29, 2016

Can ‘predictive policing’ prevent crime before it happens?

Riding high in their squad car, officers Jamie Pascucci and Joe Kania are cruising the neighborhood of Homewood, scanning the streets for trouble. Pittsburgh, Pennsylvania, has one of the highest murder rates among large U.S. cities, and violent crime is particularly severe in Homewood, a 98% black pocket of aging, pock-marked Victorians on the east side. Young, white officers from outside the neighborhood, Pascucci and Kania patrol using a mixture of police radio, calls to their department's communications center, and instinct. They get occasional help from ShotSpotter, a network of sensors that detects gunshots and relays the information to a laptop mounted between the front seats.
City of Pittsburgh police officer Shane Kovach sits in a squad car with a mounted laptop.
But starting next month, Pascucci and Kania may get a new type of guidance. Homewood is set to become the initial pilot zone for Pittsburgh's "predictive policing" program. Police car laptops will display maps showing locations where crime is likely to occur, based on data-crunching algorithms developed by scientists at Carnegie Mellon University here. In theory, the maps could help cops do a better job of preventing crime.
Many other cities have already adopted similar systems, which incorporate everything from minor crime reports to criminals' Facebook profiles. They're catching on outside the United States as well. Drawing on approaches from fields as diverse as seismology and epidemiology, the algorithms can help bring down crime rates while also reducing bias in policing, their creators say. They replace more basic trendspotting and gut feelings about where crimes will happen and who will commit them with ostensibly objective analysis.
That's a strategy worth trying at a time when relations between U.S. police and minorities are at an all-time low, says Pittsburgh Police Chief Cameron McLay, who acknowledges that policing has a long way to go to fix bias. (Last year, McLay showed up at a New Year's Eve celebration holding a sign that read, "I resolve to end racism @ work.") McLay sees the use of big data—combined with more community-focused strategies—as part of a palliative for policing's ills.
They're not predicting the future. What they're actually predicting is where the next recorded police observations are going to occur.
William Isaac, the Human Rights Data Analysis Group
But civil liberties groups and racial justice organizations are wary. They argue that predictive policing perpetuates racial prejudice in a dangerous new way, by shrouding it in the legitimacy accorded by science. Crime prediction models rely on flawed statistics that reflect the inherent bias in the criminal justice system, they contend—the same type of bias that makes black men more likely to get shot dead by the police than white men. Privacy is another key concern. In Chicago, Illinois, one scientist has helped the police department generate a list of individuals deemed likely to perpetrate or be victims of violent crime in the near future; those people are then told they're considered at risk, even if they have done nothing wrong.
To what degree predictive policing actually prevents crime, meanwhile, is up for debate. Proponents point to quick reductions in crime rates. But John Hollywood, an analyst for RAND Corporation in Arlington, Virginia, who co-authored a report on the issue, says the advantage over other best-practice techniques is "incremental at best."
The notion of crime forecasting dates back to 1931, when sociologist Clifford R. Shaw of the University of Chicago and criminologist Henry D. McKay of Chicago's Institute for Juvenile Research wrote a book exploring the persistence of juvenile crime in specific neighborhoods. Scientists have experimented with using statistical and geospatial analyses to determine crime risk levels ever since. In the 1990s, the National Institute of Justice (NIJ) and others embraced geographic information system tools for mapping crime data, and researchers began using everything from basic regression analysis to cutting-edge mathematical models to forecast when and where the next outbreak might occur. But until recently, the limits of computing power and storage prevented them from using large data sets.
In 2006, researchers at the University of California, Los Angeles (UCLA), and UC Irvine teamed up with the Los Angeles Police Department (LAPD). By then, police departments were catching up in data collection, making crime forecasting "a real possibility rather than just a theoretical novelty," says UCLA anthropologist Jeffrey Brantingham. LAPD was using hot spot maps of past crimes to determine where to send patrols—a strategy the department called "cops on the dot." Brantingham's team believed they could make the maps predictive rather than merely descriptive.

Crimes of the future

One commonly used approach in predictive policing seeks to forecast where and when crime will happen; another focuses on who will commit crime or become a victim.
Predictive policing flow chart
Diagram: G. Grullón/Science
Postdoctoral scholar George Mohler, now a mathematician at Indiana University-Purdue University, Indianapolis, suggested that borrowing models from seismology might be useful. Earthquakes take place at a relatively fixed rate along existing fault lines, but quakes can also occur in clusters, when an initial quake is followed by aftershocks that occur close together in time and space, Brantingham explains. "Crime is actually very similar," he says. Some crimes are caused by built-in features of the environment, like a bar that closes at 2 a.m. every night, unleashing rowdy drunks onto a neighborhood. Others, such as a series of gang murders or a rash of neighborhood burglaries, happen because criminals' success invites more crimes or incites retaliation. Criminologists call this "repeat victimization"—the criminal equivalent of aftershocks.
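PredPol's actual model is proprietary, but the aftershock analogy corresponds to what statisticians call a self-exciting point process: a steady background rate plus a temporary bump in risk triggered by each past event. The sketch below is a minimal illustration of that idea; the parameter values and the `intensity` helper are invented for this example, not taken from PredPol.

```python
import math

# Minimal sketch of a self-exciting ("aftershock") point process.
# MU is the background rate from fixed features of the environment;
# each past crime adds a decaying bump, mimicking repeat victimization.
# All parameter values are illustrative assumptions.
MU = 0.2      # background events per day in one area
ALPHA = 0.4   # expected number of follow-on crimes triggered per event
OMEGA = 0.5   # decay rate: how quickly the elevated risk fades (per day)

def intensity(t, past_event_times):
    """Expected event rate at time t (in days), given past event times."""
    triggered = sum(
        ALPHA * OMEGA * math.exp(-OMEGA * (t - ti))
        for ti in past_event_times
        if ti < t
    )
    return MU + triggered

# A burglary on day 0 and another on day 2 lift the predicted rate on day 3
# well above the background level.
print(intensity(3.0, [0.0, 2.0]))   # about 0.37, versus the 0.2 background
```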
Brantingham and Mohler developed an algorithm—now a proprietary software package called PredPol—that predicts what will happen within a given police shift. The software, used by 60 police departments around the country, incorporates a narrow set of closely related crime events from both the immediate and longer-term past, with more recent crimes given heavier weight. The software strips personal details and looks at "only what, where, and when," Brantingham says. At the beginning of a shift, officers are shown maps with 150-by-150-meter boxes indicating where crime is likely to flare up. Fighting crime, the company says in promotional slides, is about "getting in the box."
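How PredPol weights its inputs is not public, but the "boxes" can be pictured as grid cells scored by a recency-weighted count of related crimes, with the highest-scoring cells drawn on the shift map. The sketch below assumes a hypothetical event list and a two-week half-life for the recency weight; neither detail comes from the company.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative recency-weighted grid scoring (not PredPol's actual weights).
# Each record is (what, where, when); "where" is snapped to a 150 m cell.
CELL_M = 150.0
HALF_LIFE_DAYS = 14.0  # assumption: a crime two weeks old counts half as much

def cell_of(x_m, y_m):
    return (int(x_m // CELL_M), int(y_m // CELL_M))

def rank_cells(events, now, top_n=3):
    """events: list of (crime_type, x_m, y_m, timestamp).
    Returns the top_n cells by recency-weighted count, i.e. the shift's boxes."""
    scores = defaultdict(float)
    for _crime_type, x, y, when in events:
        age_days = (now - when).total_seconds() / 86400.0
        scores[cell_of(x, y)] += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

now = datetime(2016, 9, 29)
burglaries = [
    ("burglary", 120, 80, now - timedelta(days=1)),
    ("burglary", 130, 95, now - timedelta(days=3)),
    ("burglary", 900, 40, now - timedelta(days=40)),
]
print(rank_cells(burglaries, now))  # cell (0, 0) outranks the older, distant case
```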
Here in Pittsburgh, Carnegie Mellon data scientists Wil Gorr and Daniel Neill developed a similar program for Chief McLay not long after he arrived in 2014. A fit, genial man who looks like Mr. Clean, McLay previously held what he calls a "retirement job" as head of a national policing association; he decided to get back into active policing just days after Michael Brown, an unarmed black man, was killed in Ferguson, Missouri, triggering nationwide protests. McLay was convinced that improving the use of data in policing would lead to better outcomes.
Like PredPol, Pittsburgh's CrimeScan program has a geographic focus, but it draws on a broader variety of indicators. Gorr and Neill took their inspiration from criminology research showing that criminals tend to be generalists, and they tend to progress from minor to more serious crimes. As a result, the duo hypothesized, reports of minor crimes could help predict potential flare-ups of violent crime. In a gang confrontation, Neill says, "maybe it starts out with harsh words and offensive graffiti, and turns into fist fights, which turn into shootings, which turn into lots of shootings." Along with observations from the recent past, CrimeScan incorporates scores of minor crime offenses and 911 calls—about things like disorderly conduct, narcotics, and loitering—to spit out predictions about city blocks likely to see upsurges in violent crime in the next few days or weeks.
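Gorr and Neill's published methods are more sophisticated, but the leading-indicator idea can be sketched very simply: flag blocks where recent counts of minor-crime reports and 911 calls run well above that block's usual baseline. The block IDs, indicator categories, and thresholds below are illustrative assumptions, not CrimeScan's.

```python
# Toy leading-indicator flag, not Gorr and Neill's actual CrimeScan model.
LEADING_INDICATORS = {"disorderly conduct", "narcotics", "loitering", "911 call"}

def flag_blocks(recent_counts, baseline_counts, ratio=2.0, min_events=5):
    """recent_counts / baseline_counts: {block_id: {indicator: count}} for the
    last week versus a typical week. Returns blocks whose leading-indicator
    activity is at least `ratio` times the baseline."""
    flagged = []
    for block, counts in recent_counts.items():
        recent = sum(c for k, c in counts.items() if k in LEADING_INDICATORS)
        base = sum(c for k, c in baseline_counts.get(block, {}).items()
                   if k in LEADING_INDICATORS)
        if recent >= min_events and recent >= ratio * max(base, 1):
            flagged.append(block)
    return flagged

recent = {"block_17": {"disorderly conduct": 6, "911 call": 4}}
baseline = {"block_17": {"disorderly conduct": 2, "911 call": 1}}
print(flag_blocks(recent, baseline))  # ['block_17']
```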
The Chicago police department (CPD), meanwhile, has taken predictive policing one step further—and made it personal. The department is using network analysis to generate a highly controversial Strategic Subject List of people deemed at risk of becoming either victims or perpetrators of violent crimes. Officers and community members then pay visits to people on the list to inform them that they are considered high-risk.
There are some cities where they have done a great job on hot spot policing, and they have terrible relationships with their communities of color.
Cameron McLay, Pittsburgh Bureau of Police
The Custom Notification program, as it's called, was inspired in part by studies done by Andrew Papachristos, a sociologist at Yale University. Papachristos grew up in Chicago's Rogers Park neighborhood in the 1980s and '90s, at the height of the crack era. Being white insulated him from some of the violence, he says: "The color of my skin meant I never had to join a gang." But one night, Papachristos watched as a gang burned his parents' diner to the ground because they refused to pay extortion money.
Decades later, when he started studying crime, Papachristos wanted to understand the networks behind it. For a 2014 paper, he and Christopher Wildeman of Cornell University studied a high-crime neighborhood on Chicago's West Side. They found that 41% of all gun homicide victims in the community of 82,000 belonged to a network of people who had been arrested together, and who comprised a mere 4% of the population—suggesting, with other studies, that much can be learned about crime by examining the company people keep, Papachristos says.
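The underlying analysis links people who were arrested together into a network and then asks how concentrated victimization is within it. A rough, self-contained sketch of that bookkeeping appears below; the person IDs and arrest records are invented, and the real study's network construction is considerably more involved.

```python
from collections import defaultdict

# Rough sketch of a co-arrest network analysis of the kind Papachristos and
# Wildeman describe. All names and numbers here are invented.
def co_arrest_network(arrest_events):
    """arrest_events: list of sets of person IDs arrested together."""
    graph = defaultdict(set)
    for group in arrest_events:
        for person in group:
            graph[person] |= (group - {person})
    return graph

def victim_share_in_network(graph, victims):
    """Fraction of homicide victims who appear in the co-arrest network."""
    in_network = [v for v in victims if v in graph]
    return len(in_network) / len(victims) if victims else 0.0

arrests = [{"p1", "p2"}, {"p2", "p3"}, {"p4", "p5"}]
victims = ["p3", "p9", "p5", "p8"]
print(victim_share_in_network(co_arrest_network(arrests), victims))  # 0.5
```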
Intrigued by these ideas, the Chicago police teamed up with Miles Wernick, a medical imaging researcher at the Illinois Institute of Technology in Chicago, to develop the Custom Notification program. Because gang violence was distributed across the city, hot spot policing wasn't as effective in Chicago, says Commander Jonathan Lewin, head of technology for the department. "The geography of the map isn't as helpful as looking at people and how risky a person is," he says.
The list has invited allegations that CPD is veering dangerously close to the flawed "precrime" unit in the sci-fi film Minority Report, which taps the premonitions of a trio of mutated humans to stop potential murderers before they act. And in bringing bad press, the program has contributed to the problems of the beleaguered CPD, which a mayoral task force described last April as having "systemic institutional failures going back decades that can no longer be ignored."
Papachristos—who is not involved with the Strategic Subject List himself—cautions that the program overemphasizes both an individual's potential to offend and the use of policing, rather than other services, to fight crime. That "reinforces the way in which America devalues the lives of young people of color," he wrote in the Chicago Tribune on 1 August.
What's more, the police data that this and other predictive policing programs rely on are skewed toward crimes committed by people of color, says William Isaac, an analyst with the Human Rights Data Analysis Group and a Ph.D. candidate at Michigan State University in East Lansing. That renders any predictions suspect, he says: "They're not predicting the future. What they're actually predicting is where the next recorded police observations are going to occur." Predictions, indeed, can become self-fulfilling prophecies, says Jennifer Lynch of the Electronic Frontier Foundation in San Francisco, California. "We know from past examples that when police are expecting violence, they often respond with violence."
Brantingham, the architect of PredPol, agrees that civil liberties concerns "are really important questions." But he says that predictive policing can be more fair than the status quo: "What's often forgotten is that any time you put an officer in the field there's a risk of civil liberties violations."
Other critics, meanwhile, raise a more fundamental question about predictive policing: Does it even work?
In a 2012 IBM Smarter Planet commercial, a police officer glances at the screen of his squad car, then speeds to a convenience store. He arrives as a clerk is counting money, and moments before a would-be robber shows up. That's science fiction, says RAND's Hollywood—and likely to stay that way. To predict specific crimes, he says, "we would need to improve the precision of our predictions by a factor of 1000."
Crime often clusters in hot spots like those on the map behind Pittsburgh, Pennsylvania, Police Chief Cameron McLay. He hopes that algorithms capable of predicting future hot spots will help make police work less biased.
Stephanie Strasburg
As to whether existing methods of predictive policing work as advertised, by forecasting the likelihood of crime, the evidence is scarce, and the few data points are not encouraging. For instance, an assessment of Chicago's Strategic Subject List program published by Hollywood and fellow RAND researchers last month found that individuals singled out in the pilot phase were no more likely to become victims of homicides than a comparison group. They were, however, more likely to be arrested for a shooting—possibly because, the researchers write, "some officers may have used the list as leads to closing shooting cases." (The program's "scores are not used for probable cause, and individuals cannot be arrested because of a high score," a spokesperson for CPD says.)
Some scholars have tested models' predictive power against historical crime rates, with encouraging results. But evaluating a program once in use can be more complicated. A randomized, controlled study—a design borrowed from medicine—is the gold standard, but few departments are willing to designate a control area or group, where they won't try to predict crime. "The average police chief lasts 3 years," McLay says. "I don't have time for controls." Hollywood adds that with programs like Chicago's, which single out individuals, "no one wants to say, 'I'm not going to perform interventions with the 10 people who are most at risk.'"
The issue is complicated by the fact that algorithms like PredPol's are proprietary, making it difficult for outside scholars or the general public to evaluate their effectiveness. "For the sake of transparency and for policymakers, we need to have some insight into what's going on so that it can be validated by outside groups," Isaac says. But Brantingham says researchers can evaluate the outcome without knowing all the underlying research.
One notable randomized, controlled experiment was conducted by the Shreveport, Louisiana, police department in 2012 with NIJ funding. The study found that the difference in crime reduction between the control and experimental districts was statistically insignificant. But the experiment, which focused on property crimes, also revealed the challenges of such studies. Take-up of predictive hot spot policing among the three experimental districts was high at first, but dropped off after 4 months as enthusiasm waned, likely skewing the results. Commanders in one of the control districts, meanwhile, grew excited by the experimental districts' success at reducing crime and decided to pursue their own targeted operations in known hot spots.
The most extensive independent evaluation of predictive policing so far, the RAND report, is lukewarm about even the most sophisticated predictive methods, stating that "increases in predictive power have tended to show diminishing returns." Hollywood adds that "the places where really sophisticated data mining algorithms shine" are those where "there are very complex nonlinear relationships between input data and output data." (One example is optical character recognition, which is used for digitizing printed texts.) With crime, he adds, "It's much more simple—the more risk, the more crime. There aren't really complicated relationships going on."
Perched at a conference table overlooking the blighted Allegheny-West neighborhood, Chief McLay says he is keenly aware that rolling out CrimeScan will not solve all the Pittsburgh department's problems. "There are some cities where they have done a great job on hot spot policing, and they have terrible relationships with their communities of color," he says.
The key, some experts say, is not to rely only on statistical methods, but to combine them with other approaches. For example, Papachristos is now working with the Chicago Violence Reduction Strategy, a program that identifies individuals at risk of becoming either violent offenders or victims, then gets them access to social services and employment assistance. A few appear on the Strategic Subject List, says Chris Mallette, the program's executive director, but most are selected through old-fashioned observation.
McLay seems to lean toward a similar approach. As CrimeScan launches, he also aims to build relationships with high-crime communities and ensure that big data are used to solve problems rather than simply focus police work. "Therein lies the key: Who finds that sweet spot?" he says. "Who uses just enough data to be really good, and has the relationships that are just robust enough? That's the challenge that policing in this country is facing right now."
