Scientists working with US police have previously developed AI systems to predict crime, but those systems have repeatedly been found to carry a racist bias. The researchers are not giving up, however: they have announced a new attempt to predict crimes with AI, as reported by Mashable.
In 2016, for example, the Chicago Police Department used an AI crime-prediction model. They tried to make the system free of racist bias but got the opposite result: the model was meant to identify the citizens most likely to be involved in a shooting, and the list it generated included 56% of the city's Black men between the ages of 20 and 29.
The new model, however, is intended to predict when and where the next crime will occur while leaving such biases behind. Scientists from the University of Chicago used the AI tool to analyze data on crimes committed between 2014 and 2016, in order to predict the levels of crime that would occur in the city in the following weeks. Sure enough, the system predicted the probability of crime in the city with almost 90 percent accuracy. The tool was also applied to seven other US cities, with similar results.
In addition to predicting crime, the scientists were able to observe how police responded to crime patterns. Ishanu Chattopadhyay, a professor at the University of Chicago, told Insider that the AI model allowed them to determine that crimes committed in higher-income neighborhoods drew a greater police response than crimes committed in low-income neighborhoods. In other words, this suggests there are biases in how the police department deals with crime across the city.
“Such predictions allow us to study perturbations of crime patterns suggesting that the response to rising crime is biased by neighborhood socioeconomic status, draining policy resources from socioeconomically disadvantaged areas, as demonstrated in eight major U.S. cities,” according to the report.
Chattopadhyay also told New Scientist that the data used by the model can indeed be biased, and the scientists are working to close that gap. To that end, they want to focus mainly on identifying the places where a crime will occur rather than on suspects.