
Crime Prevention through Artificial Intelligence

(Planned) Crime and Punishment?

Data analytics in crime prevention in the UK

Policing in the United Kingdom is currently at a crossroads. Fiscal austerity and the economic uncertainty surrounding Brexit mean that the government needs to find ways to reduce expenditure. Simultaneously, the police are asked to fulfil an ever-expanding range of tasks: in light of the recent attacks in Manchester and London, there is a need to reassure the public through increased police presence, which puts additional strain on the service. The question therefore becomes: how can the police do more with less?

The answer, it seems, is technology. More specifically, it lies in using advanced data analytics to aid the police in their tasks and to make them more efficient. Recent pilot projects in Durham, Los Angeles and Santa Cruz have tested the use of Artificial Intelligence in crime prevention. The tests have been widely regarded as hugely successful, but they also raised a number of concerns about privacy, legal accountability and bias. This briefing will do three things:

1) Highlight the findings of recent studies and pilot projects

2) Outline areas of concern in using the technology

3) Put forward proposals to minimise intrusion into innocent citizens' data and maximise utility

 

The pilot project in Los Angeles was the largest of its kind thus far, which makes its findings somewhat more representative than those of the Santa Cruz or Durham pilots. The programme used, PredPol, was still in its infancy in 2010, when it was first deployed. The pilot therefore focused on burglaries and vehicle thefts in order to narrow the data the AI had to process. The algorithm was given vast amounts of crime data in which to identify patterns, in addition to some assumptions manually coded into it by the program's developers.

 

The algorithm divided the entire city into a grid. Based on the temporal and spatial occurrence of crimes and their frequency, it learned patterns that it then used to identify so-called 'hotspots': narrowly defined areas of 500 by 500 feet where certain crimes are likely to occur in the future. Within them, the algorithm can make specific predictions as to the type of crime (i.e. which types of properties are likely to be burgled) and when the crime is most likely to occur.
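To make the grid idea concrete, the minimal Python sketch below bins an incident log into 500-by-500-foot cells and ranks the busiest cells as candidate hotspots. The data fields and the raw-count scoring are illustrative assumptions of mine; PredPol's actual method is proprietary and considerably more sophisticated.

```python
# Illustrative sketch only: bin incidents into 500-by-500-foot grid
# cells and rank the busiest cells as candidate 'hotspots'. The
# Incident fields and raw-count scoring are assumptions, not
# PredPol's actual (proprietary) method.
from collections import Counter
from dataclasses import dataclass

CELL_SIZE_FT = 500  # the 500-by-500-foot cells described above

@dataclass
class Incident:
    x_ft: float      # easting within the city grid, in feet
    y_ft: float      # northing within the city grid, in feet
    crime_type: str  # e.g. "burglary" or "vehicle_theft"

def cell_of(incident: Incident) -> tuple[int, int]:
    """Map an incident to the grid cell containing it."""
    return (int(incident.x_ft // CELL_SIZE_FT),
            int(incident.y_ft // CELL_SIZE_FT))

def hotspots(incidents: list[Incident], top_n: int = 5):
    """Rank cells by raw incident count; a real system would also
    weight by recency and crime type."""
    counts = Counter(cell_of(i) for i in incidents)
    return counts.most_common(top_n)

# Three burglaries cluster in one cell; one theft sits elsewhere.
log = [Incident(120, 480, "burglary"),
       Incident(300, 260, "burglary"),
       Incident(410, 90, "burglary"),
       Incident(5200, 7300, "vehicle_theft")]
print(hotspots(log))  # [((0, 0), 3), ((10, 14), 1)]
```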

 

Underlying these predictions are certain assumptions and patterns, such as: criminals are essentially territorial and often operate within the same area for long periods of time, and they tend to pick similar targets (e.g. condominium burglaries). One of the AI's developers likens it to the detection and prediction of earthquakes: once an earthquake, or a crime, has occurred in a place, aftershocks, or follow-up crimes, are likely. Additionally, the AI considers local factors, such as an increased likelihood of certain crimes around bars, high schools and large parking lots.
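The earthquake analogy maps naturally onto so-called self-exciting models, in which each past event temporarily raises the predicted rate of further events nearby. The toy sketch below illustrates the idea; all parameter values are mine and purely illustrative, not fitted to any real crime data.

```python
# Toy self-exciting ("aftershock") model: each past crime in a cell
# adds an exponentially decaying boost to the predicted crime rate.
# MU, BOOST and DECAY_DAYS are illustrative values, not fitted ones.
import math

MU = 0.2          # background crime rate in a cell (events per day)
BOOST = 0.5       # extra rate contributed by one recent crime
DECAY_DAYS = 7.0  # how quickly the aftershock effect fades

def predicted_rate(now_day: float, past_crime_days: list[float]) -> float:
    """Background rate plus decaying boosts from each past crime."""
    aftershocks = sum(
        BOOST * math.exp(-(now_day - t) / DECAY_DAYS)
        for t in past_crime_days
        if t <= now_day
    )
    return MU + aftershocks

# Burglaries on day 0 and day 3 elevate the predicted rate on day 4...
print(round(predicted_rate(4.0, [0.0, 3.0]), 3))   # 0.916
# ...but the effect has largely decayed a month later.
print(round(predicted_rate(34.0, [0.0, 3.0]), 3))  # 0.21
```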

 

Measured in numerical terms, the project was enormously successful, reducing burglaries by 33% and violent crime by 21%. Responses from the LA police officers testing the software, even those initially sceptical, were overwhelmingly positive. Fighting crime in a metropolis can feel like a game of 'Whac-A-Mole', where the detection and prevention of crime in one spot leads to increased criminal activity in another. The ability to make reasonable predictions as to where criminal activity will shift is therefore warmly welcomed by law enforcement, as it enables effective prevention.

Yet, while crime statistics suggest that the pilot project was a sweeping success, it also illustrated some issues and concerns surrounding the application of such technology. One criticism occasionally levelled against the use of data analytics and AI in combating crime is that it can only tell the police things they already know: a good, experienced police officer knows the neighbourhoods they work in well enough to anticipate criminal activity in certain spots, based on past experience and on specific features that make crime easier (dark alleys, absence of CCTV, a high local crime rate).

These criticisms neglect two crucial aspects, though. First, the insights police have are based on the individual experience of officers and take years to develop; predictive policing based on AI allows law enforcement to 'parachute' officers into any given area as needed and for them to be instantly effective. Secondly, Prof. Brantingham, who helped develop the PredPol algorithm, insists that 'crime hotspots pop up and spread and disappear and pop up again in really complicated ways that are just very, very difficult, if not impossible, for the individual to intuit'.[1] This fluidity of hotspots is indeed difficult for individuals to grasp and keep track of. Additionally, PredPol and its competitors are meant to be assistive only, highlighting potential crime spots rather than prescribing concrete actions and timetables.

More serious are the concerns voiced by privacy groups over other applications of data analytics to crime prevention. Prof. Brantingham is credible when he asserts that PredPol is 'about where and when crime is most likely to occur, not who will commit it'.[2] This is intuitively correct, as the objective of police work is to reduce burglaries, not to arrest specific burglars. Yet advanced analytics do permit the pursuit of individual criminals, by incorporating crimes committed in the past to draw conclusions about individuals' likely future crimes.

Concerns over such applications to individuals are amplified by concerns over racial bias inherent in the data the algorithm is fed. The argument runs as follows: assume the police stop and search more vehicles driven by members of ethnic minorities due to subconscious bias. This in turn increases the number of arrests of ethnic minorities. If this data is then fed into the algorithm, the bias is transplanted into the supposedly objective algorithm.
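A toy simulation makes the worry tangible. In the hedged sketch below, patrols are sent wherever the most arrests are already recorded, and patrolling in turn generates the new arrests, so an initially small, bias-induced skew is locked in and amplified. The winner-takes-all patrol rule is a deliberate simplification of mine, not how any deployed system allocates officers.

```python
# Toy simulation of the feedback loop: patrols follow recorded
# arrests, and arrests follow patrols, so a small initial skew from
# biased stops grows in the data. The winner-takes-all patrol rule
# is a deliberate simplification for illustration.
def simulate_feedback(arrest_counts: dict[str, int],
                      rounds: int = 5,
                      new_arrests_per_round: int = 10) -> dict[str, int]:
    counts = dict(arrest_counts)
    for _ in range(rounds):
        # The 'algorithm' flags the area with the most recorded arrests...
        target = max(counts, key=counts.get)
        # ...and patrolling that area produces this round's new arrests.
        counts[target] += new_arrests_per_round
    return counts

# Areas A and B have identical true crime rates, but A starts with a
# small recorded-arrest surplus caused by biased stop-and-search.
print(simulate_feedback({"A": 12, "B": 10}))  # {'A': 62, 'B': 10}
```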

A lawsuit by the Electronic Frontier Foundation (EFF) forced the FBI to release data indicating that the algorithm used in L.A. has been fed information not only on convicted criminals but also on innocent citizens whose pictures were taken for entirely innocent reasons, such as employment background checks. While this is deeply unsettling from a data protection and privacy perspective, the underlying rationale is easy to understand from a technical perspective: a fundamental principle of AI is that the more data an algorithm receives, the better it becomes at its task.

 

However, in light of the dangers of indiscriminate data gathering, it is crucial to find ways to collect and use data more intelligently than was the case in the pilot programmes. This can be achieved through better interconnection of technological innovations. Automated license plate readers, CCTV cameras and drones are already common practice in many police forces around the globe. These technologies hold great promise, enabling fewer police officers to provide a better service to citizens. In my view, there is still enormous untapped potential for efficiency gains in combining these technologies more effectively. Time is a crucial commodity, particularly in preventing crime. Real-time analysis therefore depends on the AI being able to tap various sources of information to create a more comprehensive crime map from which to make predictions.
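As a sketch of what such interconnection might look like at the data level, the snippet below merges several time-stamped sensor feeds into a single chronological stream that a predictive model could consume. The sources and event fields are assumptions for illustration, not a description of any deployed police system.

```python
# Illustrative sketch: fuse time-stamped feeds (license plate readers,
# CCTV, drones) into one chronological event stream for prediction.
# The Event fields and sources are assumptions for illustration.
import heapq
from typing import Iterator, NamedTuple

class Event(NamedTuple):
    timestamp: float               # seconds since epoch
    source: str                    # "alpr", "cctv" or "drone"
    location: tuple[float, float]  # latitude, longitude
    detail: str

def merge_feeds(*feeds: Iterator[Event]) -> Iterator[Event]:
    """Merge already-sorted feeds into one chronological stream,
    giving the model a single, comprehensive picture."""
    return heapq.merge(*feeds, key=lambda e: e.timestamp)

alpr = iter([Event(100.0, "alpr", (51.50, -0.12), "plate sighted")])
cctv = iter([Event(95.0, "cctv", (51.51, -0.13), "loitering detected")])
for event in merge_feeds(alpr, cctv):
    print(event.timestamp, event.source, event.detail)
```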

 

In order to retain the trust of the UK citizens it serves, policing must be subject to a set of concrete, enforceable and, most importantly, transparent rules. It must be clear to each citizen which data the police hold about them and why. Indiscriminate use of data is singularly unhelpful, as it casts blanket suspicion on people never convicted, or even suspected, of committing a crime. However, since AI is only ever as good as the data it processes, independent data gathering through drones, traffic cameras, CCTV and automated license plate readers is essential. While the risk is never completely absent, such independent data gathering could lower the chance of human bias being inserted into the algorithm in the way delineated above.

 

A duty to inform citizens about the data the police hold would promote a more parsimonious use of citizens' data, particularly when coupled with a time limit on how long innocent citizens' data can be stored. In light of the broad range of tasks and duties the British police fulfil, substituting technology for them is neither feasible nor desirable. Given an appropriate legal framework, adequate funding and respect for public sensitivity to Big Data issues, however, technology could form an essential supplement to police manpower.
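Such a retention rule is straightforward to express in code. The sketch below purges innocent citizens' records once they exceed a fixed window while leaving conviction records untouched; the record schema and the twelve-month limit are hypothetical choices of mine, not an existing statutory rule.

```python
# Sketch of a retention rule: keep conviction records indefinitely,
# purge innocent citizens' records after a fixed window. The schema
# and the 12-month limit are hypothetical, not a statutory rule.
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)

def purge_expired(records: list[dict], now: datetime) -> list[dict]:
    """Drop records on unconvicted citizens once they expire."""
    return [
        r for r in records
        if r["convicted"] or now - r["collected_at"] <= RETENTION
    ]

now = datetime(2018, 1, 1)
records = [
    {"id": 1, "convicted": True,  "collected_at": datetime(2015, 6, 1)},
    {"id": 2, "convicted": False, "collected_at": datetime(2016, 3, 1)},
    {"id": 3, "convicted": False, "collected_at": datetime(2017, 9, 1)},
]
print([r["id"] for r in purge_expired(records, now)])  # [1, 3]
```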

 


[1] https://www.theguardian.com/cities/2014/jun/25/predicting-crime-lapd-los...

[2] https://www.washingtonpost.com/local/public-safety/police-are-using-soft...
