Automated Racism: How Predictive Policing Targets Black British Communities
Predictive policing, once the stuff of dystopian fiction, is now a harsh reality in Britain, and it’s disproportionately targeting Black communities. Nearly three-quarters of UK police forces are using these algorithm-driven “crime prediction” tools, despite mounting evidence that they are unfair, dangerous, and reinforce discrimination in policing. Amnesty International UK’s new report, Automated Racism, sounds the alarm that predictive policing is effectively automating racial bias in law enforcement. For Black communities, “predictive” policing is simply the latest chapter in a long history of over-policing, now digitised through flawed data and algorithms.
This article, part of BLAM UK’s Social Justice Series, explains what predictive policing is, why it’s so dangerous, and how we can join the fight to ban these racist systems as a step toward Black liberation.
What Is Predictive Policing?
Predictive policing refers to data-driven practices where computer programs analyse past police data to forecast future crime risks. This can involve mapping out crime “hotspots” (locations where crime is deemed likely to occur) or assigning “risk scores” to individuals who might commit or be involved in crime. Police claim these tools help them deploy officers more efficiently and prevent crime before it happens.
In reality, a computer cannot truly predict the future; it can only recycle the past. If the input data is biased or incomplete, the outputs will be biased as well. Rather than eliminating human bias, predictive policing often ends up amplifying it. One community member put it bluntly: it’s not really “predictive” policing, it’s predictable policing because it predictably targets the same marginalised groups who were over-policed to begin with.
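To make that concrete, here is a deliberately simplified sketch in Python of what a hotspot “forecast” boils down to. The neighbourhood names and counts below are invented for illustration, and real systems use more elaborate models, but the core logic is the same: count where police recorded incidents in the past, then label the most-recorded places “high risk”.

```python
from collections import Counter

# Hypothetical police records: each entry is the neighbourhood where an
# incident was logged. These are records of police activity, not a neutral
# measure of where crime actually happens.
recorded_incidents = [
    "Area A", "Area A", "Area A", "Area B", "Area A", "Area C", "Area A",
]

def predict_hotspots(incidents, top_n=2):
    """Rank neighbourhoods by past recorded incidents and label the top ones
    'high risk'. If the records are skewed by over-policing, the 'forecast'
    simply reproduces that skew."""
    return Counter(incidents).most_common(top_n)

print(predict_hotspots(recorded_incidents))
# e.g. [('Area A', 5), ('Area B', 1)] -- the most-policed area is "predicted"
# to be the riskiest, wherever crime actually occurs.
```

A tool like this cannot see anything the records do not already contain; it can only hand yesterday’s policing patterns back to today’s officers with a veneer of objectivity.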
The New Face of Racial Profiling
There are strong parallels between today’s predictive policing and the notorious 1970s-80s “Sus laws” that enabled police to harass Black Britons on mere suspicion. In fact, Automated Racism calls these systems “the modern face of racial profiling”. In other words, this “innovative” policing is really just old bias in new packaging.
Bias by design: Predictive policing systems are built and trained on police datasets, and British policing data is rife with institutional racism. Decades of biased stop and search, surveillance, and profiling of Black people have skewed the data that algorithms now consume. Feeding biased data into an algorithm doesn’t remove bias; it codifies it. The system will simply predict more crime in the same over-policed neighbourhoods and flag the same individuals already unjustly under suspicion.
This creates a vicious feedback loop. If an algorithm tells police to flood a particular area (often a deprived area with a high Black population) with patrols, officers will inevitably find “more crime” there: more stops and searches, more arrests, more recorded incidents. Those inflated numbers get fed back into the algorithm as proof the area is high risk, and the cycle continues. As Amnesty researchers note, the outcome is repeated targeting of certain areas and people, “creating a cycle of discrimination and criminalisation”. Even UK police leaders have started to acknowledge their forces are institutionally racist; adding algorithmic tools on top of that reality simply automates the racism. The result is that entire communities are treated as potential criminals by default: a high-tech version of the over-policing Black Britons have long endured.
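That loop can be shown with a small toy model. Everything in it is invented for illustration (two areas, identical underlying offending, made-up patrol numbers and recording rates); it is not drawn from any real force’s system, but it captures the mechanism described above.

```python
# Toy model of the feedback loop: two areas with identical underlying
# offending, but area 0 starts with more *recorded* incidents because it was
# historically over-policed. Each round, most patrols go to whichever area the
# records flag as the "hotspot", and more patrols mean more incidents recorded.

TRUE_OFFENDING = [100, 100]   # identical underlying offending in both areas
recorded = [60.0, 30.0]       # historic police records skewed towards area 0
RECORDS_PER_PATROL = 0.004    # share of offending recorded per unit of patrol
TOTAL_PATROLS = 100

for round_no in range(1, 6):
    # Step 1: the "prediction" is simply whichever area has more past records.
    hotspot = 0 if recorded[0] >= recorded[1] else 1
    patrols = [20.0, 20.0]
    patrols[hotspot] = TOTAL_PATROLS - 20.0   # concentrate patrols on the flagged area

    # Step 2: recorded incidents rise with patrol presence, not with offending.
    new_records = [TRUE_OFFENDING[i] * RECORDS_PER_PATROL * patrols[i]
                   for i in range(2)]

    # Step 3: the new records feed back into the next round's "prediction".
    recorded = [recorded[i] + new_records[i] for i in range(2)]
    share = recorded[0] / sum(recorded)
    print(f"round {round_no}: area 0 holds {share:.0%} of all recorded incidents")
```

Even though offending is identical in both areas in this sketch, the flagged area’s share of recorded incidents climbs every round, because the records measure police attention rather than crime. That is the “cycle of discrimination and criminalisation” in miniature.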
Predictive Policing in the UK: Where and How is it Used?
Despite these dangers, predictive policing has quietly spread across the UK. Amnesty International’s investigation (including Freedom of Information requests to police forces) found that out of 45 local police forces, 32 have used geographic crime‑prediction tools and 11 have used individual risk‑profiling tools. Here are a few examples of how this technology is being deployed in Britain:
- Essex (Basildon): A predictive policing pilot in Basildon led to a spike in stop and searches from September 2020 to March 2021. The force stopped more people in Basildon than in the rest of the county combined, and Black people in Basildon were stopped almost 3.6 times more often than white people. Officers also used force against Black people nearly four times as often as against white people during this period.
- London (Lambeth): After the Metropolitan Police introduced predictive policing in Lambeth in 2020-21, that borough saw one of the highest stop and search rates in London. In fact, Lambeth recorded the second highest volume of stops of all London boroughs, a jump that coincided with the new “crime prediction” program.
Violating Human Rights
These data-driven policing tools don’t just mimic the biases of traditional policing; they also trample over fundamental human rights. Amnesty International warns that predictive policing as used in the UK breaches the country’s human rights obligations on multiple fronts, including:
- Right to Non-Discrimination: The use of these systems leads directly to racial profiling and the disproportionate targeting of Black people and other racialised groups (often in lower-income areas). This entrenches racial discrimination in policing.
- Presumption of Innocence: Predictive policing treats people as potential criminals before any actual crime has been committed. By flagging individuals or communities as “high risk” preemptively, it undermines the presumption of innocence and the right to a fair trial.
- Right to Privacy: These systems function through what is essentially mass surveillance, vacuuming up data about where we live, our associations, and our past interactions with police. Such indiscriminate surveillance is a gross intrusion that cannot be justified in a democratic society. It also risks chilling free expression and assembly in heavily monitored areas.
- Freedom of Association: When a neighbourhood is algorithmically labelled a hotbed of crime, residents may avoid gathering or going out in their own community due to police scrutiny. This has a chilling effect on ordinary community life, as people fear being harassed simply for being present in certain “target” areas.
Overall, predictive policing treats whole communities as suspects and violates rights that are meant to protect citizens from exactly this kind of overreach.
Impact on Black Communities
For Black Britons on the ground, the impact of these tactics is traumatic. People living in areas flagged by predictive policing report being relentlessly stopped, questioned, and even subjected to force by police, simply for being in their own neighbourhoods. “It’s labelled a crime hotspot. So when the police enter the area, they’re in the mindset of ‘we’re in a dangerous community, the people here are dangerous’… and they do police them violently,” one resident of the Grahame Park estate in London told researchers. In such a climate, just walking to the shops or gathering with friends can make you a target. The community comes to feel under siege, creating fear and mistrust of law enforcement. As another individual explained, after years of being repeatedly harassed by police, “They made me feel like I don’t have any rights at all.”
Critically, this heavy-handed surveillance hasn’t been proven to make anyone safer. “The evidence that this technology keeps us safe just isn’t there; the evidence that it violates our fundamental rights is clear as day,” says Sacha Deshmukh, Amnesty International UK’s Chief Executive. Predictive policing is failing on its own terms, offering a false promise of “security” while actually intensifying the over-policing of Black communities. We all want our communities to be safe, but safety cannot come at the cost of our rights and dignity. Black communities in particular deserve real public safety solutions developed with them, not tools that treat them as the enemy.
Time to Act: Stop Automated Racism!
Amnesty International is calling on the UK government to ban the use of “crime predicting” police technology, given the dangerous discrimination and rights violations it produces. BLAM UK stands firmly behind this call.
This is about confronting a policing culture that has criminalised Black people under one pretext or another for far too long. We need to dismantle these tools of automated racism and invest in approaches that actually keep communities safe and respect their rights.
In the meantime, there must be transparency and accountability. Authorities should reveal where and how predictive policing is being used, and affected communities should have meaningful ways to challenge any decisions made by these systems.
No algorithm should operate in darkness, without public oversight. Ultimately, however, the goal is to abolish these predictive policing systems outright. If data-driven tools cannot be used without supercharging racial bias (and the evidence is clear that in policing they cannot), then they have no place in a just society.
Now is the time to act.
Add your voice to the growing movement against automated racism. We urge you to sign Amnesty’s petition to ban predictive policing in the UK and to read the Automated Racism report in full to understand the scale of this issue. Share what you’ve learned with your networks and demand that our political leaders pay attention. This is about defending the rights of Black British communities to live free from constant suspicion and surveillance. It’s about ensuring technology is used to empower people rather than oppress them. Together, as a community, we can hold authorities to account, push for true justice in policing, and stop the cycle of automated racism in its tracks.