Japan's Bold Move Towards Predictive Policing: AI Cameras Set To Preempt Crimes


In a move reminiscent of the dystopian film "Minority Report," Japan's National Police Agency is poised to launch an unsettling experiment: using advanced artificial intelligence (AI) security cameras to preempt serious crimes.

The AI-enhanced cameras will focus on machine-learning pattern recognition across three key areas: behavior detection to identify suspicious activities, object detection to spot potential weapons, and intrusion detection to protect restricted zones.
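The agency has not published technical details, but the three detection areas can be pictured as independent model "heads" scoring each camera frame, with an alert raised when any score crosses a threshold. The sketch below is purely illustrative; the category names and threshold values are assumptions, not anything disclosed by the NPA.

```python
# Hypothetical sketch of a three-head detection pipeline. The thresholds
# and categories are illustrative assumptions, not the NPA's actual design.

THRESHOLDS = {
    "behavior": 0.8,   # suspicious-activity score from a behavior model
    "object": 0.6,     # weapon-like-object score from an object detector
    "intrusion": 0.5,  # restricted-zone-entry score from a zone monitor
}

def flag_alerts(scores: dict) -> list:
    """Return the detection categories whose model scores meet the threshold."""
    return [cat for cat, s in scores.items() if s >= THRESHOLDS.get(cat, 1.0)]

# Example: one frame scored by all three heads.
print(flag_alerts({"behavior": 0.9, "object": 0.2, "intrusion": 0.7}))
# → ['behavior', 'intrusion']
```

In a real deployment the scores would come from trained vision models and the thresholds would be tuned to balance missed detections against false alarms on innocent passers-by, which is precisely where the civil-liberties concerns arise.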

This initiative is slated to commence within the current fiscal year, which ends in March 2024. The decision follows the shocking assassination of former Japanese Prime Minister Shinzo Abe and a subsequent attempted attack on the incumbent Prime Minister Fumio Kishida. These high-profile crimes, often perpetrated by isolated individuals known as 'lone offenders,' have spurred Japan's law enforcement to explore innovative crime-prevention strategies.

Advocates of the technology argue that the AI's 'behavior detection' algorithm can be trained to recognize patterns suggestive of suspicious activity, such as repetitive, anxious glances. Earlier attempts at AI-assisted security have focused on behaviors like restlessness and fidgeting, potentially indicative of unease or guilt. This represents a significant advancement in what is achievable for contemporary security agencies.

Meanwhile, in China, the population is under constant surveillance, from omnipresent police cameras on street corners to online monitoring and censorship. A new wave of technology is now delving into the vast pool of data collected from daily activities, aiming to predict crimes and protests before they occur. However, these predictive systems are not solely targeting individuals with a criminal record; they are also identifying vulnerable groups, including ethnic minorities and those with a history of mental illness.

This state-of-the-art technology hinges on algorithms that sift through data, looking for patterns and anomalies that could signal potential threats. While these algorithms are viewed with suspicion in the West, they are celebrated as victories in China.
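At its core, "looking for anomalies" in behavioral data often reduces to statistical outlier detection: flagging values that deviate sharply from a baseline. The toy sketch below illustrates the general idea with a simple z-score test; it is a minimal illustration of the concept, not a representation of any system actually deployed in China or elsewhere.

```python
# Toy illustration of anomaly detection via z-scores. Real predictive-policing
# systems are far more complex; this only conveys the basic statistical idea.
import statistics

def anomaly_flags(values, z_threshold=2.0):
    """Flag data points whose z-score exceeds the threshold (simple outlier test)."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return [False] * len(values)
    return [abs(v - mean) / stdev > z_threshold for v in values]

# Hypothetical example: daily visits to a location, with one sudden spike.
daily_visits = [3, 4, 2, 5, 3, 4, 40]
print(anomaly_flags(daily_visits))
# → [False, False, False, False, False, False, True]
```

The controversy is not with the arithmetic, which is mundane, but with what counts as a "baseline" and who ends up flagged: when the training data encodes who has historically been policed, the anomalies it surfaces can simply reproduce those patterns.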

Reports have highlighted instances where the technology flagged suspicious behavior, leading to investigations that uncovered fraud and pyramid schemes. However, these technologies extend far beyond mere surveillance. They are potent tools for a society intent on maintaining near-total social control over its citizens.

Under the leadership of President Xi Jinping, China's focus on preserving social stability is unwavering, and any perceived threat to it is swiftly quashed. The security state has become increasingly centralized, employing technology to suppress unrest, enforce stringent COVID-19 lockdowns, and stifle dissent. Regrettably, China appears to be the blueprint for leaders like Justin Trudeau and others.

Predictive policing, hailed as a groundbreaking innovation by TIME Magazine in 2011, has been quietly rolled out across the United States. Numerous police departments are experimenting with predictive software, envisaging a future where law enforcement could anticipate and prevent crimes before they occur. Developers promote this technology as a means to eliminate human bias, enhance the accuracy of policing, and optimize resource allocation.

This approach gained traction with substantial federal grants directed towards smart policing solutions. The Los Angeles Police Department (LAPD), under the leadership of Police Chief William Bratton, pioneered one of the initial trials in 2009 with $3 million in federal funding. The objective was to predict crime-prone areas and preemptively deploy officers to deter criminal activities.

The involvement of respected figures like Bratton lent credibility to the technology, leading to its adoption by other departments nationwide. By 2014, a survey revealed that 38% of 200 surveyed departments were using predictive policing, and 70% were planning to implement it in the coming years.

Using data to identify high-crime areas and allocate resources accordingly is a logical application. However, with the rapid advancements in AI technology and the universal tracking of our devices, how long before this pre-crime tech is unleashed on the populace? As AI rapidly approaches "Black Mirror"-esque levels of surveillance, society will need to confront these questions head-on.