Smart Cities: Wired for Injustice

This article originally appeared in Spanish in Bulletin #13 of Oficina Antivigilancia (also available in Portuguese) on 1 March 2016. It is licensed under Creative Commons.


The cities of the future might be wired for efficiency and sustainability, but without serious changes in how they’re being built, they’re going to be wired for injustice, too.

Smart cities use technology to improve quality of life. But for whom? And to what end? They may be able to deliver services more efficiently and sustainably, but unless they are built with human rights in mind, they will also replicate existing structures of injustice. The pervasive surveillance and reliance on biased algorithms that characterize smart cities increase the criminalization of society’s most marginalized people without truly improving their quality of life.

The smart city concept

There are many definitions of smart cities, but the general consensus is that they use technology to improve networks and services. Smart cities often rely on “sensors embedded in roadways, power grids, buildings and other assets.” Software analyzes the data from these sensors and determines how to optimize services for a given goal, such as energy efficiency.

Smart surveillance

It’s all those sensors that make smart cities such perfect panopticons. Corporations like Microsoft and Schneider Electric have been key in pushing for the spread of these technologies.

Most critiques of smart cities have focused on privacy in general, not law enforcement per se. That’s understandable, since privacy and freedom are interdependent. But that broad focus ignores the fact that how law enforcement behaves is the most important marker of freedom for many people around the world. As US Director of National Intelligence James Clapper recently confirmed, there’s no question that law enforcement will use “the [internet of things] for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials…”

The Internet of Things includes smart city technologies. What’s more, law enforcement can aggregate and analyze the data collected by smart cities. The data the NSA collected by spying on phone call records allowed the government to determine social networks, medical conditions, religious beliefs and more. Smart city data will be even more comprehensive and revealing.

Predictive policing and smart cities

Law enforcement practices are already being altered by technology. But is this making policing better? If your definition of better is doing more of what police have always done, maybe.

Predictive policing uses historical crime data to determine where crime is most likely to occur, so that departments can direct resources to those areas. It sounds innocuous enough, but as myriad social scientists and statisticians have pointed out, that historical data is based on biased police practices.
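The feedback loop buried in that design can be made concrete with a toy simulation. Nothing below reflects PredPol or any real vendor’s model; the neighborhoods, rates, and numbers are invented purely to show how patrol allocations driven by past arrest data reinforce themselves even when underlying offending is identical:

```python
import random

random.seed(0)

# Toy model (illustrative only, not any real predictive-policing system):
# two neighborhoods with the SAME underlying offense rate, where
# neighborhood A starts with more recorded arrests simply because it
# was patrolled more heavily in the past.
TRUE_OFFENSE_RATE = 0.05           # identical in both neighborhoods
arrests = {"A": 20, "B": 5}        # historical (biased) arrest counts

def patrol_allocation(history):
    """Allocate patrols in proportion to past arrests -- the core idea
    behind place-based crime prediction."""
    total = sum(history.values())
    return {area: count / total for area, count in history.items()}

# Simulate 1,000 rounds: more patrols -> more offenses observed ->
# more arrests recorded -> even more patrols next round.
for _ in range(1000):
    share = patrol_allocation(arrests)
    for area in arrests:
        # An offense is only *recorded* if a patrol is there to see it;
        # the factor of 10 just makes recorded events frequent enough
        # to show the effect in a short run.
        if random.random() < share[area] * TRUE_OFFENSE_RATE * 10:
            arrests[area] += 1

print(arrests)
```

Run it and neighborhood A ends up with many times neighborhood B’s arrest count, despite identical offense rates: the prediction validates the bias it was trained on.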

In fact, predictive policing software like PredPol isn’t doing much more than law enforcement has always done. As Sgt. Steve Armon of the Dallas Police Department admits, “a lot of times, these companies can’t tell us things we don’t already know. We have our own crime analysis group, and we already apply the same principles they’re using.”

Drug arrests in the United States are a perfect example of bias in crime data. Black Americans are 3.6 times more likely to be arrested for selling drugs, even though whites are actually more likely to sell them. Black Americans are also 2.5 times more likely to be arrested for possessing drugs, even though about the same percentage of black and white Americans use drugs.

Data-based policing may be very good at predicting where to arrest black people, but that’s hardly an improvement. In the United States black people are incarcerated at 6 times the rate of whites, and black Americans already “represent 12% of the total population of drug users, but 38% of those arrested for drug offenses.”

Chicago’s “custom notifications” program provides a chilling example of how this bias plays out. Officers make personal visits to individuals based partly on an algorithm that determines that they are highly likely to be involved in a violent crime. The officers may offer social services or “a tailored warning” to let them know that the police are watching them.

Chicago police tortured black residents of the South Side for decades and covered it up. They continue to shoot and kill the city’s black residents and to hinder investigations into those shootings. Increasing police contact with the city’s residents of color without addressing these problems hardly seems like the best use of city resources.

What’s more, the algorithm the police department uses is central to the program, yet it is shrouded in secrecy. The people marked for custom notifications have not necessarily ever committed any crime themselves, much less a violent one. Instead, the algorithm looks at factors such as “arrests, warrants, parole status, weapons and drug-related charges, acquaintances’ records, having been a victim of a shooting or having known a victim, prison records, open court cases, and victims’ social networks.” Police denied a public records request for the algorithm itself, foreshadowing another major problem with smart cities: algorithms make decisions that intimately affect people’s daily lives, yet they are completely opaque.
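To see why that opacity matters, consider a deliberately simplified risk score built from factors like those listed above. The weights and field names here are invented for illustration; Chicago’s actual model is secret, which is precisely the problem:

```python
# Hypothetical risk score (invented weights -- the real algorithm is
# secret). Note that being a shooting VICTIM and knowing people with
# records both raise the score, even with no record of one's own.
def risk_score(person):
    weights = {
        "arrests": 2.0,
        "warrants": 3.0,
        "shooting_victim": 4.0,
        "acquaintance_arrests": 1.5,
    }
    return sum(weights[k] * person.get(k, 0) for k in weights)

# Someone never charged with anything, who was shot once and has three
# acquaintances with arrest records, outscores someone with two arrests:
bystander = {"shooting_victim": 1, "acquaintance_arrests": 3}
offender = {"arrests": 2}
print(risk_score(bystander), risk_score(offender))  # 8.5 4.0
```

Without access to the weights, a person flagged this way has no way to know, let alone contest, why the police are at their door.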

What isn’t opaque is that bias cannot be escaped simply by digitizing it.

Should we give up?

Making cities more livable, sustainable, and workable is clearly a laudable goal. But residing in a surveillance state run by biased algorithms is hardly livable. If smart cities are built without the goal of ensuring basic human rights, they will instead worsen quality of life for many.

Fortunately, there are other ways to improve urban environments, even within the smart cities framework. For example, “citizen co-creation” of better cities can be “grounded more in issues of equity and social inclusion.” Medellín, Colombia, has been a leader in this area. The city’s “favelas were reintegrated into the city not with smartphones but with publicly funded sports facilities and a cable car connecting them to the city.” The city also took on projects like enhancing infrastructure and “conduct[ing] workshops in low-income communities to provide access to free Wi-Fi and improve digital literacy.” That focus on the most under-served, along with the recognition that technology isn’t always the best answer, has made Medellín successful.

It’s easy to look for a technological cure for the world’s most pressing problems. But in the case of livability in cities, we should learn to walk before we run. That means ensuring our cities have a baseline level of human rights for all.

 
