Article • June 13th, 2019
Cities • Technology • Innovation

Do we need new policy for facial recognition & privacy in smart cities?


With several governments recently banning the use of facial recognition technology, it's becoming an important topic of conversation among policymakers. Our recent Future of Cities handbook looks at the relationship between technology, policy, and innovation - offering city problem solvers some tangible guidance for how to responsibly integrate new technologies into city services. Here Zung Nguyen Vu and Rebecca Chau from Arup outline one approach for policymakers to consider in regulating the use of facial recognition software.

***

Imagine hopping onto an autonomous city shuttle straight from your door to get to work. Facial recognition software installed on the shuttle means that payment is automatically deducted without needing to take out your phone or card. Likewise, you pass through security at work without having to show any additional ID. Over lunch, you browse some shops, all of which are also using facial recognition as both a means of securing the stores against theft and a way of offering a frictionless payment service.

The above scenario is one version of a future smart city where biometric data is automatically collected in order to help the city and service providers customise offerings to residents. While the complete “smart city vision” might feel dystopian, the individual facial recognition use cases described above are already being rolled out by technology providers around the world.

  • For retail, in Amazon Go stores today, sensors and video cameras already allow shoppers to pick up items from shelves, put them into their bags, and leave without needing to check out or formally pay. While the first iteration relies on people scanning an app on their phones upon entry, Amazon patents reveal the company's intent to possibly bypass this step by relying on facial recognition.

  • For transportation, facial recognition technology is already being rolled out in airports from Heathrow to Atlanta.

  • In public spaces, CCTV cameras are already widely installed in streets, squares, and parks. South Wales police have used facial recognition technology in public spaces at least 20 times since May 2017. It's estimated that half of all American adults are in some sort of facial recognition database.

These developments mean that new “smart city” projects will likely consider or promote the use of facial data across these different applications. So how can local governments proactively put regulation in place ahead of these trends?

If and when data from these different applications is shared, the implications for privacy are enormous, and as a consequence, the public is concerned. San Francisco, for example, is taking a conservative route and attempting a blanket ban on local government use of facial recognition altogether.

On the flip side, there are myriad benefits that facial data use can potentially bring to cities. It can, for instance, make public transportation more seamless and efficient, therefore encouraging wider adoption of public options over private car use. It opens new ways for retailers to design service offerings, and for security forces to protect the public through counter-terrorism efforts. Can policy encourage positive, nuanced usage without “tossing the baby out with the bathwater”?

A common language for personal data collection?

Part of the difficulty in discussing and legislating all types of urban data governance (including facial ID) is how new, complex and abstract this territory is for most of us. We don't have the “layman” language to discuss the nuanced implications of personal data collection in public settings. Without this framework, the conversation quickly becomes binary and oppositional.

Here we attempt to make sense of facial data collection in cities by drawing parallels to something everyone has become familiar with in the past few years: web data collection. Can this new trend be seen as a physical extension of the model that has worked for online web services: I hand over my user data to a website, I get free services, and the website makes revenue from ads? Where can parallels be drawn, and where do the online and offline worlds diverge?

It is important to acknowledge that as a society we are just starting to formalise rules for data protection on the web. Awareness varies widely between people, but overall there are a few key concepts or behaviours that have emerged over time:

  1. People are generally aware that individual websites track their activities by using “cookies.” In the EU, the GDPR explicitly requires websites to tell people how and when their data is being collected, the first time they enter a site. It also prevents websites from selling personal data to another party without consent.

  2. A person can choose to turn on incognito mode on their web browser or “delete their cookies.” These are clear rules for how to stop being tracked or delete tracking history.

  3. Unless you explicitly “log in” to a web page and create a personal account, a web page can't necessarily tie your online activities to a particular identity. Creating an account - and signing into it - therefore become explicit acts of giving consent to collect data.

  4. You can change your online identity by changing accounts. It is not uncommon for people to have multiple Instagram or Twitter handles today.

We recognise that being tracked using your face in a city is very different from being tracked through a browser. Many of the rules and conventions we've described above on how to protect privacy can't be translated to a public urban setting. For example:

  1. Unlike an online account, face ID is intricately linked to one's identity. One can't change one's face easily or become invisible. Unless you are Harry Potter or Arya Stark, there is no “incognito mode.” When your face is the “account,” tracking behaviour and tracking identities are one and the same.

  2. Boundaries of ownership in a city are blurry, making it difficult for people to know when they are being tracked or who is tracking them. Imagine getting off at King's Cross to get to work: you step off the train, onto the platform, walk through the station, cross the street, cross a privatised public courtyard, cut across the lobby of another building. Most people would not be aware that they have walked across six spaces owned by six different legal entities. It is harder still to imagine how a user could give “consent” upon entry to each of these spaces, or manage how data is shared across these owners.

A new language for regulators

These differences make designing and regulating data collection systems in cities even more challenging and complicated. Therefore, we cannot simply apply existing rules and standards around data protection in the web-based world onto cities.

Rather, completely new design paradigms need to be invented in order to ensure that privacy rights are protected. Regulation by policymakers at both the national and city level is now necessary; not to hamper the ability of private companies to innovate, but to encourage and ensure that innovation protects users.

Policy could, for example, impose these common design principles onto facial recognition technology:

  • Analogue must always remain the default option. Tracking should be something people “opt into” rather than “opt out” of. This would ensure people are aware of when they are being tracked, akin to the way we choose when to ‘sign in' online.

  • People should have choice over who to give data to. For instance, a person may trust the national train company to collect their data for seamless boarding, but not the privatised taxi company.

  • People should be able to turn that choice on and off, changing their mind up to the last minute.

An exception to these rules could be considered for national security reasons, but this would need to be debated and decided by public referendum.
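To make the principles above concrete, here is a minimal sketch of how they could translate into a consent model. The names (`ConsentRegistry`, the provider labels) are hypothetical illustrations, not part of any real system or proposal in this article: tracking is off by default, consent is granted per provider, and it can be revoked at any time.

```python
class ConsentRegistry:
    """A hypothetical per-person record of which providers may track them."""

    def __init__(self):
        # Principle 1: analogue is the default — no provider may track
        # until the person explicitly opts in.
        self._opted_in: set[str] = set()

    def opt_in(self, provider: str) -> None:
        # Principle 2: consent is granted per provider, not globally.
        self._opted_in.add(provider)

    def opt_out(self, provider: str) -> None:
        # Principle 3: consent is revocable up to the last minute.
        self._opted_in.discard(provider)

    def may_track(self, provider: str) -> bool:
        return provider in self._opted_in


registry = ConsentRegistry()
print(registry.may_track("national-rail"))  # False: analogue by default
registry.opt_in("national-rail")
print(registry.may_track("national-rail"))  # True only after explicit opt-in
print(registry.may_track("private-taxi"))   # False: consent is per provider
registry.opt_out("national-rail")
print(registry.may_track("national-rail"))  # False again after revocation
```

The point of the sketch is the asymmetry it encodes: the burden sits with the service provider to earn an explicit `opt_in`, mirroring the online act of signing in, rather than with the person to discover and escape tracking.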

As a society, we are a long way from resolving these design challenges for facial recognition in public spaces. There is potential for hardware, software, and policy solutions to program facial recognition so that it flips the current paradigm and enables new modes of interaction. City and national governments have a major role to play in ensuring that technology develops in a direction that returns control and benefits to people, putting the onus on service providers to deliver services attractive enough for users to opt in - as we do online every day.

Written by:

Zung Nguyen Vu Strategy Lead, Arup Digital Studio