Facial recognition technology: force for good or privacy threat?

Credit to Author: Christopher Boyd | Date: Mon, 12 Aug 2019 15:00:00 +0000

All across the world, governments and corporations are looking to invest in or develop facial recognition technology. From law enforcement to marketing campaigns, facial recognition is poised to make a splashy entrance into the mainstream. Biometrics are big business, and third-party contracts generate significant profits for all. However, those profits often come at the expense of users.

There’s much to be said for ethics, privacy, and legality in facial recognition tech—unfortunately, not much of it is pretty. We thought it was high time we took a hard look at this burgeoning field to see exactly what’s going on around the world, behind the scenes and at the forefront.

As it turns out…quite a lot.

The next big thing in tech?

Wherever you look, government bodies, law enforcement, protestors, campaigners, pressure and policy groups, and even the tech developers themselves are at odds. Some want an increase in biometric surveillance, while others highlight flaws caused by bias in programming.

One US city has banned facial tech outright, while some nations want to embrace it fully. Airport closed-circuit TV (CCTV)? Fighting crime with shoulder-mounted cams? How about just selling products in a shopping mall using facial tracking to find interested customers? It’s a non-stop battlefield with new lines being drawn in the sand 24/7.

Setting the scene: the 1960s

Facial recognition tech is not new. It was first conceptualised and worked on seriously in the mid ’60s by pioneers such as Helen Chan Wolf and Woodrow Bledsoe. Using RAND tablets, they mapped 20 distances based on facial coordinates, doing what they could to account for variances in imagery caused by degrees of head rotation. From there, a name was assigned to each image. The computer then tried to remove the effect of head angle from the distances it had already calculated, and recognise the correct individual placed before it.
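To make that pipeline concrete, here’s a minimal Python sketch of the general idea: normalised distances between hand-marked landmarks, matched by nearest neighbour. The landmark names, coordinates, and the simple scale normalisation are hypothetical stand-ins for illustration, not Bledsoe’s actual data or method.

    import math

    # Hypothetical hand-marked landmark coordinates (x, y), standing in
    # for the ~20 points digitised with a RAND tablet in the original work.
    GALLERY = {
        "subject_a": {"left_eye": (30, 40), "right_eye": (70, 40),
                      "nose_tip": (50, 62), "mouth": (50, 82)},
        "subject_b": {"left_eye": (28, 42), "right_eye": (74, 42),
                      "nose_tip": (51, 58), "mouth": (51, 76)},
    }

    PAIRS = [("left_eye", "right_eye"), ("left_eye", "nose_tip"),
             ("right_eye", "nose_tip"), ("nose_tip", "mouth")]

    def features(lm):
        """Reduce a face to normalised distances between landmark pairs.
        Dividing by the inter-eye distance crudely cancels overall scale,
        a rough stand-in for the pose corrections applied in the ’60s work."""
        d = [math.dist(lm[a], lm[b]) for a, b in PAIRS]
        return [x / d[0] for x in d]

    def identify(probe):
        """Nearest-neighbour match: return the gallery name whose
        feature vector is closest to the probe's."""
        f = features(probe)
        def err(name):
            return sum((a - b) ** 2
                       for a, b in zip(f, features(GALLERY[name])))
        return min(GALLERY, key=err)

    # A probe photo of subject_a captured at twice the scale.
    probe = {"left_eye": (60, 80), "right_eye": (140, 80),
             "nose_tip": (100, 124), "mouth": (100, 164)}
    print(identify(probe))  # -> subject_a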

Work continued throughout the ’60s, and was by all accounts successful. The computers used consistently outperformed humans where recognition tasks were concerned.

Moving on: the 1990s

By the mid-to-late ’90s, airports, banks, and government buildings were making use of tech essentially built on that original premise. A new tool, ZN-face, was designed to work with less-than-ideal angles of faces. It ignored obstructions, such as beards and glasses, to accurately determine the identity of the person in front of the lens. Previously, this type of technology could flounder without clear, unobstructed shots, which made it difficult for software operators to determine someone’s identity. ZN-face could determine whether it had a match in 13 seconds.

You can see a good rundown of these and other notable moments in early facial recognition development on this timeline. It runs from the ’60s right up to the mid ’90s.

The here and now

Looking at the global picture for a snapshot of current facial recognition tech reveals…well, chaos to be honest. Several distinct flavours inhabit various regions. In the UK, law enforcement rallies the banners for endless automated facial recognition trials. This despite test results so bad the universal response from researchers and even Members of Parliament is essentially “please stop.”

Reception in the United States is a little frostier. Corporations jostle for contracts, and individual cities either accept or totally reject what’s on offer. As for Asia, Hong Kong experiences something akin to actual dystopian cyberpunk. Protestors not only evade facial recognition tech but attempt to turn it back on the government.

Let’s begin with British police efforts to convince everyone that seemingly faulty tech is as good as they claim.

All around the world: The UK

The UK is no stranger to biometrics controversy, having made occasional forays into privacy breaches and stolen personal information. A region averse to identity cards and national databases, it still makes use of biometrics in other ways.

Here’s an example of a small slice of everyday biometric activity in the UK. Non-European residents pay for Biometric Residence Permits every visa renewal—typically every 30 months. Those cards contain biometric information alongside a photograph, visa conditions, and other pertinent information linked to several Home Office databases.

This Freedom of Information request reveals that information on one Biometric Residence Permit card is tied to four separate databases:

  • Immigration and Asylum Biometric System (Combined fingerprint and facial image database)
  • Her Majesty’s Passport Office Passports Main Index (Facial image only database)
  • Caseworking Immigration Database Image Store (Facial image only database)
  • Biometric Residence Permit document store (Combined fingerprint and facial image database)

It’s worth noting that these are just the ones they’re able to share. On top of this, the UK’s Data Protection Act contains an exemption that prevents immigrants from accessing their data, or from preventing others from processing it, as is their right under the General Data Protection Regulation (GDPR). In practice, this results in a two-tier system for personal data, and it means people can’t access their own case histories when challenging what they feel to be a bad visa decision.

UK: Some very testing trials

It is against this volatile backdrop that the UK government wants to introduce facial recognition to the wider public, and residents with biometric cards would almost certainly be the first to feel any impact or fallout should a scheme get out of hand.

British law enforcement have been trialling the technology for quite some time now, but with one problem: All the independent reports claim what’s been taking place is a bit of a disaster.

Big Brother Watch has conducted extensive research into the various trials, and found that an astonishing 98 percent of automated facial recognition matches at 2018’s Notting Hill Carnival were misidentifications, with innocent people incorrectly flagged as criminals. Faring slightly (but not much) better than the Metropolitan Police were the South Wales Police, who managed to get it wrong 91 percent of the time—yet, just like other regions, continue to promote and roll out the technology. On top of that, no fewer than 2,451 people had their biometric photos taken and stored without their knowledge.

Those are some amazing numbers, and indeed the running theme here appears to be: “This doesn’t work very well and we’re not getting any better at it.”

Researchers at the University of Essex Human Rights Centre essentially tore the recent trials to pieces in a comprehensive rundown of the technology’s current failings.

  • Across six trials, 42 matches were made by the Live Facial Recognition (LFR) technology, but only eight of those were considered a definite match (the arithmetic behind that hit rate is sketched just after this list).
  • Approaching the tests as if the LFR tech was simply some sort of CCTV device didn’t account for its invasive-by-design nature, or indeed the presence of biometrics and long-term storage without clear disclosure.
  • An absence of clear guidance for the public, combined with a general police assumption that the tech is legal despite the lack of explicit provision for it in current law, leaves researchers thinking its use would indeed be found unlawful in the courts.
  • The public might understandably be confounded: if someone didn’t want to be included in the trial, law enforcement assumed the person avoiding the technology might be a suspect. There’s no better example of this than the man who covered his face to avoid the LFR cameras and, because officers felt he was up to no good, ended up fined £90 (US$115) for “disorderly behaviour.”
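As flagged in the first bullet, the arithmetic behind those match figures is simple. Here’s a quick sketch using only the numbers quoted in this section:

    # Precision of the trials reviewed by the Essex researchers:
    # 42 LFR matches across six trials, of which only 8 were definite.
    matches, definite = 42, 8
    precision = definite / matches
    print(f"Definite matches: {precision:.0%}")           # ~19% of what was flagged
    print(f"Questionable or wrong: {1 - precision:.0%}")  # ~81%

    # Big Brother Watch's Notting Hill figure expresses the same idea:
    # 98 percent of flagged "matches" were misidentifications.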

https://www.youtube.com/watch?v=KqFyBpcbH9A

A damning verdict

The UK’s Science and Technology Committee (made up of MPs and Lords) recently produced their own findings on the trials, and the results were pretty hard hitting. Some highlights from the report, somewhat boringly called “The work of the Biometrics Commissioner and the Forensic Science Regulator” (PDF):

  • Concerns were raised that UK law enforcement is either unaware of, or “struggling to comply” with, a 2012 High Court ruling that the indefinite retention of innocent people’s custody images was unlawful—yet the practice still continues. Those concerns are exacerbated when considering those images would potentially be included in image matching watchlists for any LFR technology making use of custodial images. There is, seemingly, no money available for investing in the manual review and deletion of said images. There are currently some 21 million images of faces and tattoos on record, which will make for a gargantuan task. [Page 3]
  • From page 4, probably the biggest hammer blow for the trials: “We call on the Government to issue a moratorium on the current use of facial recognition technology and no further trials should take place until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established.”
  • The Forensic Science Regulator isn’t on the prescribed lists it needs to be on with regard to whistleblowing, so whistleblowers in (say) the LFR sector wouldn’t be as protected by legislation as they would be in others. [Page 10]

There’s a lot more in there to digest but essentially, we have a situation where facial recognition technology is failing any and all available tests. We have academics, protest groups, and even MP committees opposing the trials, saying “The error rate is nearly 100 percent” and “We need to stop these trials.” We have a massive collection of images, many of which need to be purged instead of being fed into LFR testing. And to add insult to injury, there’s seemingly little scope for whistleblowers to call time on bad behaviour for technology potentially deployed to a nation’s police force by the government.

UKGOV: Keep on keeping on

This sounds like quite the recipe for disaster, yet nobody appears to be listening. Law enforcement insists human checks and balances will help address those appalling trial numbers, but so far it doesn’t appear to have helped much. The Home Office claims there is public support for the use of LFR to combat terrorism and other crimes, but will “support an open debate” on uses of the technology. What form this debate takes remains to be seen.

All around the world: the United States

The US experience with facial recognition tech is fast becoming a commercial one, as big players hope to roll out their custom-made systems to the masses. However, many of the same concerns that haunt UK operations are present here as well: a lack of oversight and ethics, the technology’s failure rate, and bias against marginalised groups are all pressing concerns.

Corporate concerns

Amazon, potentially one of the biggest players in this space, has its own custom tech called Rekognition. It’s being licensed to businesses and law enforcement, and it’s entirely possible someone may have already experienced it without knowing. The American Civil Liberties Union weren’t exactly thrilled about this prospect, and said as much.

Amazon’s plans to roll out its custom tech to law enforcement, and ICE specifically, were met with pushback from multiple groups, including the company’s own employees. As with many objections to facial recognition technology, the concerns centred on human rights. From the open letter:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used.”

Even some shareholders have cold feet over the potential uses for this powerful AI-powered recognition system. However, the best response you’ll probably find from Amazon to some of these concerns is a February blog post called “Some thoughts on facial recognition legislation.”

And in the blue corner

Not everyone in US commercial tech is fully on board with facial technology, and it’s interesting to see how some of the other tech giants have responded to working in this field. In April, Microsoft revealed they’d refused to sell facial tech to Californian law enforcement. According to that article, Google flat out refused to sell it to law enforcement too, though they do have other AI-related deals that have caused backlash.

The overwhelming concerns were (again) anchored in possible civil rights abuses. The already high error rates in LFR, combined with potential gender and racial bias, also played a part.

From city to city, the battle rages on

In a somewhat novel turn of events, San Francisco became the first US city to ban facial recognition technology entirely. Police, transport authorities, and anyone else who wishes to make use of it will need approval from city administrators. Elsewhere, Orlando passed on Amazon’s Rekognition tech after some 15 months of—you guessed it—glitches and technical problems. Apparently, things were so problematic that the city never reached a point where it was able to test images.

Over in Brooklyn, NY, the pressure has started to bear down on facial tech at a much smaller, more niche level. The No Biometric Barriers to Housing Act wants to:

…prohibit the use of biometric recognition technology in certain federally assisted dwelling units, and for other purposes.

This is a striking development. A growing number of landlords and building owners are inserting IoT/smart technology into people’s homes, whether residents want it or not, and regardless of how secure it may or may not be.

While I accept I may be sounding like a broken record, these concerns are valid. Perhaps, just perhaps, privacy isn’t quite as dead as some would like to think. Error rates, technical glitches, and the exploitation of certain communities as guinea pigs for emerging technology are all listed as reasons for the great United States LFR pushback of 2019.

All around the world: China

China is already a place deeply wedded to multiple tracking/surveillance systems.

There are 170 million CCTV cameras currently in China, with plans to add an additional 400 million between 2018 and 2021. This system is intended to be matched with facial recognition technology tied to multiple daily activities—everything from getting toilet roll in a public restroom to opening doors. Looping it all together will be 190 million identity cards, with an intended facial recognition accuracy rate of 90 percent.
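To see why even a 90 percent target raises eyebrows at this scale, a quick back-of-the-envelope sketch helps. The daily scan count below is a hypothetical assumption for illustration, and real systems report separate false-match and false-non-match rates rather than a single accuracy figure:

    # Back-of-the-envelope: a "90 percent accurate" system still errs
    # 10 percent of the time. (Treating the remaining 10% as a flat
    # per-identification error rate is a deliberate simplification.)
    accuracy = 0.90
    daily_scans = 10_000_000  # hypothetical identifications per day

    wrong_results = daily_scans * (1 - accuracy)
    print(f"{wrong_results:,.0f} incorrect results per day")  # 1,000,000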

https://www.youtube.com/watch?v=lH2gMNrUuEY

People are also attempting to use “hyper realistic face molds” to bypass biometric authentication payment systems. There’s certainly no end of innovation taking place from both government and the population at large.


Hong Kong

Hong Kong has already experienced a few run-ins with biometrics and facial technology, but mostly for promotional/marketing purposes. For example, in 2015, a campaign designed to raise awareness of littering across the region made use of DNA and technology produced in the US to shame litterbugs. Taking samples from rubbish found in the streets, they extracted DNA and produced facial reconstructions. Those face mockups were placed on billboards across Hong Kong in high traffic areas and places where the litter was originally recovered.

Mileage will vary drastically on how accurate these images were because, as has been noted, “DNA alone can only produce a high probability of what someone looks like,” and the idea was to generate debate, not point fingers.

All the same, wind forward a few years and the tech is being used to dispense toilet paper and shame jaywalkers. More seriously, we’re faced with daily protests in Hong Kong over the proposed extradition bill. With the ability to protest safely at the forefront of people’s minds, facial recognition technology steps up to the plate. Sadly, all it manages to achieve is to make the whole process even more fraught than it already is.

Protestors cover their faces, and phone owners disable facial recognition login technology. Police remove identification badges, so people on Telegram channels share personal information about officers and their families. Riot police carry cameras on poles because wall-mounted devices are hampered by laser pens and spray paint.


Rules and (bending) regulations

Hong Kong itself has a strict set of rules for Automatic Facial Recognition. One protestor attempted to build a home-brew facial recognition system using online photos of police officers. The project was eventually shelved for lack of time, but the escalation to recognition tech development by a regular resident is remarkable.

This may all sound a little bit out there or over the top. Even so, with 1,000 rounds of tear gas fired alongside hundreds of rubber bullets, protestors aren’t taking chances. For now, we’re getting a bird’s-eye view of what it would look like if LFR were placed front-and-center in a battle between government oversight and civil rights. Whether it tips the balance one way or the other remains to be seen.

Watching…and waiting

Slow, relentless legal rumblings in the UK are one thing. Cities embracing or rejecting the technology in the US are quite another—especially when stances range from organizations and citywide policies all the way down to the housing level. On the opposite side of the spectrum, seeing LFR in the Hong Kong protests is an alarming insight into where biometrics and facial recognition could lead if concerns aren’t addressed head on before implementation.

It seems technology, as it so often does, has raced far ahead of our ability to define its ethical use.

The question is: How do we catch up?
