How Facebook’s New AI is Thwarting Facial Recognition Programs

Sentiment on facial recognition varies by region. In Australia, for example, more than two-thirds of the population supports it, particularly when it's used by law enforcement. In the UK, about 70% concur, while little more than half of people in the US agree. Of course, these statistics relate specifically to law-enforcement use of the tech. Approval plummets for virtually all other applications, especially where there's no clear benefit to the general public, such as marketing or screening job candidates.

The challenge is that the technology has spread so fast that nearly any company can wield it, raising serious privacy concerns. Facebook, however, aims to do something about it.

Computers Use Facial Landmarks to Identify a Person

Unlike humans, who can identify a face at a glance, software has to establish uniqueness mathematically. Recognition systems map roughly 80 landmarks, also called nodal points, on each face; measuring the distances between them yields a distinctive faceprint, or code. For example, the distance between the eyes is one measurement, while the width of the cheekbones and the length of the jawline are two more.
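To make that concrete, here's a minimal sketch of the idea in Python. It assumes the landmarks are already available as (x, y) pixel coordinates; real systems extract them with a detector and increasingly use learned embeddings rather than raw distances, so the `faceprint` encoding and the matching tolerance below are illustrative, not any vendor's actual algorithm.

```python
import numpy as np

def faceprint(landmarks: np.ndarray) -> np.ndarray:
    """Encode a face as the normalized pairwise distances between
    its nodal points.

    landmarks: array of shape (n, 2) holding (x, y) positions of
    points such as eye corners, cheekbone edges, and jawline points.
    """
    # Euclidean distance from every landmark to every other landmark.
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Keep each pair once (upper triangle, excluding the diagonal).
    i, j = np.triu_indices(len(landmarks), k=1)
    vec = dists[i, j]
    # Normalize by the largest distance so the print doesn't depend
    # on how close the subject stood to the camera.
    return vec / vec.max()

def same_person(a, b, tol=0.02):
    """Two faces match if their faceprints are close
    (tol is a hypothetical matcher threshold)."""
    return np.linalg.norm(faceprint(a) - faceprint(b)) < tol

rng = np.random.default_rng(0)
face = rng.uniform(0, 100, size=(80, 2))   # 80 nodal points, in pixels
print(same_person(face, face))             # True: identical faceprints
```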

Facial Recognition Software Can Be Exploited

In many cases, the technology is beneficial. Facebook, for example, uses it to identify faces in uploaded photos, making it easier for the poster to tag the people in them. The company has also rolled out privacy measures that warn people when a photo of them is uploaded and let them opt out of recognition altogether.

However, systems can be fooled. In the early days of facial recognition, police in Tampa, Florida, installed cameras and leveraged the tech in an effort to reduce crime; people simply wore masks, foiling the pricey system in the most low-tech way possible. Researchers at Carnegie Mellon University took it a step further, creating specialized glasses that exploit the very way computers read faces: by making the landmarks appear to be in different places. These "perturbed eyeglasses" make it possible to impersonate someone else and worked roughly 92-100% of the time in the researchers' tests.
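The attack is easier to grasp with a toy example. The sketch below uses a simple linear scorer as a stand-in for a real recognizer (the CMU work attacked deep networks and optimized perturbations that could actually be printed onto frames); the mask region, step size, and identity count are all invented for illustration. What it preserves is the core trick: gradient ascent on the target identity's score, confined to the pixels where the glasses sit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a recognizer: one linear scorer per identity.
# The predicted identity is the argmax of scores = W @ pixels.
n_pixels, n_ids = 64 * 64, 10
W = rng.normal(size=(n_ids, n_pixels))
face = rng.normal(size=n_pixels)           # the attacker's own face

# Confine the perturbation to a hypothetical eyeglass-frame region,
# mirroring how the printed frames cover only part of the face.
glasses_mask = np.zeros(n_pixels)
glasses_mask[1000:1400] = 1.0

target = 7                                 # identity to impersonate
x = face.copy()
for _ in range(100):
    # For a linear model, the gradient of the target's score w.r.t.
    # the pixels is simply W[target]; ascend it inside the mask only.
    x += 0.1 * np.sign(W[target]) * glasses_mask
    x = np.clip(x, -3.0, 3.0)              # keep pixel values printable

print("recognized as identity", int(np.argmax(W @ x)))   # -> 7
```

The attacker's face is untouched everywhere except the "frame" pixels, yet the recognizer now reports the target identity: the same principle the perturbed eyeglasses rely on.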

Since then, many ways to impersonate others via facial recognition software have emerged, and the unfortunate consequence is that these techniques can be leveraged for nefarious purposes. For example, an innocent person can be falsely placed at the scene of a crime, or "deepfake" videos can be created that depict an individual saying or doing virtually anything.

Facebook is Working on De-Identification Tech

Carnegie Mellon researchers also worked with a technique called dodging, which distorts a person's facial landmarks just enough to prevent recognition, yet in a way that isn't detectable by the human eye. A dodged face may simply go unrecognized or be mistaken for someone else, and in the researchers' tests the technique worked 100% of the time.
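Conceptually, dodging needs only a tiny, well-placed change. The sketch below operates directly on landmark coordinates for brevity (real dodging attacks perturb image pixels, which in turn shifts where the detector finds the landmarks); the sub-pixel jitter and the matcher threshold are illustrative assumptions, reusing the faceprint encoding from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def faceprint(landmarks):
    """Normalized pairwise-distance encoding, as in the earlier sketch."""
    d = np.linalg.norm(landmarks[:, None] - landmarks[None, :], axis=-1)
    i, j = np.triu_indices(len(landmarks), k=1)
    vec = d[i, j]
    return vec / vec.max()

landmarks = rng.uniform(0, 100, size=(80, 2))   # nodal points, in pixels

# Dodging: nudge every landmark by less than one pixel -- far below
# what a human viewer would notice -- yet enough to push the faceprint
# past the matcher's tolerance.
jitter = rng.uniform(-0.9, 0.9, size=landmarks.shape)
dodged = landmarks + jitter

tol = 0.02   # hypothetical matcher threshold
shift = np.linalg.norm(faceprint(landmarks) - faceprint(dodged))
print(f"faceprint shift: {shift:.3f} -> match: {shift < tol}")
```

Because the faceprint aggregates thousands of pairwise distances, even sub-pixel nudges accumulate into a shift the matcher rejects, while the face itself looks unchanged.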

It's this method that Facebook is building on, and its team has worked out how to make it function in video, including live video streams. "Recent world events concerning the advances in, and abuse of face recognition technology invoke the need to understand methods that successfully deal with de-identification," the researchers say. "Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods."
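Facebook hasn't released its implementation, and its actual model is a learned network that preserves image quality. As a crude structural stand-in, the sketch below shows what "working on live video" entails: a frame-by-frame loop that finds the face region and rewrites it before display. Here a heavy Gaussian blur plays the role of the de-identifying transform; in the real work, that step would be the trained de-identification network.

```python
import cv2

# Classical Haar-cascade detector shipped with OpenCV; Facebook's
# system uses its own face pipeline, so this is just a stand-in.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # live webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        # De-identify the face region. A real system would run the
        # learned network here; blurring destroys identity cues but
        # not with the quality the researchers describe.
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 0)
    cv2.imshow("de-identified", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The hard part, and Facebook's claimed contribution, is doing this per frame without flicker or quality loss, so the de-identified stream still looks natural to a human viewer.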

Although it’s unclear how Facebook plans to use this tech, they’ve approached dodging or de-identification in a novel way that helps maintain the quality of the images while making it impossible for computers to identify the subject. Going forward, it could lead to further developments that can aid in privacy measures or prevent the creation of deepfakes.