Researchers release free AI-powered Fawkes image privacy tool for 'cloaking' faces


Researchers with the University of Chicago's SAND Lab have detailed the development of a new tool called Fawkes that subtly alters images in a way that makes them unusable for facial recognition.

The tool arrives amid growing privacy concerns and follows a New York Times report detailing the secret scraping of billions of online images to create facial recognition models.

Put simply, Fawkes is a cloaking tool that modifies images in ways imperceptible to the human eye. The idea is that anyone can download the tool, which has been made publicly available, and cloak their images before posting them online. The name was inspired by Guy Fawkes, whose mask was popularized by the film V for Vendetta.

The Fawkes algorithm doesn't prevent a facial recognition system from analyzing a face in a digital image -- instead, it teaches the model a 'highly distorted version' of what that person's face looks like without triggering errors, and the cloaking, the researchers say, cannot be 'easily detected' by the machines, either.

Feeding these cloaked images to a facial recognition system subtly disrupts its attempt to learn that person's face, leaving it less capable of identifying them when presented with uncloaked imagery. The researchers claim their cloaking algorithm is '100% effective' against top-tier facial recognition models, including Amazon Rekognition and Microsoft Azure Face API.
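For a sense of why that matters in practice, here is a minimal, purely illustrative sketch (in Python; it is not the authors' code or any vendor's API): recognition systems typically match a new photo against enrolled feature 'templates,' and if a person's template was learned from cloaked photos, an uncloaked photo of them no longer lands near it. The embedding values below are made up for illustration.

```python
# Illustrative sketch (not the Fawkes code): recognition pipelines typically match a
# probe photo against enrolled templates in feature space. The vectors below stand in
# for embeddings produced by some face-embedding model; the values are made up.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_embedding, gallery):
    """Return the enrolled identity whose template is most similar to the probe."""
    return max(gallery, key=lambda name: cosine_similarity(probe_embedding, gallery[name]))

# Templates the scraper learned. Alice's was built from *cloaked* photos, so it sits
# in a shifted region of feature space.
gallery = {
    "alice_from_cloaked_photos": np.array([0.9, 0.1, 0.0]),
    "bob":                       np.array([0.1, 0.9, 0.0]),
}

uncloaked_alice_probe = np.array([0.1, 0.2, 0.95])  # the real Alice, seen later
print(identify(uncloaked_alice_probe, gallery))     # does not resolve to Alice's template
```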

As well, the team says their disruption algorithm has been 'proven effective' in many environments through extensive testing. The use of such technology would also be far more subtle, and more difficult for authorities to prevent, than more conspicuous countermeasures like face painting, IR-equipped glasses, distortion-causing patches or manual manipulation of one's own images.

These conspicuous methods are known as 'evasion attacks,' whereas Fawkes and similar tools are referred to as 'poisoning attacks.' As the name implies, the method 'poisons' the training data itself so that it 'attacks' the deep learning models that attempt to utilize it, causing more widespread disruption to the overall model.

The researchers note that Fawkes is more sophisticated than a mere label attack, saying the goal of their utility is 'to mislead rather than frustrate.' Whereas a simple corruption of the data in an image could make it possible for companies to detect and remove those images from their training set, cloaked images imperceptibly 'poison' the model in a way that can't easily be detected or removed.

As a result, the facial recognition model loses accuracy fairly quickly, and its ability to recognize that person in other images and in real-time observation drops to a low level.


How does Fawkes achieve this? The researchers explain:

'DNN models are trained to identify and extract (often hidden) features in input data and use them to perform classification. Yet their ability to identify features is easily disrupted by data poisoning attacks during model training, where small perturbations on training data with a particular label can shift the model’s view of what features uniquely identify . . .

But how do we determine what perturbations (we call them “cloaks”) to apply to [fictional example] Alice’s photos? An effective cloak would teach a face recognition model to associate Alice with erroneous features that are quite different from real features defining Alice. Intuitively, the more dissimilar or distinct these erroneous features are from the real Alice, the less likely the model will be able to recognize the real Alice.'
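A rough sketch of that idea follows; it is not the researchers' implementation. It assumes a differentiable face feature extractor, one of Alice's photos as a tensor, and the precomputed embedding of a deliberately dissimilar 'target' identity, then optimizes a small perturbation that drags the photo toward that target in feature space. The published method constrains perceptual change with a structural-dissimilarity budget; for brevity this sketch uses a simple per-pixel limit.

```python
# Rough sketch of cloak optimization, not the authors' code. `feature_extractor` is any
# differentiable face-embedding network (torch.nn.Module), `alice_img` is a photo tensor
# in [0, 1], and `target_feat` is the embedding of a dissimilar target identity.
import torch

def compute_cloak(alice_img, target_feat, feature_extractor,
                  budget=0.03, steps=100, lr=0.01):
    """Find a small perturbation that drags the image's features toward the target's."""
    delta = torch.zeros_like(alice_img, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (alice_img + delta).clamp(0.0, 1.0)
        feat = feature_extractor(cloaked)
        # Push the cloaked image's features toward the dissimilar target identity.
        loss = torch.nn.functional.mse_loss(feat, target_feat)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the change visually small by projecting back into the pixel budget.
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return (alice_img + delta).detach().clamp(0.0, 1.0)
```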

The goal is to discourage companies from scraping digital images from the Internet without permission and using them to build facial recognition models of people without their knowledge, a huge privacy issue that has prompted calls for stronger regulations, among other things. The researchers point specifically to the aforementioned NYT article, which details the work of a company called Clearview.ai.

According to the report, Clearview has scraped more than three billion images from a variety of online sources, including everything from the financial app Venmo to obvious platforms like Facebook and less obvious ones like YouTube. The images are used to create facial recognition models for millions of people who are unaware of their inclusion in the system. The system is then sold to government agencies, which can use it to identify people in videos and images.

Many experts have criticized Clearview.ai for its impact on privacy and its apparent facilitation of a future in which the average person can be readily identified by anyone with the means to pay for access. Quite obviously, such tools could be used by oppressive governments to identify and target specific individuals, as well as for more insidious purposes like the constant surveillance of a population.

By using a method like Fawkes, individuals with only basic tech skills gain the ability to 'poison' unauthorized facial recognition models trained specifically to recognize them. The researchers note, however, that such technologies have limitations that can make it tricky to poison these systems sufficiently.

One of these images has been cloaked using the Fawkes tool.

For example, a person may be able to cloak the images they share of themselves online, but they may find it difficult to control images of themselves posted by others. Images posted by known associates like friends may make it possible for these companies to train their models, though it's unclear whether they can quickly and automatically locate a given person in third-party images, for training purposes, at mass scale.

Any entity able to gather enough images of the target could train a model well enough that the minority of cloaked images fed into it cannot substantially lower its accuracy. Individuals can attempt to mitigate this by sharing more cloaked images of themselves in identifiable ways and by taking other steps to reduce their uncloaked presence online, such as removing name tags from images, invoking 'right to be forgotten' laws and simply asking friends and family to refrain from sharing images of them online.

Another limitation is that Fawkes -- which has been made available as a free download for Linux, macOS and Windows -- only works on still images. This means it cannot cloak videos, which can be downloaded and parsed into individual frames. Those frames could then be fed into a training model to help it learn to identify that person, something that becomes increasingly feasible as consumer-tier camera technology offers widespread access to high-resolution, high-quality video recording.
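To illustrate how little stands between a video and a pile of training stills, here is a minimal sketch (assuming OpenCV is installed; the file name and sampling rate are illustrative) that splits a clip into individual frames, each of which could be fed to a trainer or, in principle, run one at a time through a still-image cloaking tool.

```python
# Minimal sketch using OpenCV; the file name and sampling rate are illustrative.
import os
import cv2

os.makedirs("frames", exist_ok=True)
capture = cv2.VideoCapture("home_video.mp4")

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break  # end of the clip (or unreadable file)
    # Keep roughly one frame per second for a 30 fps clip.
    if frame_index % 30 == 0:
        cv2.imwrite(os.path.join("frames", f"frame_{frame_index:06d}.jpg"), frame)
    frame_index += 1

capture.release()
```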

Despite this limitation, Fawkes remains an excellent tool for the public, enabling the average person with access to a computer and the ability to click a couple of buttons to take more control over their privacy.

A full PDF of the Fawkes image-cloaking study can be found on the SAND Lab website.
