Fawkes – An open-source project that aims to prevent unauthorized face recognition!

A research group in the United States has developed software that alters images just enough to defeat facial recognition. The work is aimed in particular at a company accused of performing facial recognition without consent.

Fawkes is designed to prevent face recognition

A research group at the SAND Lab of the University of Chicago has developed the algorithm behind the Fawkes software. Its task is to distort images of faces in such a way that a human cannot perceive any difference, while face recognition programs can no longer identify the faces.

To understand the algorithm in more detail, we first have to look at how face recognition works in general. The software searches for distinctive features that make up a face: a roughly round shape, two eyes, a mouth, and so on. Most faces share these characteristics, and it is exactly these features that the software looks for in a photo.

To do this, most programs go over the picture pixel by pixel and analyze how strongly adjacent pixels differ. In this way the program can detect patterns in the photo, such as the outline of a head.
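The pixel-by-pixel idea can be illustrated with a small sketch. This is not the code of any real face detector; it is a toy example showing the raw signal such programs build on, namely sharp differences between neighbouring pixel values:

```python
# Toy illustration (not an actual face detector): real detectors
# aggregate local pixel differences into features such as edges and
# contours. This function scans one grayscale image row and flags
# positions where adjacent pixels differ sharply.

def strong_transitions(row, threshold=50):
    """Return indices where adjacent pixel values differ by more than threshold."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

# A bright region (e.g. skin) next to a dark region (e.g. an eye):
scanline = [200, 198, 201, 40, 38, 42, 199, 202]
print(strong_transitions(scanline))  # -> [2, 5]: the two region boundaries
```

Real software applies this idea in two dimensions and at many scales, but the principle is the same: strong local contrast marks the edges from which patterns like eyes and head outlines are assembled.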

[Figure] Face recognition performed by the author's own software (Source: Instagram, AWARE7 GmbH)

To prevent programs from performing this face recognition, the research group developed an algorithm that minimally alters the original image at the pixel level. As a human, one sees no difference between the original and the processed image. The face recognition software, however, can no longer identify faces, because the pixels have been modified in such a way that the software effectively sees a strongly distorted image.

Research group against Clearview

As the Süddeutsche Zeitung reported, the Clearview AI program of the company Clearview knows billions of faces. The company is said to have built a gigantic "vacuum cleaner" that trawls public sources such as Facebook and Instagram: its software downloads countless images, runs facial recognition on them, and matches similar faces. This matching is done with the help of artificial intelligence (AI). AI has been a big topic in IT security for some time; as early as 2016, neural networks were able to recognize drawings.

Allegedly, several hundred authorities pay the company for access to this enormous database. The whole operation is highly questionable, because very little is known about Clearview itself: the addresses given on its homepage are incorrect (a typing error, according to Clearview), and there is no information about its employees or anything else about the company.

Another major point of criticism is that the company has violated the terms of use of various services, e.g. Facebook's. Commenting on this criticism, Clearview told the New York Times: "Many people do this."


To help prevent such unauthorized facial recognition, the Chicago-based research group developed the Fawkes program. Fawkes is open source and can be found on GitHub, among other places. Alongside the code, a technical paper on the program will be presented at the USENIX Security Symposium in mid-August.



