Facial recognition is a technology used to identify a person from a photograph or a video. It generally works by using biometrics to map a person's facial features and then comparing those features with images in a database. Its use in everyday life is expanding, and we will soon see it in action at airports, schools, universities, restricted areas, and building entrances: in all spheres from marketing to surveillance. Given such an important role, it is crucial that the technology performs accurately. Unfortunately, the reality is that error rates rise sharply as skin tone gets darker.
How accurate is facial recognition exactly?
One of the most reliable tasks for facial recognition technology is classifying gender: it is about 99% accurate when the person in the photo being examined is a white man. However, once we test it across different skin tones and genders, accuracy drops sharply. In one set of photos of lighter-skinned females, gender was misidentified in up to 7% of the photos, which still leaves the software 93% accurate. But for darker-skinned males, the error rate rose to 12%, and for darker-skinned females it reached as much as 35%. It seems as if real-life stereotypes are transferring onto artificial intelligence, which is a serious issue in a world where AI is becoming more capable every day.
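One way to surface this kind of disparity is to compute the error rate separately for each demographic group rather than reporting a single overall accuracy. Here is a minimal sketch of such an audit; the counts are hypothetical, chosen only to mirror the error rates cited above, not drawn from any real dataset.

```python
# Hypothetical per-group audit of a gender classifier.
# In a real audit, `errors` and `total` would come from tallying the
# classifier's predictions against ground-truth labels for each group.

def error_rate(errors, total):
    """Fraction of misclassified photos in a group."""
    return errors / total

# Illustrative counts matching the rates cited in the text
groups = {
    "lighter-skinned males":   (1, 100),   # ~1% error
    "lighter-skinned females": (7, 100),   # ~7% error
    "darker-skinned males":    (12, 100),  # ~12% error
    "darker-skinned females":  (35, 100),  # ~35% error
}

for group, (errors, total) in groups.items():
    print(f"{group}: {error_rate(errors, total):.0%} error")
```

A single aggregate accuracy number would hide exactly the gap this breakdown reveals, which is why disaggregated reporting has become standard practice in fairness audits.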
Why is facial recognition technology biased?
The reason this technology is highly accurate for white males and far less accurate for darker-skinned females is quite simple: lighter-skinned men and women dominate the training data. One facial recognition dataset, in particular, was found to be more than 80% white and more than 75% male. As long as white males make up the majority of the data, the system will continue to be inaccurate for everyone else.
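Checking for this kind of skew is straightforward once a dataset carries demographic annotations: count each label's share of the whole. The sketch below uses toy labels chosen to match the proportions described above; a real audit would read the annotations from the dataset itself.

```python
# Sketch of a training-set demographic audit using hypothetical labels.
from collections import Counter

def composition(labels):
    """Return each label's share of the dataset as a fraction."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Toy dataset skewed like the system described above:
# more than 80% white, more than 75% male
skin = ["white"] * 82 + ["darker"] * 18
gender = ["male"] * 77 + ["female"] * 23

print(composition(skin))    # {'white': 0.82, 'darker': 0.18}
print(composition(gender))  # {'male': 0.77, 'female': 0.23}
```

Running a check like this before training makes the imbalance visible early, when it can still be corrected by collecting or reweighting data.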
What does this look like in real life?
Imagine a situation where facial recognition software is used by a bank to determine who is eligible for a loan. The system first learns from the files of past loan applicants and uses them to make its decisions. If this is a region where women are more engaged in domestic work than in business, the system will conclude that men are more likely to pay off a loan and will reject women applicants. Basing decisions on the proportion of men and women in a database is not a sound decision-making process: it discriminates and takes no account of individual capacity.
How can we make facial recognition technology less discriminatory?
One approach that could be built into this software is 'value sensitive design', which aims to take human values into account during development. Freedom, equality, and benevolence are among the principles it uses as guidelines to determine which values matter. Moreover, there is a growing field of algorithmic accountability, whose advocates aim to make these kinds of automated decisions more transparent and fair.