

Congressman or criminal? Teaching AI about race discrimination.

Creators of facial recognition software have neglected to factor in the full spectrum of race and gender – and the results are extremely problematic.

As a little black girl, I could find a doll that was a doctor, a firefighter or even a princess – but finding a doll that had the same skin color as me? A challenge. Now, the folks creating the doll du jour were probably not a group of racists, but they were probably mainly white, and so the dolls were too.

And something similar is happening today, this time in the world of technology. The talented people designing facial recognition AI are inadvertently modeling these systems on themselves, and the lack of representation in the technology field makes that a problem for many of us. Why? Statistically speaking, most tech professionals are white men. That isn't inherently a bad thing, but it points to wider gaps in the representation of people of color, women, and trans and gender non-conforming people. And the result of this imbalance is a real problem.

Don't you recognize me?

As with any endeavor, the products created by the technology industry are implicitly modeled on their creators. So, if you happen to be a cis-gendered white man, you'll breeze through the detection process of AI-powered facial recognition. If you're not, you'll run into problems. Men with long hair, the one in 14 women with hirsutism ('excessive' hair growth in places like the face and chest), non-binary people, people of color – all of these individuals stand to be mislabeled and falsely identified.

At the very least, being misidentified by facial recognition can feel like an assault on your identity. At its worst, the biases baked into facial recognition could see law-abiding citizens being unfairly detained.

With the help of Artificial Intelligence, the surveillance systems used in cities across the world can track and analyze faces in real time. In theory, this could result in safer cities by assisting policing. In practice, though, with the imperfect systems in circulation today, picking a wanted criminal out of a crowd amounts to little more than a guess.

To demonstrate this, Amazon's facial recognition software, Rekognition – already in use by US law enforcement – was put to the test by the ACLU (the American Civil Liberties Union). In the experiment, 28 members of Congress were falsely matched with mugshots of people who had been arrested, and the false matches fell disproportionately on members of color. Another study, by the University of Colorado Boulder, found that trans and non-binary people were frequently misidentified by several facial recognition platforms.

Defending its position, Amazon stated that the software should only be used at its recommended 99% confidence threshold. The ACLU's team, however, noted that they used the software's default setting of 80% confidence – and if the default setting produces such a high rate of inaccuracies, the technology needs an upgrade.
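To make the threshold debate concrete, here's a minimal sketch of how a confidence cutoff works. The scores and names below are invented for illustration – real systems return a similarity score per candidate match – but the mechanics show why the same results can yield three "matches" at 80% and none at 99%:

```python
# Hypothetical candidate matches returned by a facial recognition query.
# Each carries a confidence (similarity) score between 0 and 1.
candidate_matches = [
    {"name": "Person A", "confidence": 0.97},  # strong, but still imperfect
    {"name": "Person B", "confidence": 0.85},  # weak match
    {"name": "Person C", "confidence": 0.81},  # weak match
]

def matches_above(threshold, candidates):
    """Keep only candidates whose confidence meets the threshold."""
    return [c for c in candidates if c["confidence"] >= threshold]

# At an 80% default-style threshold, all three candidates count as
# "matches" -- each one a potential false identification.
print(len(matches_above(0.80, candidate_matches)))  # 3

# At a 99% threshold, none of them qualify.
print(len(matches_above(0.99, candidate_matches)))  # 0
```

The filter itself is trivial; the point is that the number of reported "criminals" in a crowd is a product of a tunable parameter, and whoever leaves it at the default decides how many innocent people get flagged.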

Face to face

It's clear there's work to be done on facial recognition – but what does that look like? Companies like AlgoFace are making the case that inclusivity is the way forward. Unlike many contemporaries in the field, they focus on technology that accounts for as many faces as possible. In a video demonstrating the tech working accurately on her dark skin tone, the company's CMO Atima Lui commented:

“Last week I worked from the AlgoFace HQ and demoed face #AR [augmented reality] that actually works on my dark skin tone! AR notoriously has a #RacialBias and #GenderBias problem, which means my face is often ‘not detected’ by technology like this.”


As the contours and features of her face were seamlessly located and tracked, we saw proof that inclusivity is more than just a possibility.

It's a choice – one which more companies should make. Patching the race and gender biases that've been cooked into these systems is a necessity – at this stage, inaction can no longer be excused as ignorance. Giving every person the opportunity to literally be seen and recognized as they are – this is a moral imperative.

The power of inclusivity

But there's more. Sufficiently inclusive facial recognition software could improve many scenarios. Searching for a wanted criminal using facial recognition-enabled surveillance could make the streets safer. Tracking faces at large sporting events could speed up admission and help identify troublemakers. Recognition-enabled cars could identify a thief the moment they come into camera view, sending a name, face and location to the police immediately. The possibilities go on and on – just imagine your face becoming your ticket, and all the instances where that would make things safer, faster, easier.

Creating facial recognition software that's fit for purpose, and that could be reliably used in countless industries, would give creators the competitive edge. After all, the wider the pool of consumers, the wider the application.

Though the situation today is deeply problematic, the potential of AI-driven facial recognition software is huge. While the industry is still young, it's vital that inclusivity becomes the status quo, rather than an afterthought. After all, there's great value in being seen.

I welcome you to share your experiences of being seen and recognized below. It could be a childhood friend spotting you in a crowd after years apart, or catching the sympathetic eye of a fellow mother while tending to an energetic child. Let's share our stories, so we can all become more visible.
