No More Filters: How AI Leaders Can Turn The Tide On Algorithmic Bias

By Phil Schraeder, CEO at global technology and media company GumGum

We humans stand in the shadow of a moral eclipse right now. We’ve taught machines to make complex cognitive decisions for us, and they’re doing so with all the frenzied zeal of a billionaire venture capitalist. AI is set to contribute up to $15.7 trillion to the global economy in the next decade, governing major choices in healthcare, law, education and employment. So what does it say when this powerhouse – the tech God in whom we trust – gets things wrong?

From racially biased facial recognition tools to sexist recruiting engines and homophobic APIs, algorithmic “fairness” is cut through with all the subconscious prejudices of the people who condition it. And for those of us at the eye of the storm – the designers, the coders, the CEOs – it’s not enough to simply stand by and wring our hands. If we’re not solving the problem, we’re part of it.

But how do we step up and take responsibility? Machine-learnt bigotry is a huge and systemic issue: there is no off-the-shelf fix. Instead, we at the forefront of AI need to attack it at both a micro and macro level, using the same human-machine tradeoff that stands at the heart of the issue. Here’s a four-step strategy for getting started on the road toward crucial change:

Continually mitigate bias

While some AI programs can counterbalance racism or sexism (e.g. by ignoring certain demographic information about candidates in automated recruitment), others reinforce it. Left unchecked, this discrimination snowballs. Not only that, but given how quickly AI evolves, the data behind it can mutate unexpectedly, and often in ways that technologists aren’t aware of. If discriminatory behavior can be hard to spot in real life, the problem is amplified tenfold by machine learning.

The good news is that this can change. Just as the human brain has elements of plasticity, so too are these biases malleable: machines can learn and relearn to correct for any ingrained prejudice.

At GumGum, we specialize in using machine learning to achieve contextual intelligence, allowing our systems to analyze huge volumes of web pages and understand the text, images and videos in the same way a human does. We then use this capability to serve contextual advertising and identify content that could be damaging to a brand.

Our product teams are involved in an ongoing effort to weed out stereotypes and discriminatory attitudes from our programming. Part of this work happens at a granular level, in the ground-truth training data that feeds our AI models. For example, we found that an initial dataset we outsourced around the threat class “arrest” was overwhelmingly skewed toward images of African Americans. So our Computer Vision team had to go back and manually replace those images to avoid a false association that would infect our image recognition model.
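To make that kind of audit concrete, here is a minimal sketch in Python of how a team might surface such skews. It is illustrative rather than our production tooling: the CSV manifest, its column names and the 60% dominance threshold are hypothetical placeholders.

```python
from collections import Counter
import csv

def audit_label_skew(manifest_path, label_col, group_col, threshold=0.6):
    """Flag labels whose training examples are dominated by one annotated group."""
    counts = {}  # label -> Counter of annotated groups
    with open(manifest_path, newline="") as f:
        for row in csv.DictReader(f):
            counts.setdefault(row[label_col], Counter())[row[group_col]] += 1

    flagged = {}
    for label, groups in counts.items():
        total = sum(groups.values())
        group, n = groups.most_common(1)[0]
        if total and n / total > threshold:
            flagged[label] = (group, round(n / total, 2))
    return flagged

# Hypothetical usage: surface classes such as "arrest" where a single annotated
# group accounts for more than 60% of the images.
# print(audit_label_skew("training_manifest.csv", "label", "annotated_group"))
```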

Question rather than trust

GumGum developers also use a technique called “counterfactual fairness”: for example, we’ll swap gender pronouns in any given dataset and train models against those examples as well. This helps ensure that our Natural Language Processing tools base their judgments on the structure of the language rather than solely on gender pronouns.
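In spirit, the augmentation step works something like the sketch below. This is a simplified illustration, not our actual pipeline: the pronoun map is deliberately naive (possessive “her” and “his” are ambiguous, so a real system would consult part-of-speech tags), and the function names are invented for this example.

```python
import re

# Naive map used to create a counterfactual copy of each training example.
PRONOUN_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "hers", "hers": "his",
    "himself": "herself", "herself": "himself",
}
PRONOUN_PATTERN = re.compile(r"\b(" + "|".join(PRONOUN_SWAPS) + r")\b", re.IGNORECASE)

def swap_pronouns(text: str) -> str:
    """Return a counterfactual copy of `text` with gender pronouns flipped."""
    def repl(match):
        word = match.group(0)
        swapped = PRONOUN_SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PRONOUN_PATTERN.sub(repl, text)

def augment_with_counterfactuals(examples):
    """Pair each (text, label) example with its pronoun-swapped twin, so a
    model trains on both and cannot lean on gender pronouns alone."""
    augmented = []
    for text, label in examples:
        augmented.append((text, label))
        augmented.append((swap_pronouns(text), label))
    return augmented

# e.g. augment_with_counterfactuals([("She is a decisive leader.", "positive")])
```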

As well as tackling bias inherent in our AI machinery, we can also use the same mechanisms to flag covert bias within the content of our media partners. For example, in a recent hackathon, our team leveraged offerings such as Named Entity Recognition and Content Classification to build a trial system that evaluated gender representation in different content verticals.

In one case, we were able to identify a clear discrepancy in publisher content between the adjectives used to describe men, such as “proud” and “presidential”, and appearance-driven adjectives to describe women, such as “beautiful” and “healthy”. A similar approach could be used to detect bias around ethnicity in a piece of text, too.
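One rough way to reproduce that analysis is a sentence-level adjective count keyed on gendered pronouns, as sketched below. The example leans on the open-source spaCy library rather than our internal stack, `publisher_articles` is a hypothetical list of article texts, and a production system would need coreference resolution to attribute adjectives reliably.

```python
from collections import Counter
import spacy  # assumes the small English model is installed: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

MALE_PRONOUNS = {"he", "him", "his", "himself"}
FEMALE_PRONOUNS = {"she", "her", "hers", "herself"}

def adjective_counts_by_gender(texts):
    """Count adjectives in sentences that mention male vs. female pronouns,
    to surface descriptive skews such as "presidential" vs. "beautiful".
    Sentence co-occurrence is only a rough proxy for who is being described."""
    counts = {"male": Counter(), "female": Counter()}
    for doc in nlp.pipe(texts):
        for sent in doc.sents:
            words = {tok.lower_ for tok in sent}
            adjectives = [tok.lemma_.lower() for tok in sent if tok.pos_ == "ADJ"]
            if words & MALE_PRONOUNS:
                counts["male"].update(adjectives)
            if words & FEMALE_PRONOUNS:
                counts["female"].update(adjectives)
    return counts

# Hypothetical usage against a batch of publisher articles:
# counts = adjective_counts_by_gender(publisher_articles)
# print(counts["male"].most_common(10))
# print(counts["female"].most_common(10))
```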

It’s this forensic level of scrutiny that is central to combating digital prejudice and flawed decision-making. Rather than blindly trusting machines to work for us, we need to start from the assumption that AI bias is inevitable and sometimes well hidden. So the challenge lies in constantly finding new ways to identify it internally (fixing bias within AI tools) and externally (using AI tools to fix bias), while recognizing the impact of human processes on both sides of this coin.

Open the doors to community and experience

If people are central to fixing AI bias, then we also need to address the people who are creating the technology, and who or what they are creating it for. For example, an LA-based Harvard graduate may anticipate and design an AI model very differently from a young mom in Istanbul or a middle-aged manager living in Hanoi.

So how can we make the AI field more diverse? This is a big challenge for GumGum, and we have to hold our hands up and say we don’t have all the answers. But equally, we’re determined not to shy away from the facts. AI urgently needs to throw its doors open to a wider range of people: not only to technologists, each of whom will spot biases according to their own background and set of beliefs, but also to society at large.

The problems we are solving with AI will be felt across many different communities all over the world. So a failure to involve these communities and accurately capture the voices within them constitutes a new kind of AI bias, one that compounds the biases already baked into our methodology.

At GumGum, we’ve created an employee council that makes recommendations on diversity and inclusivity – including how to bring AI awareness to the wider community, for example through partnerships with the gang-intervention enterprise Homeboy Industries. Inclusivity shouldn’t be an afterthought, so this work is a paid part of council members’ daily roles.

Throw away the filters

These are just small steps, and we have so far to go. But they’re significant in terms of acknowledging the push-pull energy between making the AI world more accessible and simultaneously connecting with the wider sphere of societal issues that AI can solve.

In a year of tumultuous events, I think it’s really clear that we, as technology providers, need to be acutely aware of the calls for change that are echoing across our digisphere. Instead of balking at the “sensitivity” of content around Black Lives Matter, sexism or social injustice, we need to create ways for media partners to lean into these dialogues and ask: what can we learn?

As a gay man at the helm of a global tech company, I’m very aware I don’t fit the mold of a traditional CEO. But that’s exactly why I am determined that GumGum and other major players in the AI movement confront bias head-on. This means being open, honest and transparent in the way we do things. We need to talk about our mistakes and encourage a culture of individuality and expression.

This is especially pertinent when it comes to the question of code-switching at work: a behavioral pattern in which people alter the way they behave, dress or speak in order to “fit in” within a corporate setting. If these inflections seep down to the creation level in AI companies (as they inevitably will), machines will learn to speak in a way that doesn’t necessarily reflect our individual truths; instead they will echo the edited versions of ourselves that we show to the world, honed by a particular cultural environment.

As a leader, I try to rip off my filters and show my team that it’s OK to have empathy and just be your true self in all your colors and quirks. We need to have the same vulnerability when it comes to tackling AI bias. Yes, it’s a big and somewhat daunting challenge. But if we’re open and honest about the issues we’re tackling, we stand a 100% better chance of getting to their crux.

