Machine Learning
Apr 3, 2020

The Unintelligence of Artificial Intelligence

by
Jenalee Janes

Even the best intentions can end in disaster. Over the past few years, Facebook has deployed artificial intelligence built to be a valiant warrior against offensive speech. Last year, however, during a test run of a new Facebook-designed video chat device, the company discovered that one of its algorithms was unintentionally discriminating against people with dark skin. So what happened?

A Story: The Fault in Facebook’s Artificial Intelligence

Lade Obamehinti, the program manager for Facebook's new Portal video chat device, who also happens to be black, was testing a prototype of the device when she realized that something was very, very wrong. As designed, the device was supposed to use its camera and microphones to detect and zoom in on the face of whoever was speaking.

When she took her own turn and began a monologue about the breakfast she'd eaten that morning, however, the program ignored her completely, continuing to focus on one of the colleagues she was testing the prototype with, a white man. Nothing had been maliciously programmed into the device. But the algorithms that artificial intelligence builds and rebuilds from the machine learning models and data a company employs are far from perfect.

Because the system picks out patterns from whatever data a company collects, and because a company is rarely able to keep track of all the data being fed in, the algorithm the artificial intelligence produces is unpredictable. And what we can't predict, we need to scrutinize all the more, especially when people are involved.

Even if Facebook had no say in the discriminatory algorithm its artificial intelligence produced, the result still reflects poorly on the company.

What Facebook Did to Solve the Problem

Obamehinti relayed her experience to other developers later that same week at Facebook's annual developer conference, and the usually lively meeting took a sober turn. Facebook needed a solution to its algorithm problem if it wanted to keep its products from alienating people for good. The issue was pondered over the course of the conference, but it was Obamehinti, having been affected by the algorithm personally, who was most driven to find a solution.

No one wants to be ignored by the technology they're trying to use, but especially for someone like Obamehinti, whose skin color has been a source of discrimination for centuries, an error of this caliber could have devastating effects, both for the company whose algorithm got it wrong and for the people who experience this kind of discrimination.

With this thought reeling in her mind, she set out to develop a new process, one that called for much more inclusive artificial intelligence. Her process has since been adopted by a number of other product development groups at Facebook so that other Facebook algorithms don't discriminate in the same way.

Obamehinti’s Process

Her process was simple enough: measure the data used to build the Portal's vision system for any gender or racial biases, and measure the system's task performance for each group as well. When she did this, she discovered that both women and people of color were clearly underrepresented in the training data, and that the product itself was less accurate when it came to seeing those groups.
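To make that kind of audit concrete, here is a minimal sketch in Python of measuring representation and per-group accuracy, assuming an evaluation set where each example carries a demographic group label; the field names and toy data are illustrative assumptions, not Facebook's actual tooling or data.

```python
from collections import Counter, defaultdict

def audit_by_group(examples, predictions):
    """Report each group's share of the data and the model's accuracy on it.

    `examples` is a list of dicts with hypothetical 'group' and 'label'
    fields; `predictions` is a parallel list of predicted labels.
    """
    representation = Counter(ex["group"] for ex in examples)
    correct = defaultdict(int)
    for ex, pred in zip(examples, predictions):
        if pred == ex["label"]:
            correct[ex["group"]] += 1

    return {
        group: {
            "share_of_data": count / len(examples),
            "accuracy": correct[group] / count,
        }
        for group, count in representation.items()
    }

# Toy example: a gap like this is the kind of signal the audit is meant to surface.
examples = [
    {"group": "lighter skin", "label": "face"},
    {"group": "lighter skin", "label": "face"},
    {"group": "lighter skin", "label": "face"},
    {"group": "darker skin", "label": "face"},
]
predictions = ["face", "face", "face", "no_face"]
print(audit_by_group(examples, predictions))
```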

Once she discovered this, Obamehinti was able to adjust the machine learning model to reduce the apparent skin tone bias in the algorithm, although not eliminate it. She patched up most of the Portal's blind spots before the product officially shipped, but admittedly, it's still not perfect. It remains considerably less accurate at detecting women and people with the darkest skin tones than at detecting men and people with lighter skin tones.

We know that artificial intelligence is flawed. It has a tendency to create algorithms we never gave it permission to create, and these have the potential to attach an unintentional brand of discrimination to otherwise reputable companies. Obamehinti has shown that there is a way to repair artificial intelligence and prevent this, but knowing that even those repairs aren't enough to completely eliminate discriminatory algorithms leaves a burning question: why do companies continue to use it?

Fixing artificial intelligence that is biased against certain groups of people is a bit like putting a plaster over a three-inch incision. It covers the problem at the surface level, but the wound runs much deeper than that.

Why Companies Like Facebook Use Artificial Intelligence

Facebook’s main reason for implementing artificial intelligence is to help stop the spread of disinformation across its platforms. After the disastrous 2016 presidential election, Facebook discovered through an inventory of its data that several groups had been created with the intention of influencing people’s votes. The groups ranged from diehard supporters of Donald Trump to diehard supporters of Bernie Sanders, but they all had one thing in common: their goal was to paint a negative picture of Hillary Clinton for anyone who came across their pages.

Evidently, they were successful. The issue with these accounts, however, the issue Facebook hopes to address by implementing artificial intelligence, is that nearly all of them were fake, and they did nothing but spread disinformation. Artificial intelligence was introduced to try to stop that disinformation before it spreads further as misinformation. The goal is that if the artificial intelligence can detect disinformation and its sources early, that content can be removed from the recommendation algorithm that shows people the posts they’re most likely to be interested in.

How Facebook’s Artificial Intelligence Works

One of Facebook’s most recent artificial intelligence implementations was designed specifically to stop the spread of misinformation. It’s a content-filtering system that detects posts that may be spreading political misinformation by looking at the regions where the posts were created and at keywords that tend to appear in disinformation.

When it finds something that could be problematic, it flags the post for human review. The system is also built to operate across the variety of languages that might appear in a given country, to make detecting disinformation as accurate as possible.
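As a rough illustration of how a first-pass filter like that might work, here is a short Python sketch that flags posts for human review based on region and keyword matches; the watched regions, keyword list, and post format are invented for the example and don't describe Facebook's actual system.

```python
# Toy content filter: route a post to human review if it comes from a
# region under closer watch or contains keywords associated with known
# disinformation. All lists and field names here are invented.
WATCHED_REGIONS = {"region-a", "region-b"}
SUSPECT_KEYWORDS = {"rigged", "hoax", "crisis actor"}

def flag_for_review(post):
    """Return the reasons (possibly none) to send a post to a human reviewer."""
    reasons = []
    if post["region"] in WATCHED_REGIONS:
        reasons.append("posted from a watched region")
    text = post["text"].lower()
    matched = sorted(kw for kw in SUSPECT_KEYWORDS if kw in text)
    if matched:
        reasons.append("contains keywords: " + ", ".join(matched))
    return reasons

post = {"region": "region-a", "text": "The vote was RIGGED, share this!", "language": "en"}
reasons = flag_for_review(post)
if reasons:
    print("Send to human review:", reasons)
```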

Joaquin Candela, a director of artificial intelligence at Facebook, has said that its software engineers take great care when comparing the system’s accuracy across all the languages it may encounter. The more accurately the artificial intelligence can handle each language, the more likely it is that Facebook can enforce its guidelines equitably.

Other Efforts to Stop Spreading Disinformation

Equitability is important to Facebook. The company has encountered similar concerns while testing a project that crowdsources the flagging of fake news. In these tests, it deliberately introduced evidence that either supports or refutes information users have already engaged with, then tracked their engagement with the additional posts. How often users engage with true or false content tells Facebook how to adjust its system so that the content the algorithm surfaces is more factual.

Teams working on bias in artificial intelligence have also been trying to work out how to make sure the pool of volunteers who review posts is diverse enough that no single region or community can impart too much bias.
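One simple way to enforce that kind of balance, sketched below in Python, is to cap the share of the review pool that any single region can contribute; the 20 percent cap and the made-up volunteer data are assumptions for illustration, not Facebook's actual selection process.

```python
import random
from collections import defaultdict

def build_review_pool(volunteers, pool_size, max_share_per_region=0.2):
    """Draw a review pool so no region exceeds a fixed share of reviewers.

    `volunteers` is a list of (reviewer_id, region) pairs; the cap of 20%
    per region is an illustrative choice.
    """
    per_region_cap = max(1, int(pool_size * max_share_per_region))

    by_region = defaultdict(list)
    for reviewer_id, region in volunteers:
        by_region[region].append(reviewer_id)

    pool = []
    for region, ids in by_region.items():
        random.shuffle(ids)  # pick randomly within each region
        pool.extend((rid, region) for rid in ids[:per_region_cap])

    random.shuffle(pool)
    return pool[:pool_size]

regions = ["NA", "EU", "SA", "AF", "AS", "OC"]
volunteers = [(f"v{i}", random.choice(regions)) for i in range(300)]
print(build_review_pool(volunteers, pool_size=20))
```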

Mike Schroepfer, the company’s Chief Technology Officer, has said that Facebook research has achieved high accuracy in processing images and text even when working with smaller amounts of training data, although he has yet to provide statistics to back up this claim.

Part of the reason Facebook is so adamant about continuing to use artificial intelligence in spite of its flaws is that it actively fights the spread of misinformation.

The Risk of Artificial Intelligence

Obamehinti points out that any time AI encounters people, there’s a risk of marginalization. Machines are driven by data, while people are driven by their feelings, and artificial intelligence isn’t able to detect those feelings.

Facebook isn’t the only tech giant whose algorithms have produced unintentional discrimination against people of color. In 2015, Google experienced an artificial intelligence problem in which its photo-organizing service lumped pictures of black people under the “gorillas” tag.

Google was able to rectify this, but only by blinding the product to gorillas, monkeys, and chimpanzees entirely. That did the trick, but the underlying algorithmic bias problem still hasn’t been solved.

The Rundown

Artificial intelligence isn’t always nice to people. This isn’t something only people who don’t “understand” it encounter; even some of the most successful companies in the world aren’t really able to control the outcomes of their artificial intelligence. We’ve seen this through what Lade Obamehinti discovered during the trial run of Facebook’s Portal video chat device.

The hope, however, is that the challenge of making sure artificial intelligence runs equitably will shrink as time progresses and the technology becomes more advanced. For now, Obamehinti’s process of measuring a system’s data for potential gender and racial discrimination seems to be an effective way to minimize the damage of an unintelligent artificial intelligence, as much as anything can.