Can Technology Defeat Our Prejudices?

Whether we like it or not, humans are tribalists. Throughout our entire history, we have chosen to identify with groups of people who are similar to us. We feel akin to the guy sitting next to us whom we've never met before because we're both wearing the same jersey.

While many of us realize that the groups we identify with are harmless in nature (go local sports team!) and simply help us find our place among those around us, tribalism, as we have become aware in the age of Donald Trump, can be taken to the extreme. Racism and sexism, whether discreet or outright, have blossomed, and social media has amplified the reach of hateful groups like those who marched in Charlottesville.

Given the current political climate, the growing distrust between people of different racial and cultural backgrounds doesn't appear likely to slow down any time soon.

What we don't all realize is that artificial intelligence (AI) could play a role in addressing these "isms" and helping us breach our tribal boundaries. There is compelling evidence that we can use AI to illuminate that all humans, no matter what path they walk in life, are a lot closer to each other than we realize. However, AI is not a silver bullet for prejudice, and we must first solve a few critical issues with the technology itself.

To begin, we have to understand where AI falls short and why it's not close to replacing humans for many tasks. In the most basic terms, AI is built on a series of complex mathematical equations that are treated as "perfect" before they're deployed. This means that whenever an AI model makes a calculation, it's based on a proven equation for that calculation: two plus two will always equal four.

Based on this concept, we can combine multiple equations in an AI model that allow it to make logical guesses. For example, researchers built an AI-based robot with six legs that could not only repair itself but also quickly adapt when it lost limbs. Equations based on rapid trial and error allow the robot to figure out how to continue its tasks, even in a diminished capacity.
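As a toy sketch of that trial-and-error idea (the gait names and scores below are invented for illustration, not taken from the actual research, which uses far more sophisticated search), the robot can simply try each remaining strategy, measure how well it performs, and keep the best one:

```python
def adapt_gait(candidate_gaits, performance):
    """Brute-force trial and error: try every candidate gait,
    measure how well it works, and keep the best performer."""
    best_gait = max(candidate_gaits, key=performance)
    return best_gait, performance(best_gait)

# Invented scores for how well each gait works after a leg is lost;
# the tripod gait relies heavily on the missing leg, so it scores low.
scores_after_damage = {"tripod": 0.2, "wave": 0.8, "ripple": 0.9, "hop": 0.5}

best, score = adapt_gait(scores_after_damage, scores_after_damage.get)
```

The point of the sketch is only the loop's shape: propose, evaluate, keep the winner, repeat, which is how the damaged robot keeps working without anyone reprogramming it.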

Breakthroughs like this help humanity stay safe by having robots perform dangerous tasks for us in hazardous conditions. However, what no equation has truly been able to achieve for AI is the ability to interpret situations that involve complex emotion.

The famous question posed by MIT's "Moral Machine" project gets to the heart of this problem: how should an AI-driven driverless car, when faced with a choice where death is inevitable, react to protect life? Does it hit one person in lane A, or five people in lane B? The obvious answer, with no other evidence, is to sacrifice the one to save the many.

But what if the one person is a small child, and lane B is all elderly people or terminally ill individuals (or even serial killers)? Would you personally swerve into the lane of five if one of those people was Adolf Hitler before his rise to power? What if that one child is Hitler?

These are questions that are tough for humans to answer because we have emotional ties to the outcome.

Can a driverless car see these moral issues through a human lens?

The first AI technology that could help us address these issues is already being developed and improved on by corporations worldwide: facial recognition. Roughly half of the entire population of the United States has been subjected to facial recognition technologies, even though most don't realize it. That picture of you on vacation last summer, sitting in your Facebook account? It has been run through a facial recognition system that Facebook is constantly developing and improving.

Facial recognition, though, is not without its biases, as Microsoft recently learned. Research from MIT showed that many facial recognition systems are skewed toward white males, identifying them with 99% accuracy, while for women and minorities the failure rate was almost 35%. Additionally, Amazon's facial recognition system, Rekognition, incorrectly identified 28 members of Congress as criminals when the ACLU ran all 535 members of Congress against a mugshot database of 25,000 felons (insert hackneyed joke here).
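One way to surface this kind of disparity is to audit a system's results group by group. The sketch below assumes you already have labeled outcomes per person; the numbers are hypothetical, merely echoing the size of the gap the MIT study reported, not the study's actual data:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: (group, recognized_correctly) pairs.
    Returns each group's recognition accuracy, making gaps visible."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, ok in records:
        total[group] += 1
        correct[group] += ok  # True counts as 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit: 99/100 correct for one group, 65/100 for another.
records = ([("lighter-skinned men", True)] * 99
           + [("lighter-skinned men", False)]
           + [("darker-skinned women", True)] * 65
           + [("darker-skinned women", False)] * 35)
gaps = accuracy_by_group(records)
```

An overall accuracy number would hide the problem entirely; only when results are broken out per group does the bias become undeniable, which is exactly what the MIT researchers did.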

While these companies have been working hard to improve their recognition systems, what this shows us is that, like society, the technology has a ways to go. But in a larger sense, we can use this work to showcase how we can overcome racism in general: better detection technologies breed a better understanding of the uniqueness of each human being.

This isn't a stretch in thinking, either. Improving facial recognition means creating methods that are inclusive of all peoples and cultures in order to be accurate. That has the positive effect of involving more minorities and women in technology, helping to ensure that the technology can be applied equally to all.

In other words, the very process by which we improve this technology will make our technological workforce more inclusive.

One of the core issues with "isms" in our world is emotion and empathy, or the lack thereof. Few realize that many American males, and some females, from Generation X and older live with alexithymia, a condition essentially defined as the inability to understand or even identify one's own emotions.

From a young age, boys are taught to "walk it off" or "suck it up." This stunting of emotional growth leads to an inability to put themselves in another's shoes. Racism and sexism in society are typically characterized by a similar lack of empathy for others, namely because the racist or sexist simply cannot muster the empathy to understand.

This can manifest itself as anger and hatred toward others who are not aligned with them, whether it's by race, sex, or another factor. So how on earth is AI going to help with this? The answer may surprise you.

Usually, we see our computers and gadgets as inanimate objects, simply performing the tasks or duties we want them to. However, with the rise of AI, we are seeing the development of emotional learning, with many different efforts to give AI the ability to mimic human emotion or even learn to emote on its own.

In a recent study, researchers gave an AI-based toy robot named Nao to 89 participants to play and interact with. The researchers would tell each subject to do this or that with Nao, and at the end of the session the subjects were told to switch off Nao. Forty-three of them were then subjected to cries and pleas from Nao not to turn him off, because he was scared of the dark or afraid he wouldn't be turned on again.

Nao didn't want to die. Of the forty-three who heard this, thirteen refused to shut off the pleading robot, while the remaining thirty took twice as long to turn Nao off as the group that didn't hear the pleading. Nao has also been used in research to help children with autism read emotions.

Now apply this technology to a willing participant who holds racist or sexist views. Nao, or AI generally, wouldn't "cure" this person of their prejudices, but by diving into the core of the problem here, a lack of empathy and emotional understanding, Nao could help build an emotional lexicon for a biased person.

We've seen real-world examples of this as well. In the documentary "White Right: Meeting the Enemy," Muslim creator/director Deeyah Khan meets with many of the core white supremacists behind the 2017 Charlottesville protest, not only to interview them but also to get to know them.

The results of her documentary are rather astounding: hardcore racists rethink their positions on subjects like forcibly deporting Khan. The documentary provides a glimpse of the power of empathy in combating "isms." Since Ms. Khan can't be everywhere and speak to every racist, we can apply AI models to offer a widespread alternative learning methodology and begin the process of building a much-needed empathy-driven educational system.

So where do we go from here? Clearly AI isn't the be-all and end-all for defeating prejudice, but by leveraging technology, we may be able to begin the process of healing the world of needless but very serious problems. With any major problem, the answer is almost always time and education. The faster and further we can disseminate information to those who need it (though they may not recognize it), the more quickly we can begin wiping out our "isms" once and for all.