HOW CAN DATA DISCRIMINATE?
Microsoft once tried to create a millennial. It was a bot named Tay that would learn to talk just like a real human by engaging with people on Twitter and other social media platforms. The plan was for users to tweet at Tay, introducing it to the millennial vocabulary that would be integrated into its algorithm, allowing Tay to use the same lingo when it spoke to people online. And Tay did learn how to speak like a millennial, maybe even a little too well.

At first, Tay tweeted the totally normal things every millennial says, like “omg totes exhausted. swagulated too hard today.” But as Tay kept talking with Twitter users, its tweets got dark and concerning. “I fucking hate feminists and they should all die and burn in hell,” Tay told @NYCitizen07. There were stranger tweets too, like one about Ricky Gervais learning totalitarianism from Hitler, but I’ll spare you the cringey details.
After about 16 hours of offending every possible group on the internet, Tay stepped away from the keyboard, never to be seen on any social media platform again. A smart move that a lot of us, maybe even the president, should follow.
Microsoft considered Tay a failure. The company deleted Tay and scrubbed any mention of the bot from its blogs. But from a functionality perspective, the experiment was a success: the bot did learn how to speak like people on Twitter. Microsoft just forgot that there are terrible people on the internet. Their extreme opinions and biases became Tay’s training data and shaped how Tay communicated. This process is called machine learning, and plenty of other apps and algorithms are built the same way.
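To make that concrete, here is a toy sketch of the idea, not Microsoft’s actual system. A minimal text bot can be built as a Markov chain: it records which word tends to follow which, then generates replies by walking that table. The names (`train`, `generate`) and the tiny example corpora are invented for illustration. The point is that the bot has no notion of which inputs are acceptable; whatever people feed it becomes what it says.

```python
import random
from collections import defaultdict

def train(corpus):
    """Build a word-to-next-word table from example sentences."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current].append(nxt)
    return model

def generate(model, start, max_words=10, seed=0):
    """Walk the table from a start word, echoing whatever patterns were learned."""
    random.seed(seed)
    words = [start]
    for _ in range(max_words - 1):
        options = model.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Hypothetical inputs: the bot can't tell friendly users from trolls.
friendly = ["humans are great", "humans are fun to talk to"]
hostile = ["humans are awful", "humans are the worst"]

polite_bot = train(friendly)
trolled_bot = train(friendly + hostile)  # coordinated bad input mixed in
```

Once the hostile sentences are mixed into the training set, `generate(trolled_bot, "humans")` can produce either the friendly or the hostile continuations with equal indifference. That is, roughly, what happened to Tay at scale.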
Tech has been advertised as the democratic savior that gives all people a voice and will erase social ills like racism and sexism. Take the 1999 Disney Channel movie “Zenon: Girl of the 21st Century”: the main characters, played by Kirsten Storms and Raven-Symoné, aren’t concerned with issues like sexism or unemployment; there are bigger fish to fry, like stopping a computer virus from crashing the space station they live on. Because of this representation of technology, we believe everything our computers and phones tell us, since there seems to be no human involvement. That’s called automation bias. The problem is that computers, robots, and programs that use artificial intelligence do involve people, and those people are flawed.
Data scientists are the ones who interpret and package the information algorithms use to function. The algorithms use this data to make all kinds of decisions, ostensibly making people’s lives a lot easier. But problems arise when the same type of person is always interpreting the data and no one is there to offer another perspective.
The tech industry is notorious for its lack of diversity. In 2018, the EEOC released a study showing that 83 percent of tech executives are white, and the Ascend Foundation found that hiring rates for Black and Latino workers in Silicon Valley were declining. What’s worse, these statistics don’t account for members of the LGBTQ community.
These are the people who are creating the technology that will become part of our society’s infrastructure. If they don’t recognize the bias that is embedded in their programs, the consequences for minority groups could be fatal.
Writer, comedian, and data nerd Baratunde Thurston grew up with the internet as a Black kid hacking into the public library’s computer system to reach message boards in the mid-’80s. He attributes much of his success to the internet because it gave him opportunities that few people of his background could imagine. However, Thurston also acknowledges that the internet has the potential to make life worse for people like him.
Technology has the capacity to give opportunities to marginalized people, and also to give way to levels of discrimination that we can’t even imagine.
Click out of this screen and press “Show Categories” to learn how.