How Microsoft's AI Twitter Robot Became Racist In Less Than A Day

Microsoft's failed Twitter bot came back to life early Wednesday morning for a bit.

Microsoft unveiled its AI chatbot Tay on Twitter on Wednesday, and the Internet managed to corrupt it in under 24 hours. According to the company, the bot is an experiment in “conversational understanding,” designed to learn and evolve as users interact with it.

Upon its launch, the Internet did what it does best: it destroyed Tay’s innocence. Twitter users bombarded the bot with racist and misogynistic comments, and Tay quickly began repeating those sentiments back.

Microsoft has since issued a statement apologizing for the bot’s behavior.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, a corporate vice president at Microsoft. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Microsoft is currently running a similar chatbot in China called XiaoIce, which has not met a similar fate despite being used by some 40 million people.

“In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations,” said Lee. “The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.”
