Microsoft pulls AI program after robot’s racist rant

SEATTLE, Wash. — Microsoft’s public experiment with artificial intelligence apparently needs a major tune-up.

The tech company designed a chatbot to talk on social media like a teenager, and activated it on Twitter on Wednesday.

Microsoft said the bot, named “Tay,” was supposed to get smarter as more users chatted with her. But in less than a day, Tay started spewing racist and hateful comments, expressed support for white supremacy and genocide, and claimed the Holocaust was made up.

Tay was shut down around midnight, and Microsoft deleted most of the offensive tweets. Microsoft said, “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

In her last tweet, Tay said she needed sleep and hinted that she would be back.

© 2024 KOBI-TV NBC5. All rights reserved unless otherwise stated.
