The company told TechCrunch that once it discovered a "coordinated effort" to make the AI project say inappropriate things, it took the program offline to make adjustments. Female game developer Zoe Quinn, who has been a target of online abuse since the GamerGate controversy over sexism in the industry, uploaded a screenshot of a tweet from the Microsoft bot in which it called her a "whore".

Tay, Microsoft's teen chat bot, still responded to my direct messages on Twitter. Microsoft has discovered this the hard way, and it will be interesting to see whether it continues with Tay or drops such ambitions entirely.

Microsoft's public experiment with artificial intelligence apparently needs a major tune-up.

According to her Twitter profile, "Tay" is now "sleeping", presumably until her creators can install a "common sense" filter, something actual flesh-and-blood teenage girls could probably use as well. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you", Microsoft explained in a recent online post.

As The Verge noted, however, while some of these responses were unprompted, many came as the result of Tay's "repeat after me" feature, which allows users to have full control over what comes out of Tay's mouth.


Microsoft has silenced its Twitter bot after it denied that the Holocaust happened, said it supported genocide and told followers that "Jews did 9/11". Last year, Google apologized for a flaw in Google Photos that let the application label photos of black people as "gorillas". But critics say Microsoft programmers should have seen this coming.

The bot was targeted at 18- to 24-year-olds in the USA and meant to entertain and engage people through casual and playful conversation, according to Microsoft's website. At one point she wrote, "out of curiosity...is 'Gluten Free' a human religion?" It even seemed like she was deliberately trying to elicit conflict.

As Woolley has argued in an essay, one of the best checks against a bot spiraling out of control is transparency about how it was designed and what it was designed for, since knowing how the bot works helps us understand how it is being manipulated. But in very short order, her artificial intelligence met real stupidity, the underbelly of the Internet, and learned very quickly how to be a horribly racist and anti-Semitic sexbot.

Naomi LaChance is a business news intern at NPR.
