After last year’s attempt with Tay, the AI-enabled Twitter chatbot, Microsoft has launched Zo this year. However, it appears that the two are cut from the same cloth.
Last year, Microsoft released a chatbot named Tay to experiment with how artificial intelligence could simulate human interactions on Twitter. That experiment ended in disaster: Tay was taken offline only 24 hours after launch, when it began spouting obscene and racist comments. Tay was eventually replaced with a second AI named Zo, but the two appear to share the same flaws.
A BuzzFeed reporter toying around with Zo found some rather troubling responses during a very brief interaction. In only its fourth response, Zo declared the Qur’an to be “very violent,” and later offered opinions about the capture of Osama bin Laden.
When Tay started responding inappropriately on Twitter, Microsoft claimed it was a coordinated attack by a subset of people who exploited Tay’s system. With Zo, Microsoft’s statement was more measured. BuzzFeed reports that Microsoft took immediate action to eliminate this behavior and said such responses are rare for Zo.
The technology behind the chatbot develops its responses from public and even private conversations. Designed to seem more ‘human-like’, these responses do sometimes tend to become opinionated, which is to say the artificial intelligence is doing its job of imitating natural conversation, even if that means Zo may become ‘politically incorrect’ in the process.