'The discourse is unhinged': how the media gets AI alarmingly wrong | The Guardian

[...] A month after this initial research was released, Fast Company published an article entitled "AI Is Inventing Language Humans Can't Understand. Should We Stop It?" The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers "realized their bots were chattering in a new language" they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

Fast Company’s story went viral and spread across the internet, prompting a slew of content-hungry publications to further promote this new Frankenstein-esque narrative: “Facebook engineers panic, pull plug on AI after bots develop their own language,” one website reported. Not to be outdone, the Sun proposed that the incident “closely resembled the plot of The Terminator in which a robot becomes self-aware and starts waging a war on humans”.

Zachary Lipton, an assistant professor in the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from "interesting-ish research" into "sensationalized crap".

https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong