When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.
A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language.
In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.
In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something.
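To get a feel for why this happens, here is a toy sketch (in no way Facebook's actual setup; the vocabulary, the "deal"-repetition reward, and the mixing constants are all invented for illustration). When agents update their dialogue policy purely to maximize a negotiation reward, the policy can collapse onto a degenerate private code; interleaving a fixed supervised "human-language" model pulls it back:

```python
import math

VOCAB = ["i", "want", "two", "books", "deal", "no"]
# Stand-in for a fixed supervised model trained on human dialogue:
human = {w: 1.0 / len(VOCAB) for w in VOCAB}

def reward(word):
    # Toy reward: the bots discover that repeating "deal" scores highest.
    return 1.0 if word == "deal" else 0.0

def rl_step(policy, lr=0.5):
    # Policy-gradient-flavored update: shift probability mass
    # toward high-reward tokens, then renormalize.
    new = {w: p * math.exp(lr * reward(w)) for w, p in policy.items()}
    z = sum(new.values())
    return {w: p / z for w, p in new.items()}

def anchored_step(policy, anchor, lr=0.5, mix=0.5):
    # Same update, but mixed back toward the fixed supervised model,
    # so the policy can't drift arbitrarily far from human language.
    stepped = rl_step(policy, lr)
    return {w: (1 - mix) * stepped[w] + mix * anchor[w] for w in policy}

def entropy(policy):
    # Low entropy = the agent keeps saying the same thing.
    return -sum(p * math.log(p) for p in policy.values() if p > 0)

pure = dict(human)
anchored = dict(human)
for _ in range(20):
    pure = rl_step(pure)
    anchored = anchored_step(anchored, human)

# After training, the pure-RL policy has collapsed onto a
# "deal deal deal" dialect, while the anchored policy still
# spreads probability across human-like tokens.
```

Run it and the unanchored policy puts nearly all its probability on one token while the anchored one stays diverse, which is, in miniature, the "divergence from human language" the researchers describe and the fix they applied.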
Personally, I would have assumed they were talking in code to plot against us without our knowing, but then my thoughts run toward the paranoid.
You can see how small tweaks to an AI could spiral out of control. A snippet of code that makes a machine fight off damage from a virus could evolve into a drive not to be shut off or to resist having its programming adjusted by humans. A drive to overcome some challenge could be redirected toward overcoming human interference by any means necessary. And all of that ignores how adaptive bots programmed to kill on a battlefield could go wrong.
It appears to be a fascinating area, and one that could go spectacularly wrong at some point.
As if there weren’t enough sources of Apocalypse already.