AI has been advancing steadily over the years, with Google researchers having already built systems that generate trippy, dream-like art.
But trippy art seems to have grown stale for them, and so they have done something potentially more alarming: they have created AI that can encrypt its messages using a scheme no human taught it.
According to a research paper, Google's Martin Abadi and David G. Andersen ran an experiment with three artificial neural networks, named Alice, Bob, and Eve, in which two of them had to exchange messages that only they could understand, using an encryption method of their own making.
The setup was simple: Alice had to send a message to Bob, encrypted with a method only the two of them understood, while Eve's job was to eavesdrop on the exchange and work out on her own what was being said.
The messages were plain text, which Alice, using an encryption method of her own devising, turned into nonsensical mumbo jumbo that Bob could only read with a shared cipher key.
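For readers who want a feel for how such a setup is trained, here is a rough sketch in Python using PyTorch of the adversarial game the paper describes. The network sizes, the loss terms, and the training schedule here are assumptions for illustration only, not the researchers' actual architecture: Alice sees the plaintext and a shared key, Bob sees the ciphertext and the same key, and Eve sees only the ciphertext.

```python
# Minimal sketch (assumed architecture, not the paper's exact one) of the
# Alice/Bob/Eve adversarial setup: Alice maps (plaintext, key) -> ciphertext,
# Bob maps (ciphertext, key) -> plaintext, Eve maps ciphertext -> plaintext.
import torch
import torch.nn as nn

N_BITS = 16  # message and key length, matching the reported experiment

def mlp(in_dim, out_dim):
    # Small fully connected network for illustration; the paper used a
    # mix of fully connected and convolutional layers.
    return nn.Sequential(
        nn.Linear(in_dim, 64), nn.ReLU(),
        nn.Linear(64, out_dim), nn.Tanh(),  # one output in (-1, 1) per bit
    )

alice = mlp(2 * N_BITS, N_BITS)   # sees plaintext + shared key
bob   = mlp(2 * N_BITS, N_BITS)   # sees ciphertext + shared key
eve   = mlp(N_BITS, N_BITS)       # sees only the ciphertext

opt_ab  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

def random_bits(batch):
    # Bits encoded as -1 / +1 to match the tanh outputs above.
    return torch.randint(0, 2, (batch, N_BITS)).float() * 2 - 1

for step in range(15000):          # the article reports roughly 15,000 attempts
    plaintext, key = random_bits(256), random_bits(256)
    ciphertext = alice(torch.cat([plaintext, key], dim=1))

    # Eve's turn: minimise her reconstruction error on the intercepted ciphertext.
    eve_guess = eve(ciphertext.detach())
    eve_loss = (eve_guess - plaintext).abs().mean()
    opt_eve.zero_grad(); eve_loss.backward(); opt_eve.step()

    # Alice and Bob's turn: Bob should recover the plaintext, while Eve should
    # do no better than chance (per-bit error of 1.0 on the -1/+1 scale).
    bob_guess = bob(torch.cat([ciphertext, key], dim=1))
    eve_err   = (eve(ciphertext) - plaintext).abs().mean()
    bob_loss  = (bob_guess - plaintext).abs().mean()
    ab_loss   = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

The key idea is that Alice and Bob are rewarded when Bob recovers the message and Eve does no better than chance, while Eve is trained in parallel to break whatever scheme Alice invents.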
At first, the two AIs failed to hide their communication from Eve, but after about 15,000 attempts they managed to send a message she could not decode.
The message was not complicated, just 16 bits long. Eve did get about half the bits right, but since each bit is either a 1 or a 0, getting half of them right is no better than random guessing.
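That "half right equals random guessing" point is easy to check with a quick, purely illustrative simulation:

```python
# Back-of-the-envelope check: a blind guesser gets about half of a
# 16-bit message right, which is exactly the score Eve managed.
import random

TRIALS, N_BITS = 10000, 16
correct = 0
for _ in range(TRIALS):
    message = [random.randint(0, 1) for _ in range(N_BITS)]
    guess   = [random.randint(0, 1) for _ in range(N_BITS)]
    correct += sum(m == g for m, g in zip(message, guess))

print(correct / (TRIALS * N_BITS))  # prints roughly 0.5, i.e. about 8 of 16 bits
```

In other words, Eve's performance was indistinguishable from that of someone flipping a coin for every bit.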
Giving the neural networks names and personifying them makes the whole thing sound simple, but in reality it is more complicated than that.
Because of how neural networks learn, no one, including the researchers themselves, can say exactly what kind of encryption Alice came up with.
That makes the system hard to apply to anything practical for now, and, most importantly, it means machines won't be communicating behind our backs just yet.