Data Backup Digest


Google Robots Teach Themselves Advanced Encryption

Google has been making huge waves across the IT industry as of late. Since its acquisition of DeepMind, the artificial intelligence research company, its research and development teams have been hard at work on the next big advancement. Given some recent breakthroughs, however, it might not be long before Google's own AI creations are doing the research themselves.

As it turns out, Google's neural networks are already capable of devising their own strategies, including encryption. In fact, the team at Google has witnessed this firsthand.

It all started with three separate neural networks, nicknamed Alice, Bob and Eve. In a recent experiment, the first two exchanged secret information while the third, Eve, was tasked with eavesdropping on their conversation and reporting the results back to human interpreters.

However, Eve was not successful. In a remarkable twist, Alice learned on her own to convert a plaintext string into ciphertext. Building on a shared secret key, Alice and Bob developed their own encryption method, thereby allowing only the intended recipients access to the conversation.
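The arrangement described above can be illustrated with a toy sketch. To be clear, this is not the scheme the networks actually learned (which was opaque even to the researchers); it simply shows, using XOR as a hypothetical stand-in for the learned transformation, what a key-dependent, reversible mapping achieves: Bob, who holds the key, recovers the message exactly, while Eve, who does not, can only guess.

```python
import random

def alice_encrypt(plaintext, key):
    # Toy stand-in: XOR each plaintext bit with the shared key bit.
    # The real networks learned their transformation end-to-end; this
    # only illustrates a key-dependent, invertible mapping.
    return [p ^ k for p, k in zip(plaintext, key)]

def bob_decrypt(ciphertext, key):
    # Bob holds the same secret key, so he can invert Alice's mapping.
    return [c ^ k for c, k in zip(ciphertext, key)]

def eve_guess(ciphertext):
    # Eve sees only the ciphertext; with no key, her best move here is
    # to guess, recovering each bit with roughly 50% accuracy.
    return [random.randint(0, 1) for _ in ciphertext]

random.seed(0)
plaintext = [random.randint(0, 1) for _ in range(16)]
key = [random.randint(0, 1) for _ in range(16)]

ciphertext = alice_encrypt(plaintext, key)
assert bob_decrypt(ciphertext, key) == plaintext  # Bob recovers P exactly
```

The key point is asymmetry: the same ciphertext is fully informative to Bob and nearly useless to Eve.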

The team at Google is adamant that they did not program any cryptographic functionality into Alice, Bob or Eve. Instead, this unexpected encryption was entirely of Alice and Bob's own devising.

Despite the significance of the experiment, experts are quick to point out that Alice and Bob's method of encryption was rather rudimentary. The team at Google was able to decipher it, but the feat is still nothing short of remarkable.

Martín Abadi and David G. Andersen, the primary researchers behind the experiment, explain the test's purpose in their paper: "Informally, the objectives of the participants are as follows. Eve's goal is simple: to reconstruct P accurately (in other words, to minimize the error between P and P_Eve). Alice and Bob want to communicate clearly (to minimize the error between P and P_Bob), but also to hide their communication from Eve. Note that, in line with modern cryptographic definitions (e.g., (Goldwasser & Micali, 1984)), we do not require that the ciphertext C "look random" to Eve. A ciphertext may even contain obvious metadata that identifies it as such. Therefore, it is not a goal for Eve to distinguish C from a random value drawn from some distribution. In this respect, Eve's objectives contrast with common ones for the adversaries of GANs. On the other hand, one could try to reformulate Eve's goal in terms of distinguishing the ciphertexts constructed from two different plaintexts."
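The competing objectives in that quote can be sketched as loss functions. The sketch below is illustrative, not the paper's exact formulation (the networks there operate on real-valued signals, and the penalty weighting differs): Eve minimizes her reconstruction error, while Alice and Bob minimize Bob's error and additionally push Eve's error toward chance level, i.e., 50% per bit.

```python
def recon_error(p, p_hat):
    # Mean per-bit absolute error between plaintext P and an estimate.
    return sum(abs(a - b) for a, b in zip(p, p_hat)) / len(p)

def eve_loss(p, p_eve):
    # Eve's goal: minimize the error between P and P_Eve.
    return recon_error(p, p_eve)

def alice_bob_loss(p, p_bob, p_eve):
    # Alice and Bob's goal: minimize Bob's reconstruction error while
    # keeping Eve's error near chance (0.5 per bit). The quadratic
    # penalty is one plausible form, loosely following the paper's idea
    # of punishing Eve doing better OR worse than random guessing
    # (either extreme leaks information); the weighting is illustrative.
    chance = 0.5
    return recon_error(p, p_bob) + (chance - eve_loss(p, p_eve)) ** 2

p = [1, 0, 1, 1]
perfect_bob = [1, 0, 1, 1]   # Bob decodes every bit correctly
inverted_eve = [0, 1, 0, 0]  # Eve gets every bit wrong: error 1.0
assert eve_loss(p, p) == 0.0
assert alice_bob_loss(p, perfect_bob, inverted_eve) == 0.25
```

Note that an Eve who is wrong on every bit is also penalized for Alice and Bob, since a perfectly inverted guess would be trivially correctable; chance-level confusion is the true target.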

Any way you look at it, there's a lot to take in with this news. On the one hand, the creation of an original encryption methodology, outside the control of any programmer, is something few would have expected. On the other hand, the fact that today's AI is capable of such a feat has some experts counting the days before robots take over the world. While it may not be that dramatic, there is something to be said for an AI system capable of such advanced thinking and development.

