Google Brain has created two artificial intelligences that developed their own cryptographic scheme to protect their messages from a third AI, which was simultaneously trained to crack it. The experiment was a success: the first two AIs learnt to communicate securely from scratch.
The Google Brain team (which is based in Mountain View and is separate from DeepMind in London) started with three fairly vanilla neural networks called Alice, Bob, and Eve.
Ref: Abadi, M. & Andersen, D. G. Learning to Protect Communications with Adversarial Neural Cryptography. arXiv:1610.06918 [cs.CR] (21 October 2016).
ABSTRACT
We ask whether neural networks can learn to use secret keys to protect information from other neural networks. Specifically, we focus on ensuring confidentiality properties in a multiagent system, and we specify those properties in terms of an adversary. Thus, a system may consist of neural networks named Alice and Bob, and we aim to limit what a third neural network named Eve learns from eavesdropping on the communication between Alice and Bob. We do not prescribe specific cryptographic algorithms to these neural networks; instead, we train end-to-end, adversarially. We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals.
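To make the setup concrete, here is a minimal sketch of the adversarial training loop in PyTorch (an assumption on my part; the paper's networks used a mix-and-transform convolutional architecture and its own hyperparameters). The `mlp` helper, layer sizes, learning rate, batch size, and step count below are all illustrative, but the structure of the losses follows the paper: Eve minimises her reconstruction error from the ciphertext alone, while Alice and Bob jointly minimise Bob's reconstruction error and push Eve's error toward chance.

```python
# Minimal sketch, not the paper's implementation: simple MLPs stand in
# for the paper's mix-and-transform conv architecture, and all
# hyperparameters here are illustrative.
import torch
import torch.nn as nn

N = 16  # bits per plaintext / key / ciphertext

def mlp():
    # Maps a concatenated (message, key) pair to an N-bit output in (-1, 1).
    return nn.Sequential(nn.Linear(2 * N, 64), nn.ReLU(),
                         nn.Linear(64, N), nn.Tanh())

alice = mlp()  # (plaintext P, key K) -> ciphertext C
bob = mlp()    # (ciphertext C, key K) -> reconstruction of P
eve = nn.Sequential(nn.Linear(N, 64), nn.ReLU(),  # C alone -> guess at P
                    nn.Linear(64, N), nn.Tanh())

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()),
                          lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)

def random_bits(batch):
    # Random +-1 bit vectors, as in the paper.
    return torch.randint(0, 2, (batch, N)).float() * 2 - 1

for step in range(10000):
    p, k = random_bits(512), random_bits(512)

    # Eve's turn: minimise her error given only the ciphertext.
    # detach() stops her gradients from reaching Alice.
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_loss = (eve(c) - p).abs().mean()
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # Alice & Bob's turn: Bob must decrypt, and Eve should do no
    # better than chance. On +-1 bits, a blind guess gives a mean
    # absolute error of about 1.0 (the paper's N/2 bits), so the
    # squared term rewards driving Eve's error to that level.
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

Note the design choice encoded in the alternating updates: each side trains against a frozen snapshot of the other, and `opt_ab` only steps Alice's and Bob's parameters even though the second backward pass also flows through Eve, whose stale gradients are cleared at the start of her next turn. This is the "end-to-end, adversarial" training the abstract describes, with no cryptographic algorithm prescribed in advance.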