Differentiable neural computer from DeepMind and more advances in backpropagation

10/18/2016 - 04:10

Image: DeepMind

Two very interesting advances in neural networks and neuromorphic systems that we felt should be shared together:

In a recent study in Nature, we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data, including artificially generated stories, family trees, and even a map of the London Underground. We also show that it can solve a block puzzle game using reinforcement learning.


Ref: Hybrid computing using a neural network with dynamic external memory. Nature (12 October 2016) | DOI: 10.1038/nature20101


Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory.
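The key mechanism that makes the DNC's external memory trainable end-to-end is differentiable addressing: instead of indexing a single memory slot, the controller emits a key vector, and a softmax over cosine similarities produces a smooth weighting across all slots. The sketch below illustrates only this content-based read path in NumPy; the variable names (`memory`, `key`, `beta`) and the toy dimensions are our own choices for illustration, not the paper's implementation.

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Cosine-similarity lookup over memory rows, sharpened by the
    strength parameter beta and normalized with a softmax so the
    weighting stays differentiable."""
    # memory: (N, W) matrix of N slots, each a W-dimensional word
    # key:    (W,) query vector emitted by the controller
    eps = 1e-8
    sim = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    w = np.exp(beta * sim)
    return w / w.sum()

def read(memory, key, beta):
    """Read a blended word: the weighted sum of all memory rows."""
    w = content_addressing(memory, key, beta)
    return w @ memory

rng = np.random.default_rng(0)
M = rng.standard_normal((16, 8))           # 16 slots, 8-dim words
k = M[3] + 0.01 * rng.standard_normal(8)   # noisy key close to slot 3
r = read(M, k, beta=50.0)                  # sharp beta -> read is near M[3]
```

Because every step is a smooth function of the key and the memory contents, gradients flow through reads and writes, which is what lets the DNC learn its memory-access strategy from data.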

Researchers have developed a neuro-inspired analog computer that can train itself to become better at whatever tasks it performs. Experimental tests have shown that the new system, which is based on the artificial intelligence algorithm known as "reservoir computing," not only outperforms experimental reservoir computers that do not use the new algorithm on difficult computing tasks, but can also tackle tasks so challenging that they are considered beyond the reach of traditional reservoir computing.
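For readers unfamiliar with reservoir computing: a fixed, randomly connected nonlinear dynamical system (the "reservoir") is driven by the input, and only a linear readout of its states is trained. The paper's reservoir is an electro-optical delay line; the following is merely a conceptual software analogue, a minimal echo state network with a ridge-regression readout on a toy next-step prediction task, with sizes and constants chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=N)      # fixed random input mask
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u, collect states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)    # reservoir weights stay fixed
        states.append(x.copy())
    return np.array(states)

# Toy task: predict u(t+1) from the reservoir state (sine input).
t = np.arange(600)
u = np.sin(0.2 * t)
X = run_reservoir(u[:-1])
y = u[1:]

washout = 100                              # discard initial transient
Xw, yw = X[washout:], y[washout:]
# Ridge regression on the readout: the only trained part of the system
W_out = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(N), Xw.T @ yw)
pred = Xw @ W_out
rmse = np.sqrt(np.mean((pred - yw) ** 2))  # small training error
```

The appeal of the scheme is that the hard-to-model physical substrate never needs to be trained, only observed; the new result goes a step further by letting the physical system participate in its own training.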


Ref: Embodiment of Learning in Electro-Optical Signal Processors. Physical Review Letters (16 September 2016) | DOI: 10.1103/PhysRevLett.117.128301


Delay-coupled electro-optical systems have received much attention for their dynamical properties and their potential use in signal processing. In particular, it has recently been demonstrated, using the artificial intelligence algorithm known as reservoir computing, that photonic implementations of such systems solve complex tasks such as speech recognition. Here, we show how the backpropagation algorithm can be physically implemented on the same electro-optical delay-coupled architecture used for computation with only minor changes to the original design. We find that, compared to when the backpropagation algorithm is not used, the error rate of the resulting computing device, evaluated on three benchmark tasks, decreases considerably. This demonstrates that electro-optical analog computers can embody a large part of their own training process, allowing them to be applied to new, more difficult tasks.
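The advance here is training more than the readout: backpropagation adjusts parameters inside the nonlinear stage as well, and the paper shows this can be done physically on the delay-coupled hardware itself. The toy below only illustrates the mathematical idea in software, using a drastically simplified feedforward "reservoir" (one tanh layer) rather than a delay-coupled system; gradients of a squared error are propagated through the nonlinearity to update the input mask and bias alongside the readout. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

N, T = 50, 400
u = rng.uniform(-1, 1, size=(T, 1))
y = np.sin(3 * u).ravel()                # nonlinear target function

W_in = rng.standard_normal((1, N)) * 0.5  # trainable input mask
b = rng.standard_normal(N) * 0.1          # trainable bias
w_out = np.zeros(N)                       # trainable linear readout

def forward(u):
    h = np.tanh(u @ W_in + b)
    return h, h @ w_out

lr = 0.05
for step in range(3000):
    h, pred = forward(u)
    err = pred - y
    # Backpropagation: mean-squared-error gradients for every weight,
    # including those *before* the nonlinearity (via 1 - tanh^2)
    g_out = h.T @ err / T
    g_h = np.outer(err, w_out) * (1 - h ** 2)
    g_in = u.T @ g_h / T
    g_b = g_h.mean(axis=0)
    w_out -= lr * g_out
    W_in -= lr * g_in
    b -= lr * g_b

_, pred = forward(u)
mse = np.mean((pred - y) ** 2)            # shrinks as training proceeds
```

In the paper the analogous gradient signals are generated and applied by the electro-optical hardware itself, with only minor changes to the original computing architecture.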