Differentiable neural computer


A differentiable neural computer (DNC) is a recurrent artificial neural network architecture with an autoassociative memory. The model was published in 2016 by Alex Graves et al. of DeepMind.
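As a rough illustration of what the autoassociative memory does, the DNC of Graves et al. addresses its external memory by content: a read key emitted by the controller is compared to every memory row by cosine similarity, and a softmax sharpened by a strength parameter turns those similarities into read weights. The numpy sketch below shows just this content-lookup step; the function name content_lookup and the toy memory contents are illustrative assumptions, not code from the paper.

    import numpy as np

    def content_lookup(memory, key, beta):
        # Cosine similarity between the read key and every memory row
        # (the small epsilon guards against division by zero).
        sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
        # Sharpened softmax: larger beta concentrates the weighting on the best match.
        w = np.exp(beta * sim)
        return w / w.sum()

    # Toy memory: four slots, each holding a 3-dimensional word.
    M = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.7, 0.7, 0.0]])
    w = content_lookup(M, key=np.array([1.0, 0.1, 0.0]), beta=10.0)
    read_vector = w @ M   # a differentiable blend of the best-matching rows

Because every step here is differentiable, gradients can flow through the read weights, which is what allows the memory accesses themselves to be learned by backpropagation.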

So far, DNCs have been demonstrated only on relatively simple tasks, ones that could have been solved by conventional programming decades ago. Unlike a conventional program, however, a DNC does not need to be programmed for each problem it is applied to; it can instead be trained. Its attention-based memory allows the user to feed in complex data structures such as graphs sequentially and recall them for later use. Furthermore, DNCs can learn some aspects of symbolic reasoning and apply them to working memory. Some experts see promise that DNCs can be trained to perform complex, structured tasks and to address big-data applications that require some form of reasoning, such as generating video commentaries or semantic text analysis.
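To make the idea of feeding a graph in sequentially and recalling it later concrete, here is a hedged sketch in the same spirit: each edge is written to memory as one row, and a partial query then retrieves the matching edge by content. The one-hot edge encoding and all names here are illustrative assumptions; in a real DNC the controller network learns both the representations and the read/write keys end to end.

    import numpy as np

    def lookup(memory, key, beta=20.0):
        # Content-based read: sharpened softmax over cosine similarity with each row.
        sim = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
        w = np.exp(beta * sim)
        return w / w.sum()

    # Encode each edge of a small graph as one memory row: [source | target] one-hots.
    nodes = {"A": 0, "B": 1, "C": 2}
    def edge_vec(src, dst):
        v = np.zeros(2 * len(nodes))
        v[nodes[src]] = 1.0
        v[len(nodes) + nodes[dst]] = 1.0
        return v

    # "Feed the graph sequentially": store the edges A->B and B->C one at a time.
    memory = np.stack([edge_vec("A", "B"), edge_vec("B", "C")])

    # Recall with only the source half of a query filled in: "where does B lead?"
    query = np.zeros(2 * len(nodes))
    query[nodes["B"]] = 1.0
    recalled = lookup(memory, query) @ memory
    print(recalled[len(nodes):])   # ~[0, 0, 1]: the target half points at node C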

It has been demonstrated, for example, that a DNC can be trained to navigate a variety of rapid transit systems and then apply what it has learned to get around on the London Underground. A neural network without memory would typically have to learn each transit system from scratch. On graph traversal and sequence-processing tasks with supervised learning, DNCs performed better than alternatives such as long short-term memory networks or neural Turing machines. On a block puzzle problem inspired by SHRDLU and approached with reinforcement learning, a DNC trained via curriculum learning learned to make plans and performed better than a traditional recurrent neural network.

