In a recent paper ("Neural Episodic Control", Pritzel et al., March 2017), researchers at Google's DeepMind laboratory (London) present a memory model that combines neural networks with a sophisticated key-value memory known as a differentiable neural dictionary. The resulting system learns considerably faster (measured in environment interactions) on the Atari-game benchmark that highlighted Google's ground-breaking deep-learning research in 2015. The two most appealing features of neural episodic control are: 1) the combination of long- and shorter-term memory in the same system (akin to the interplay between neocortex and hippocampus in the mammalian brain), and 2) a completely differentiable model, meaning that the full power of automatic differentiation in deep-learning frameworks can be exploited to train the memory end-to-end.
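To give a flavour of the central data structure, here is a minimal NumPy sketch of a differentiable neural dictionary lookup. This is a hypothetical simplification (class name, parameters, and the inverse-distance kernel choice are this sketch's assumptions, not necessarily the exact formulation in the paper): keys are state embeddings, values are scalar value estimates, and a lookup returns a kernel-weighted average over the k nearest stored keys.

```python
import numpy as np

class DND:
    """Simplified sketch of a differentiable neural dictionary:
    an append-only key-value memory queried by kernel-weighted
    k-nearest-neighbour lookup."""

    def __init__(self, key_dim, k=2, delta=1e-3):
        self.keys = np.empty((0, key_dim))
        self.values = np.empty(0)
        self.k = k          # number of neighbours used per lookup
        self.delta = delta  # kernel smoothing constant (avoids div-by-zero)

    def write(self, key, value):
        # Append one (key, value) pair to the memory.
        self.keys = np.vstack([self.keys, key])
        self.values = np.append(self.values, value)

    def lookup(self, query):
        # Squared distances from the query to every stored key.
        dists = np.sum((self.keys - query) ** 2, axis=1)
        nn = np.argsort(dists)[: self.k]          # k nearest neighbours
        w = 1.0 / (dists[nn] + self.delta)        # inverse-distance kernel
        w /= w.sum()                              # normalise weights
        return float(np.dot(w, self.values[nn]))  # weighted value estimate

dnd = DND(key_dim=2, k=2)
dnd.write(np.array([0.0, 0.0]), 1.0)
dnd.write(np.array([1.0, 0.0]), 3.0)
q = dnd.lookup(np.array([0.1, 0.0]))  # query near the first key
print(q)
```

Because the lookup is a smooth function of the query and the stored keys, gradients can flow through it; that is what makes the full memory trainable end-to-end in a framework like TensorFlow.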
In this project, the student(s) will reimplement the Neural Episodic Controller in TensorFlow and then apply it to a simulated robotic task of their choosing, such as navigation.
IMPORTANT: If you sign up for this project, please send a) your CV (including a transcript with all of your college grades), and b) a brief explanation of WHY you want to do this particular project to Prof. Keith Downing (firstname.lastname@example.org).