Interview with Austin Marshall, Numenta

Our past Technical Chair interviewed Numenta’s Austin Marshall about HTM and Numenta’s view of Neural Networks and AI.

It has been 12 years since the publication of On Intelligence by Jeff Hawkins. In this book, the founder of Numenta set out his vision for AI. Now everybody talks about AI as it transforms our lives. Did this book come too early? What was the missing link? Why did it take twelve years for AI to mature?

AM) Wow! Has it really been that long?! I first read On Intelligence as a neuroscience grad student and found it to offer a refreshing perspective that certainly helped steer my career. In On Intelligence, Jeff proposes a simple and straightforward theory of intelligence in terms of the structure and circuitry of the neocortex. Jeff also makes the claim that the approaches to AI at the time were not on track to making computers intelligent. Specifically, Jeff challenges the commonly held belief that computers will be intelligent when they are powerful enough. The timing of his claim is notable: in the years since On Intelligence was published, there has been renewed interest in applying artificial neural networks, catalyzed in part by vast improvements in computing performance and yielding some impressive results and useful models. It has been interesting watching the fields of Artificial Intelligence and Neuroscience converge. There’s still so much to learn from the brain, and we’ve only barely scratched the surface in applying what we’ve learned. Despite its progress, 12 years later, the field of AI still feels very much in its infancy, with lots of room to mature.

How different is HTM (Numenta’s product) from the successful deep learning networks?

AM) Deep Learning is loosely based on a very simple interpretation of how neurons work. Looking back at traditional Artificial Neural Networks, a classical ANN features cells that are essentially a collection of weights plus a simple learning rule that adjusts those weights according to some heuristic. Deep Learning extends the classical ANN model by adding layers to the network topology and, in some cases, a more complex cell model, connectivity pattern, and learning rule. You get different results and properties by tweaking one or more of those parameters, and in the end you typically get a highly specialized network that doesn’t resemble anything that could be explained in terms of neuroscience and doesn’t help us understand the brain any better.
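As a point of reference, here is a minimal sketch of the kind of classical ANN cell described above: just a weight vector plus a simple learning rule. The perceptron rule and the class name are used purely for illustration and are not Numenta code.

    import numpy as np

    # A single classical ANN "cell": a weight vector plus a simple learning rule.
    class SimpleUnit:
        def __init__(self, n_inputs, lr=0.1):
            self.weights = np.zeros(n_inputs)
            self.bias = 0.0
            self.lr = lr                      # learning rate

        def predict(self, x):
            # Weighted sum followed by a threshold (step) activation.
            return 1 if np.dot(self.weights, x) + self.bias > 0 else 0

        def learn(self, x, target):
            # Perceptron rule: nudge weights in proportion to the error.
            error = target - self.predict(x)
            self.weights += self.lr * error * x
            self.bias += self.lr * error

    # Learning logical AND from a handful of examples.
    unit = SimpleUnit(n_inputs=2)
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(10):
        for x, y in data:
            unit.learn(np.array(x), y)
    print([unit.predict(np.array(x)) for x, _ in data])   # expected [0, 0, 0, 1]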

HTM, on the other hand, models the columnar and laminar structure of the neocortex and incorporates a neuron model that more closely resembles a pyramidal cell with differing types of connections. While HTM does incorporate some aspects of traditional Neural Networks, as a whole it’s very different from Deep Learning.
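To make the contrast concrete, the following is a much-simplified sketch of the HTM neuron idea described above: a proximal (feed-forward) segment plus multiple distal (contextual) segments that can put a cell into a predictive state. The class, names, and thresholds are illustrative only, not Numenta’s implementation.

    # Simplified sketch of an HTM-style neuron; thresholds are made up.
    ACTIVATION_THRESHOLD = 3   # active synapses needed for a distal segment to fire

    class HTMCellSketch:
        def __init__(self):
            self.proximal_synapses = set()   # presynaptic feed-forward inputs
            self.distal_segments = []        # each segment is a set of presynaptic cells

        def feed_forward_overlap(self, active_inputs):
            # Columns compete on this overlap; winning columns activate their cells.
            return len(self.proximal_synapses & active_inputs)

        def is_predictive(self, previously_active_cells):
            # A cell becomes predictive ("depolarized") when any distal segment
            # sees enough of the cells that were active at the previous time step.
            return any(len(segment & previously_active_cells) >= ACTIVATION_THRESHOLD
                       for segment in self.distal_segments)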

One important aspect is that HTM performs online learning, typically in a streaming scenario in which order and timing are important. Deep Learning separates learning and inference, requiring training to be done offline in batches. Deep Learning needs very large data sets during training before producing meaningful results, and once trained, a neural network remains static until it is retrained or replaced. With HTM, by contrast, a network is updated with every new data point, can begin making inferences immediately, and adjusts over time as patterns in its input change.
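The difference in workflow can be sketched with toy stand-ins; the "models" below are just running means and only illustrate the shape of batch versus streaming learning, not HTM or Deep Learning themselves.

    import random

    # Batch style: collect data, train once offline, then the model stays static.
    history = [random.gauss(10, 1) for _ in range(1000)]
    batch_estimate = sum(history) / len(history)          # "training" happens here
    # ...batch_estimate is frozen until the model is retrained on a new batch.

    # Streaming style: update with every new point and predict immediately.
    streaming_estimate, n = 0.0, 0
    for x in (random.gauss(10, 1) for _ in range(1000)):  # order/timing preserved
        prediction = streaming_estimate                   # inference before learning
        n += 1
        streaming_estimate += (x - streaming_estimate) / n  # incremental update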

What parts of the brain does HTM model well, and which ones does it not?

AM) HTM models one layer in the neocortex. Cells in HTM are organized into columns with common feed-forward activation properties and a pattern of connectivity incorporating inhibition, dendritic segments, and synaptic weights. This structure allows HTMs to learn and retain high-order sequences in a way that is consistent with the biology. You can read more about how neurons learn high-order sequences in the paper “Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex”, recently published in Frontiers in Neural Circuits (http://journal.frontiersin.org/article/10.3389/fncir.2016.00023/full). In that paper, we also demonstrate the robustness and fault tolerance of HTM by running a simulation in which we train a network and then deactivate a certain percentage of cells, with minimal impact on performance following a brief recovery period. In that sense, HTM models neuroplasticity, in which the brain rewires itself by forming new connections.
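That fault-tolerance property can be illustrated in miniature: because HTM representations are sparse and distributed, removing a fraction of cells degrades pattern matching only gradually. The toy below uses made-up numbers and a bare set-overlap check, purely to illustrate the idea, not the paper’s actual simulation.

    import random

    random.seed(0)
    num_cells, active_per_pattern, match_threshold = 2048, 40, 20

    # A learned pattern: a small set of active cells out of a large population.
    stored = set(random.sample(range(num_cells), active_per_pattern))

    for fraction_dead in (0.0, 0.1, 0.25, 0.5):
        dead = set(random.sample(range(num_cells), int(num_cells * fraction_dead)))
        surviving = stored - dead                       # cells still able to fire
        recognized = len(surviving) >= match_threshold  # enough overlap to match
        print(f"{fraction_dead:.0%} cells disabled -> "
              f"{len(surviving)} of {active_per_pattern} active, match={recognized}")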

Can HTM play Go and win?

AM) HTM is particularly well suited for learning from patterns in streaming data and adapting to changes in those patterns over time. What makes Go interesting to AI researchers is its relative complexity. As in Chess, each board state in Go can be considered on its own, regardless of prior states, and at each iteration a player selects a move that gains them an advantage over their opponent until one player eliminates possible moves for their opponent. At each iteration, the search space for computing the move that gives you the best advantage is far too large to explore exhaustively, so you have to get creative. I suspect a good Go player would do well to avoid playing in a predictable manner. For that reason, I don’t know that HTM is a good fit for Go, at least not on its own, and not in its current state. I’d be interested to see someone try, though!

Does the HTM model include Reinforcement Learning functionality?

AM) HTM on its own is not a Reinforcement Learning model and the type of scenario in which RL would be useful is not necessarily the same scenario in which you would run HTM. Reinforcement Learning is typically used when the environment is known, or at least observable, and there is some measure of optimality with respect to how an agent moves about in that environment. Fundamentally, HTMs are learning about the world with every new data point presented to them, but there is no notion of exploration or optimization of movement within an environment built into HTM theory alone.

Bio:

Austin Marshall
Engineer
Numenta

Austin is an engineer at Numenta with a background in cognitive science and intelligent systems. When he’s not programming computers, Austin enjoys spending time with his family, riding bikes, and cooking.