Big ideas: Focus on computation

Matteo Carandini has an editorial in the latest issue of Nature Neuroscience arguing that we should focus our energy on studying neural computation. In this context, neural computation is understood as an intermediate level of complexity between low-level neural circuits and high-level behavior. He argues that trying to go directly from physical descriptions of circuits to large-scale behavior is too large a step to take, and that focusing on the intermediate computational level should help us understand both circuits and behavior.

He illustrates this with an analogy to computer systems. Focusing on the physical substrate of computation (hardware) or on its high-level manifestations (software) doesn’t really tell us what a computer does. Rather, an intermediate-level description, focusing on algorithms, computer languages and operating systems, lets us see both “the forest” and “the trees”.

Matteo gives several examples of what he considers canonical neural computations. These include linear filtering and normalization, as well as:

thresholding and exponentiation, recurrent amplification, associative learning rules, cognitive spatial maps, coincidence detection, gain changes resulting from input history and cognitive demands, population vectors, and constrained trajectories in dynamical systems.
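A couple of these are easy to make concrete: linear filtering followed by thresholding and exponentiation is just the classic LN cascade. Here’s a minimal Python sketch; the filter shape, threshold, and exponent are made-up placeholders, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(1000)  # toy white-noise stimulus

# Linear filtering: convolve the stimulus with a biphasic temporal kernel.
t = np.arange(20)
kernel = np.exp(-t / 5.0) * np.sin(t / 3.0)
drive = np.convolve(stimulus, kernel, mode="same")

# Thresholding and exponentiation: a static output nonlinearity
# turning the filtered drive into a non-negative firing rate.
rate = np.maximum(drive - 0.5, 0.0) ** 2.0
```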

I agree with the general idea. Looking at intermediate levels of representation and thinking about things in terms of what is computed is, I think, more enlightening than looking only at the biochemical or psychophysical levels.

The real driver of where we are looking is arguably technological advance. At the high level, one could argue that tools have remained stagnant for the past ~20 years, ever since the development of fMRI. At the low and intermediate levels, however, technology has advanced tremendously: genetic engineering, chemical manipulations, two-photon imaging, glutamate and GABA uncaging, optogenetics and array recordings have facilitated investigations at micro and meso scales.

So, if anything, these developments probably imply that people will start focusing on smaller scales in the near future. Actually, this is already the case: studying vision in mice has gone from an almost laughable idea to an industry in the last few years.

Matteo gives a roadmap for understanding neural computation:

The task ahead is to discover and characterize more neural computations and to find out how these work in concert to produce behavior. How shall we proceed? The known neural computations were discovered by measuring the responses of single neurons and neuronal populations and relating these responses quantitatively to known factors (for example, sensory inputs, perceptual responses, cognitive states or motor outputs). This approach clearly indicates a way forward, which is to record the spikes of many neurons concurrently in multiple brain regions in the context of a well-defined behavior.

If I’m reading this correctly, he’s saying that a lot of the insight in neuroscience so far has come from systems identification in single neurons. In the context of vision and other sensory systems, that means receptive-field-ology. Fair enough. In the last sentence, I think he’s saying that we should therefore do receptive-field-ology with multi-electrode arrays (MEAs), which is not a bad idea in itself, but which I don’t think is the way forward.
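To spell out what I mean by receptive-field-ology: the workhorse method is reverse correlation, estimating a receptive field as the spike-triggered average of a white-noise stimulus. A toy sketch on simulated data (the “true” filter and the Poisson spiking model are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, lag = 10_000, 15

# White-noise stimulus and a hidden "true" temporal receptive field.
stimulus = rng.standard_normal(n_steps)
true_rf = np.exp(-np.arange(lag) / 4.0) * np.sin(np.arange(lag) / 2.0)

# Simulate Poisson-ish spiking driven by the filtered stimulus.
drive = np.convolve(stimulus, true_rf)[:n_steps]
spikes = rng.poisson(np.maximum(drive, 0.0))

# Spike-triggered average: the stimulus history preceding each time bin,
# weighted by the spike count in that bin (most recent sample first).
sta = np.zeros(lag)
for t in range(lag, n_steps):
    sta += spikes[t] * stimulus[t - lag + 1 : t + 1][::-1]
sta /= spikes[lag:].sum()  # sta should now approximate true_rf
```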

Don’t get me wrong, I love MEAs. They make recording so much easier than single-unit recording. But I don’t think they are a transformative technology. One issue is that neuronal sampling in MEAs is currently very sparse; you can’t equate functional connectivity with “real” anatomical connectivity, and that’s a drag.

More fundamentally, though, the vast majority of analyses done on MEA data are no different from the kind people have been doing with single electrodes since the Hubel & Wiesel era. There are some tools that let us think of populations of neurons as collective entities rather than as a bunch of unrelated units: Markov models, Ising models, phase-space approaches. But currently, much of the research enabled by MEAs is incremental rather than transformative.
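To illustrate the phase-space flavor of these tools: the usual move is to bin the population’s spike counts and project them onto a few principal components, so the population traces a trajectory through a low-dimensional state space. A sketch with random stand-in data, not a real MEA recording:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_bins = 50, 200

# Fake population activity: a few shared latent signals plus noise,
# standing in for binned spike counts from an MEA recording.
latents = np.cumsum(rng.standard_normal((3, n_bins)), axis=1)
loadings = rng.standard_normal((n_neurons, 3))
counts = loadings @ latents + rng.standard_normal((n_neurons, n_bins))

# PCA via SVD on the mean-centered data; rows of "trajectory" trace
# the population state through a low-dimensional phase space over time.
centered = counts - counts.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
trajectory = (u[:, :3].T @ centered).T  # shape: n_bins x 3
```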

There are certainly questions, however, that are now attackable at the computational scale and that have been bugging people for a long time. For example, how is divisive normalization implemented in cortex? I mentioned a couple of posts ago that this question could presumably be attacked with current methods (optogenetics and pharmacology) in mouse models. Furthermore, the same methods could be used to figure out how a number of other operations (max-pooling, AND-like tuning) hypothesized to be implemented through the same basic canonical circuit are actually realized (Kouh and Poggio 2008).
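For reference, the standard form of the normalization equation divides each unit’s exponentiated drive by a semi-saturation constant plus the pooled drive of the whole population. A minimal sketch; the parameter values are placeholders:

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0, gamma=1.0):
    """Standard-form normalization: exponentiate each unit's drive,
    then divide by a semi-saturation constant plus the pooled
    (summed) exponentiated drive of the population."""
    pooled = np.sum(drive ** n)
    return gamma * drive ** n / (sigma ** n + pooled)

# Gain control: doubling every input changes the outputs
# far less than twofold, because the pool grows along with the drive.
print(divisive_normalization(np.array([0.5, 1.0, 2.0])))
print(divisive_normalization(np.array([1.0, 2.0, 4.0])))
```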

In any case, I think it’s a pretty thought-provoking article. Strangely enough, my RSS feed was bombarded with “big idea” articles from different people. Labrigger reports on the upcoming debate on connectomics between Seung and Movshon. Tony’s reputation as a pugilist precedes him; it should be a hoot. Oscillatory Thoughts reports on Big Data and the automation of science. You should read this article on a robot scientist (not mentioned in the linked blog post); it’s pretty awesome. And in almost completely unrelated news, there’s a great article by Pitkow and Meister in the latest Nature Neuroscience on efficient coding in the retina and how it’s mostly due to nonlinearities rather than to receptive field organization.


Carandini, M. (2012). From circuits to behavior: a bridge too far? Nature Neuroscience, 15(4), 507-509. DOI: 10.1038/nn.3043

Kouh, M., & Poggio, T. (2008). A canonical neural circuit for cortical nonlinear operations. Neural Computation, 20(6), 1427-1451. PMID: 18254695
