Enhanced spatial resolution during locomotion

The brain uses a lot of energy. Neurons communicate with each other electrically via action potentials – spikes – a process that requires a ton of ATP to power ion pumps on the cell membrane. Whatever energy the brain uses, it must gather from the outside world; and the world is a scary place, filled with dangers.

The brain is thus under intense evolutionary pressure to use energy efficiently. To do this well requires a tight coupling between action and perception. When a certain part of the visual field is relevant to action, neurons that are sensitive to this part of the visual field respond more robustly. They increase in sensitivity and firing rate – as well as decorrelate – and that generally leads to better performance, in a process called attention.

Attention enhances the representation of a certain – relevant – part of the visual field at the expense of the rest. Spikes are better allocated; they’re deployed when and where they’re needed.

Higher spatial resolution during locomotion

During my postdoc with Dario Ringach at UCLA, we wanted to understand how this process works, so we looked at a related phenomenon in mice; the paper was just published in J Neurosci [PDF]. When a mouse moves around, it exposes itself to danger, and there’s a correspondingly large increase in the gain of neurons in its primary visual cortex during locomotion. It’s as though the visual cortex powers down when the mouse is stationary, and powers back up when it’s moving and in danger.

[Figure: Spike rate increases during locomotion, as decoding error decreases. The reverse is true when the mouse stops moving.]

Under fairly general assumptions about the neural code, increasing gain should mean that stimuli are better represented. We indeed found that all stimuli were better represented during locomotion. This was very tightly coupled with the onset of locomotion (above). The mouse starts moving; visual cortex powers up; and it sees better.
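This intuition is easy to check in simulation. Here’s a toy sketch of my own (not the paper’s analysis, and all tuning curves are made up): a population of Poisson neurons with broad orientation tuning, read out by a maximum-likelihood decoder, at two gain levels.

```python
# Toy check: under Poisson spiking, scaling all firing rates by a gain
# factor lowers the error of a maximum-likelihood decoder.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli, n_trials = 20, 8, 400

prefs = np.linspace(0, np.pi, n_neurons, endpoint=False)  # preferred orientations
stims = np.linspace(0, np.pi, n_stimuli, endpoint=False)

# Broad, made-up orientation tuning curves: ~0.5 to ~2.5 spikes per trial
f = 0.5 + 2.0 * np.exp(np.cos(2 * (prefs[:, None] - stims[None, :])) - 1)

def error_rate(gain):
    """Fraction of trials misdecoded by a maximum-likelihood decoder."""
    rates = gain * f                      # multiplicative gain on every neuron
    log_f, b = np.log(rates), -rates.sum(axis=0)
    errors = 0
    for j in range(n_stimuli):
        counts = rng.poisson(rates[:, j], size=(n_trials, n_neurons))
        errors += ((counts @ log_f + b).argmax(axis=1) != j).sum()
    return errors / (n_trials * n_stimuli)

print(error_rate(1.0))   # "stationary"
print(error_rate(2.0))   # "locomotion": same tuning, higher gain, fewer errors
```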

[Figure: Improvement in decoding efficiency is concentrated at higher spatial frequencies.]

We found, surprisingly, that the increase in gain is concentrated at higher spatial frequencies. That means not only is contrast better represented – blacks are blacker, whites whiter – but the effective spatial resolution of the representation increases. In other words, the mouse sees more sharply – and by a wide margin, too.

That’s pretty surprising, and it might help explain a similar phenomenon in human attention – that attention appears to increase spatial resolution at the attended location. In times of need, the brain allocates extra energy to relevant stimuli, and that means that it can perceive visual stimuli more clearly. It’s an efficient and adaptive code.

Optimal decoding is easy

There’s a twist, and it’s a rather subtle, technical one that is buried in our paper. One worry about adaptive coding is that it changes what the neural code means. With an adaptive code, if I measure 2 spikes from a neuron while the mouse is stationary, it means a different thing than when the mouse is moving.

Now if I change the way that I encode stimuli in primary visual cortex, does that mean that every stage higher up must be aware of this? Do I need a new decoder? That would undermine the efficiency of the new code; sure, spikes are more efficiently allocated, but now twice as many neurons downstream need to be maintained to read the two different codes.

It turns out that for a very special type of modulation – changing the gain of neurons – you don’t need to do anything special to read the modulated code; you only need one, simple decoder. It falls out of an analysis of optimal decoding of Poisson codes that was pioneered by Jazayeri and Movshon (2006).


Let’s say that I present a stimulus \theta, chosen out of a family of stimuli \theta_j. I measure the number of spikes n_i output by each of N neurons in response to this stimulus. The neurons have tuning curves f_i(\theta_j) and follow Poisson statistics. I want to build a decoder – equivalent to a softmax classifier – that optimally reads this code and determines which stimulus was presented.

By Bayes’ theorem, the log likelihood of the stimulus, given the observations, is given by:

\log p(\theta = \theta_j | n_1, \ldots, n_N) = L(\theta_j) = \sum_i n_i \log f_i(\theta_j) + b(\theta_j)

\sum_i n_i \log f_i(\theta_j) says that to decode, I should take a sum of spike counts weighted by the log tuning curves of the corresponding neurons. In this scheme, every spike counts as a vote; when I measure neuron A firing 10 spikes, and I know neuron A likes vertical stimuli, that’s 10 votes for “the stimulus is vertical”.

You can interpret the b(\theta_j) as a normalization constant that compensates for how well a stimulus is represented. When a stimulus is over-represented in the population, I’ll get a lot of votes from neurons tuned to this stimulus. The normalization constant “stuffs the ballots” of under-represented stimuli so that all stimuli, a priori, have an equal chance of being decoded. In statistical terms, it calibrates the decoder.
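Here’s a minimal numerical sketch of the vote-counting (the numbers are made up for illustration): neuron A likes vertical and fires 10 spikes, so “vertical” collects its votes, while b(\theta_j) – written here as the Poisson normalizer – calibrates the two candidates against each other.

```python
# Toy decoder for the formula above: spikes vote through log tuning curves,
# and b(theta) calibrates away over-represented stimuli.
import numpy as np

# Mean rates f_i(theta_j): 3 neurons x 2 stimuli (vertical, horizontal)
f = np.array([[10.0,  1.0],   # neuron A: likes vertical
              [ 1.0, 10.0],   # neuron B: likes horizontal
              [ 5.0,  5.0]])  # neuron C: untuned

log_f = np.log(f)
b = -f.sum(axis=0)            # Poisson normalizer, one per stimulus

n = np.array([10, 2, 5])      # observed spike counts on one trial
L = n @ log_f + b             # log likelihood of each stimulus
print(L.argmax())             # 0: decoded as 'vertical'
```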

When the neural code is modulated multiplicatively, the tuning curves themselves are modified multiplicatively. But what matters for the decoder is the log of the tuning curves. The multiplicative factor comes out of the \log f_i(\theta_j) term as an additive offset and gets absorbed into the normalization constant b(\theta_j).
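Assuming a per-neuron gain g_i during locomotion, and writing b(\theta_j) as the Poisson normalizer, the algebra is one line:

```latex
\begin{align*}
L_\text{run}(\theta_j)
  &= \sum_i n_i \log\left(g_i f_i(\theta_j)\right) - \sum_i g_i f_i(\theta_j) \\
  &= \underbrace{\sum_i n_i \log g_i}_{\text{same for all } \theta_j}
   \;+\; \sum_i n_i \log f_i(\theta_j)
   \;\underbrace{-\,\sum_i g_i f_i(\theta_j)}_{b_\text{run}(\theta_j)}
\end{align*}
```

The first term is identical for every candidate stimulus, so it can’t change which stimulus wins the vote; only the bias term needs updating.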

So we simply need to change the baseline of the decoder to recalibrate it, depending on the state of the system. What that means, in this case, is that we have to slightly downvote high-spatial-frequency signals during locomotion. That’s very straightforward to do in neural hardware, via, e.g., an efference copy of the locomotion signal and two sets of synapses with different biases in the decoding neural population.

Indeed, empirically, you get decoding accuracy just as good with a single decoder that changes its bias during running as with two separate decoders, one tuned to each condition:

[Figure: You get as good decoding accuracy with a single decoder as with two decoders for the running and stationary states.]
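You can see why with synthetic data. In this sketch (made-up tuning curves and gains, not the paper’s data), a decoder that keeps its stationary weights and only swaps its bias makes exactly the same trial-by-trial decisions as a decoder fully retrained on the running state.

```python
# Sketch with synthetic data: a single decoder whose bias is swapped by state
# makes the same decisions as a decoder refit per state.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stimuli = 30, 8

f = rng.uniform(1.0, 6.0, size=(n_neurons, n_stimuli))  # stationary tuning
g = rng.uniform(1.5, 3.0, size=n_neurons)               # per-neuron locomotion gain
f_run = g[:, None] * f                                  # tuning while running

# Spike counts from 200 "running" trials of stimulus j = 3
counts = rng.poisson(f_run[:, 3], size=(200, n_neurons))

# Decoder 1: fully retrained for the running state
L_retrained = counts @ np.log(f_run) - f_run.sum(axis=0)

# Decoder 2: stationary weights, only the bias recalibrated for running
L_rebias = counts @ np.log(f) - f_run.sum(axis=0)

# The two log likelihoods differ per trial only by a stimulus-independent
# constant, so the decoded stimulus is identical on every trial.
print(np.array_equal(L_retrained.argmax(axis=1), L_rebias.argmax(axis=1)))  # True
```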

What that implies is that by more efficiently allocating spikes, you get higher spatial resolution when it really matters, without needing special hardware to understand the adaptive neural code.

The paper is out in J Neurosci now:

P. J. Mineault, E. Tring, J. T. Trachtenberg, D. L. Ringach (2016). Enhanced Spatial Resolution During Locomotion and Heightened Attention in Mouse Primary Visual Cortex. J Neurosci, 36(24):6382–6392. [PDF]

