I’m showing an SFN poster today on RF plasticity in V4; here are the reprint and the abstract.
Gilbert had some interesting ideas in the early ’90s about reorganization of receptive fields following removal of input. The idea is that if you have a scotoma on the retina, or any other kind of permanent denervation, receptive fields reorganize around the deprived location within minutes to hours. This increases the amount of information transmitted in the new situation.
He extended this idea to artificial scotomas, which occur when one area of the visual field has zero contrast while other areas have high contrast: say, a gray square against a grating background. The idea was that this virtual lesion would trigger rapid reorganization of RFs in V1, much like a real lesion would.
DeAngelis then did further experiments on the subject and showed that the claimed reorganization could be explained by a simple change in gain that masqueraded as an increase in the size of V1 RFs toward the non-lesioned area.
Our idea was to extend this research to area V4. The idea goes: even if the only changes in V1 neurons are gain-related, that might translate into effective changes in RF shape downstream.
Here’s a little cartoon of the idea (sorry, I only have access to Paint from the hotel computer). As an adapting stimulus causes spatially coherent changes in the gain of V1 cells, a neuron downstream that integrates over a large area of space sees its receptive field shift. I think there are some interesting consequences for neural coding: in a hierarchical scheme, gain changes in upstream areas can bubble up and cause much less trivial changes downstream.
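The arithmetic of the cartoon is easy to sketch. Here’s a toy version, assuming a downstream cell that linearly pools Gaussian V1 subunit RFs tiling one spatial dimension; all numbers are invented for the demo:

```python
import numpy as np

# Toy model: a downstream (V4-like) cell linearly pools V1 subunits
# with Gaussian spatial RFs tiling one dimension of visual space.
x = np.linspace(-10, 10, 2001)            # visual-space axis (deg)
centers = np.linspace(-4, 4, 9)           # V1 subunit RF centers
sigma = 1.0                               # V1 subunit RF width
subunits = np.exp(-(x[None, :] - centers[:, None])**2 / (2 * sigma**2))
w = np.ones(len(centers))                 # equal pooling weights

def centroid(rf):
    """Center of mass of an RF profile along x."""
    return np.sum(x * rf) / np.sum(rf)

rf_before = w @ subunits

# Spatially coherent adaptation: suppress the gain of subunits on the
# left, as if that half of the field had been adapted.
gain = np.where(centers < 0, 0.4, 1.0)
rf_after = (w * gain) @ subunits

print(centroid(rf_before))   # symmetric pooling: centroid near 0
print(centroid(rf_after))    # centroid shifts toward the unadapted side
```

The point is that no individual subunit RF moves; only their gains change, yet the pooled RF's center of mass shifts.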
Unfortunately, this idea didn’t really pan out for spikes. We did find changes in receptive field gain but not in RF position. I had big plans to roll out an MCMC-based RF-tracking methodology, but since I didn’t see any changes at all with slightly cruder methods (maximum likelihood in highly parametrized generalized linear models), I gave up on the idea and focused on LFPs instead (more on that later).
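For the curious, the “cruder” approach is just maximum likelihood in a Poisson GLM. This is not our actual analysis code, only a minimal sketch on simulated data, with a plain gradient ascent in place of whatever solver you’d really use (the Poisson log-likelihood with an exponential link is concave, so it converges):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated mapping experiment: white-noise stimulus frames and Poisson
# spike counts driven by a hidden linear RF through an exponential
# nonlinearity (a standard Poisson GLM, here with no history terms).
n_trials, n_pix = 5000, 16
X = rng.standard_normal((n_trials, n_pix))
k_true = 0.5 * np.exp(-0.5 * ((np.arange(n_pix) - 8) / 2.0)**2)
b_true = -1.0
y = rng.poisson(np.exp(X @ k_true + b_true))

# Maximum likelihood by gradient ascent on the average log-likelihood.
k, b = np.zeros(n_pix), 0.0
lr = 0.05
for _ in range(5000):
    rate = np.exp(X @ k + b)
    k += lr * (X.T @ (y - rate)) / n_trials
    b += lr * np.sum(y - rate) / n_trials

print(np.corrcoef(k, k_true)[0, 1])  # recovered RF resembles k_true
```

Tracking RF changes over time would then amount to refitting `k` in sliding windows, or putting a prior on its drift, which is where the MCMC plans came in.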
Talking to Curtis Baker, Greg DeAngelis, and others, though, I think I figured out the problem. I got so worked up about the methodology (I’m a computational guy, remember; that’s what I like) that I didn’t tweak the experimental paradigm enough to get good results. Initially we used natural images as the adaptation stimulus, and that showed promising results. Then we switched to better-controlled sparse orientation adaptation stimuli, because I wanted to be able to correlate the content of the images with the changes in the RFs, and I think they just didn’t drive the cells. Plus, the adaptation period was probably far too short.
Another problem I ran into is that the Utah array recordings we used weren’t as stable as they needed to be. Losing even 10% of spikes towards the end of a recording can easily mess up the analysis of such a dataset. So I had to reject most cells from the analysis, and that left only about 10 cells per recording day, which didn’t give the inferential power I was looking for. If anybody has insights on stabilizing Utah arrays somehow, I’d be very interested.
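The screening itself was nothing fancy. A sketch of the kind of drop-out criterion I mean, with an invented 10% threshold and toy data (not the actual rejection code):

```python
import numpy as np

def stable_units(spike_counts, max_drop=0.10):
    """Flag units whose firing rate holds up across the session.

    spike_counts: (n_units, n_bins) array of binned counts over the
    whole recording. A unit is rejected if its mean rate in the last
    fifth of the session falls more than `max_drop` below its mean
    rate in the first fifth -- a crude proxy for electrode drift or a
    slowly disappearing waveform. The threshold is arbitrary.
    """
    n = spike_counts.shape[1] // 5
    early = spike_counts[:, :n].mean(axis=1)
    late = spike_counts[:, -n:].mean(axis=1)
    return late >= (1.0 - max_drop) * early

# Toy check: unit 0 is stable, unit 1 loses over half its spikes.
rng = np.random.default_rng(1)
counts = np.stack([
    rng.poisson(5.0, size=1000),
    rng.poisson(np.linspace(5.0, 2.0, 1000)),
])
print(stable_units(counts))
```

Comparing rate alone can’t distinguish a drifting electrode from a cell that genuinely adapts, which is part of why the criterion ends up so conservative.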
So instead I mostly analyzed LFP receptive fields. You will notice that there are two distinct components in the LFP receptive fields estimated through reverse correlation: an initial component that is very broadly tuned for space, and a later component, of either the same or opposite polarity, that is much more tightly tuned. Both components have smoothly changing retinotopy across the array.
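Reverse correlation for a continuous signal like the LFP is just the stimulus-weighted average of the signal at each lag; with a white-noise stimulus that recovers the linear kernel. A toy sketch with an invented two-component ground truth, loosely mimicking the broad-early / narrow-late structure (not our stimulus or data):

```python
import numpy as np

rng = np.random.default_rng(2)

n_t, n_pos = 20000, 12          # time samples, stimulus positions
stim = rng.standard_normal((n_t, n_pos))

# Hidden ground truth: an early broad spatial profile and a later,
# narrower one of opposite polarity (all numbers invented).
pos = np.arange(n_pos)
broad = np.exp(-0.5 * ((pos - 6) / 3.0)**2)
narrow = -np.exp(-0.5 * ((pos - 6) / 0.8)**2)
lag_broad, lag_narrow = 2, 5

lfp = 0.5 * rng.standard_normal(n_t)           # measurement noise
lfp[lag_broad:] += stim[:n_t - lag_broad] @ broad
lfp[lag_narrow:] += stim[:n_t - lag_narrow] @ narrow

# Reverse correlation: cross-correlate the signal with the stimulus
# at each lag; rows of `kernels` are spatial profiles per lag.
kernels = np.array([(stim[:n_t - lag].T @ lfp[lag:]) / (n_t - lag)
                    for lag in range(1, 8)])

print(kernels.shape)   # (7, 12): the two components sit at their lags
```

The kernel at the early lag comes out broad, the one at the later lag narrow and inverted, which is the kind of two-component structure the poster shows.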
As it turned out, the broad component showed similar changes in peak RF position and size across the entire array, while the tightly tuned component showed much more local changes. What I think is happening is that the tightly tuned component integrates over a small chunk of cortex, say 250 microns, while the broad component integrates over a larger area. That adds a little twist to the Ringach/Shapley story about the cortical integration area of the LFP that I’d like to follow up on.
The next steps are pretty obvious:
- change the content of the adapting image to trigger changes as strongly and rapidly as possible
- increase the adaptation period
- modify the mapping stimulus to map receptive fields as efficiently as possible, possibly using online infomax methods or cruder trial and error
- continue working on estimating a well-defined MUA signal to increase the number of effective electrodes I can work with
- change the spike detection and tracking algorithms to account for nonstationary recordings
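To unpack the infomax bullet: the idea is to pick each probe to maximize the mutual information between the response and what we still don’t know about the RF. A toy 1D version, assuming a known Gaussian-bump spike-probability model and a gridded unknown RF center (every number here is invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

grid = np.arange(15.0)       # candidate RF centers == probe positions
theta_true = 7.0             # hidden RF center

def p_spike(x, theta):
    """Spike probability for a probe at x given RF center theta."""
    return 0.05 + 0.6 * np.exp(-0.5 * ((x - theta) / 1.5)**2)

# P[x, theta]: spike probability for every probe/center pair.
P = p_spike(grid[:, None], grid[None, :])
posterior = np.full(len(grid), 1 / len(grid))

for _ in range(200):
    # Infomax probe choice: MI between the binary response and theta,
    # i.e. H(E_theta[p]) - E_theta[H(p)], evaluated for every probe x.
    p_marg = P @ posterior
    mi = binary_entropy(p_marg) - binary_entropy(P) @ posterior
    x = grid[np.argmax(mi)]

    # Observe a (simulated) response and do the Bayes update.
    spike = rng.random() < p_spike(x, theta_true)
    like = p_spike(x, grid) if spike else 1 - p_spike(x, grid)
    posterior = posterior * like
    posterior /= posterior.sum()

print(grid[np.argmax(posterior)])   # posterior homes in on the center
```

The “cruder trial and error” alternative in the bullet amounts to replacing the MI criterion with a hand-tuned probing schedule; the Bayes update stays the same.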
Thanks to everyone who came to see the poster; your feedback was invaluable.