*Now with some 2011 updates*
In a few months, our lab will be doing permanent implant recordings in V4, and analyzing this data will be a major challenge. In contrast to single electrode recordings, the stimuli used for array recordings are usually not optimized for a given cell (say, centered on its receptive field), and the analyzer often has to figure out what a given cell is responsive to more or less from scratch. To prepare for this upcoming work, I downloaded the pvc-1 dataset from the CRCNS website, courtesy of the Ringach lab, which consists of array recordings in V1 in response to movies. These are my observations on how to get a basic understanding of the selectivity of the cells in this dataset, in preparation for more thorough systems identification.
Preprocessing the input
The pvc-1 dataset consists of long (> one hour) recordings from multiple electrodes in an array in response to 30-second color movie segments at a resolution of 320×240 and a frame rate of 30 Hz. There are more than 50 recordings, with each electrode probably sampling from 2 or more cells, so 100 good cells or more can be isolated from these recordings. Uncompressed as doubles, each movie takes up about 1.5 GB of memory, or 500 MB if you only care about luminance. Since at least 20 different movies are used in each dataset, that's 10 GB (luminance only) that you would need to load into RAM, which is, needless to say, a stupid idea. There's little hope of using systems identification methods on this type of dataset without first downsampling the movies by a huge factor. Now, if a cell's receptive field is much smaller than a movie frame and it responds to high spatial frequencies, then by downsampling we will lose the high spatial frequencies and be unable to identify the cell's input-output mapping. So we need a ballpark estimate of the position and size of the receptive field of each cell. We will then extract downsampled movie patches around the cell's receptive field center and run systems identification methods on those.
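As a back-of-the-envelope check on those figures (a quick sketch; I'm assuming 900 frames per 30-second segment):

%Rough memory footprint of one 30 s movie segment, stored as doubles
nframes = 30*30;                          %30 s at 30 Hz
bytes_color = 320*240*3*nframes*8;        %RGB: ~1.66e9 bytes (~1.5 GB)
bytes_lum   = 320*240*nframes*8;          %luminance only: ~0.55e9 bytes (~500 MB)
fprintf('color: %.2f GB, luminance: %.2f GB\n',bytes_color/2^30,bytes_lum/2^30);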
We might think of finding a cell's receptive field by spike-triggered averaging (reverse correlation), i.e., correlating the images with the cell's spike train, which can be done without loading all of the images into memory at the same time. This should, however, only work for simple cells. Spike-triggered covariance works on complex cells, but it would require us to load all of the images into memory, which is a no-go.
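For reference, the STA itself is just a spike-count-weighted average of frames, and it can be accumulated one movie at a time; a minimal sketch, where the loader function loadmovieandcounts is a hypothetical stand-in returning X (pixels × frames, one movie's worth) and spikecounts (spikes per frame at some fixed lag):

%Minimal streaming STA sketch (loadmovieandcounts is a hypothetical loader)
sta = 0; nspikes = 0;
for jj = 1:nmovies
    [X,spikecounts] = loadmovieandcounts(jj); %one movie's stimuli and counts at a time
    sta = sta + X*spikecounts(:);             %accumulate spike-weighted sum of frames
    nspikes = nspikes + sum(spikecounts);
end
sta = sta/nspikes;                            %average stimulus preceding a spike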
The solution I propose is to use spike-triggered averaging on nonlinear functions of the images. For this to work, these nonlinear functions should be orientation and spatial frequency selective, localized in space, and insensitive to spatial phase, which is essentially to say that they are idealizations of complex cells. I propose preprocessing each movie frame with a Gabor pyramid, composed of both odd and even Gabors at different orientations and several scales. Taking the norm of the responses of the odd filters and their corresponding even filters (the square root of the sum of squares) gives phase-insensitive responses to orientation. The spike-triggered average of these phase-insensitive Gabor pyramids should then (hopefully) show a big region of excitation at some spot in the images, at a certain orientation, which we will then declare to be the receptive field center. Later I will show that this delightfully hacky method works pretty well.
Here I rolled my own Gabor pyramid implementation, preprocessed every frame in the dataset, and stored the results as one .mat file per movie segment, like so:
%preprocess.m - Preprocess images
%movie ids (mids) and segment ids (sids) of all movie segments in the dataset
mids = [0,0,0,0,0,0,0,0,0,0, 0, 0, 0,1,1,1,1,1,1,1,1,1,1, 1, 1,2,2,2,2,2,2,2,2,2,2, 2, 2,3,3,3,3,3,3,3,3,3,3, 3, 3];
sids = [0,1,2,3,4,5,6,7,8,9,14,15,16,0,1,2,3,4,5,6,7,8,9,14,15,0,1,2,3,4,5,6,7,8,9,14,15,0,1,2,3,4,5,6,7,8,9,14,15];
ws = [];
for ii = 1:length(sids)
    nimages = length(dir(sprintf('movie_frames/movie%03d_%03d.images/*.jpeg',mids(ii),sids(ii))));
    ws = zeros(51200,nimages);
    for jj = 1:nimages
        A = imread(sprintf('movie_frames/movie%03d_%03d.images/movie%03d_%03d_%03d.jpeg',mids(ii),sids(ii),mids(ii),sids(ii),jj-1));
        %collapse color to luminance and compute the Gabor pyramid
        w = gaborpyramid(squeeze(sum(A,3)));
        ws(:,jj) = w;
        if mod(jj,100) == 0
            jj %progress indicator
        end
    end
    save(sprintf('movie_frames/movie%03d_%03d.images/ws.mat',mids(ii),sids(ii)),'ws','-v6');
end

%gaborpyramid.m
function [w] = gaborpyramid(im)
    %6 level gabor pyramid; even-phase responses in w1, odd-phase in w2
    w1 = [];
    w2 = [];
    im = downsample(im);
    for jj = 1:5
        im = downsample(im);
        [ws1,ws2] = gaborfilt(im);
        w1 = [w1;ws1];
        w2 = [w2;ws2];
    end
    w = [w1;w2];
end

function [ws1,ws2] = gaborfilt(im)
    [xi,yi] = meshgrid(-4:4,-4:4);
    %4 orientations
    ws1 = [];
    ws2 = [];
    for ii = 0:3
        coso = cos(ii*pi/4);
        sino = sin(ii*pi/4);
        %quadrature pair: even (cosine) and odd (sine) Gabors, mean-subtracted
        thefilt1 = cos((coso*xi+sino*yi)*.8).*exp(-(xi.^2+yi.^2)/2/2^2);
        thefilt2 = sin((coso*xi+sino*yi)*.8).*exp(-(xi.^2+yi.^2)/2/2^2);
        thefilt1 = thefilt1 - mean(thefilt1(:));
        thefilt2 = thefilt2 - mean(thefilt2(:));
        w1 = conv2(im,thefilt1,'same');
        w2 = conv2(im,thefilt2,'same');
        ws1 = [ws1;w1(:)];
        ws2 = [ws2;w2(:)];
    end
end

%downsample.m
function [im] = downsample(im)
    %blur with a binomial kernel, then decimate by 2 in each dimension
    im = conv2(im,[.25,.5,.25;.5,1,.5;.25,.5,.25]/4,'same');
    im = im(1:2:end,1:2:end);
end
This should take a couple of hours, but you only have to do it once. Note that I am saving the .mat as -v6 so it's uncompressed. This takes up a lot of disk space, but it's much faster when reading the .mat file back into memory. An alternative would be to use a Gabor-like pyramid decomposition which includes quadrature pairs and has a fast implementation, like, say, the steerable pyramid or the dual-tree complex wavelet transform, and process the images on the fly, although that would probably end up being slower in the end.
Spike sorting
The pvc-1 dataset is not spike-sorted; it's up to the analyzer to do it, as explained in the documentation. Basically, an electrode will pick up spikes (action potentials) from multiple neurons, and these must be separated. An often-used process is as follows. First, one detects possible spike events, usually by high-pass filtering the electrode recording and thresholding. Whenever the threshold is reached, a snippet of the recording before and after the threshold event is taken. That snippet is the trace of a (stereotyped) action potential plus some noise. The pvc-1 electrode recordings consist of these snippets. The problem, then, is to cluster these snippets into sources, which can be done automatically by an algorithm or manually by a human.
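pvc-1 already provides the snippets, but for concreteness, here's a minimal sketch of that detection step on a hypothetical raw trace raw sampled at Fs Hz (the 4× robust-sigma threshold and the snippet window are my arbitrary choices, not the pvc-1 pipeline):

%Illustrative spike detection (raw and Fs are assumed inputs)
[b,a] = butter(4,300/(Fs/2),'high');          %high-pass above ~300 Hz
hp = filtfilt(b,a,raw(:));                    %zero-phase filtering
sigma = median(abs(hp))/0.6745;               %robust noise estimate
thresh = 4*sigma;                             %detection threshold (arbitrary choice)
cross = find(hp(2:end) < -thresh & hp(1:end-1) >= -thresh); %negative-going crossings
win = -10:21;                                 %snippet window around each event
cross = cross(cross+win(1) >= 1 & cross+win(end) <= length(hp));
snips = hp(bsxfun(@plus,cross(:),win));       %one snippet per row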
Focusing on the former option out of laziness (although, to be fair, research shows humans are pretty bad at spike sorting), the automatic clustering algorithm of choice depends on your assumptions and needs.
*2011 update: The following proposal is stupid. For the purpose of getting something going, lump all the spikes together into multi-unit activity (MUA). Spike sorting is a tough problem. You should read Lewicki '98 and Sahani '99 several times, forwards, backwards, in the mirror, etc., before getting into the spike sorting game. If you don't know what you're doing, then use something that's proven, like Wave_clus. If you're going to run a clustering algorithm, then for the love of Christ interpolate your waveforms and align them (see chapter 5, Sahani '99).*
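In that spirit, a minimal sketch of the interpolate-and-align step (10× spline upsampling, aligning each waveform on its trough; both choices are arbitrary):

%Upsample snippets with splines and align them on the trough
%snips: nsnippets x nsamples matrix of waveforms
up = 10;                                      %upsampling factor (arbitrary)
t  = 1:size(snips,2);
ti = 1:1/up:size(snips,2);
snipsi = spline(t,snips,ti);                  %spline() interpolates each row
[~,trough] = min(snipsi,[],2);                %trough location of each waveform
shift = round(median(trough)) - trough;       %shift needed to line the troughs up
for n = 1:size(snipsi,1)
    snipsi(n,:) = circshift(snipsi(n,:),[0 shift(n)]);
end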
If you assume that the snippets are generated by the choice of one of several action potential templates, plus Gaussian noise, then the snippets are realizations of a mixture-of-Gaussians distribution, whose parameters can be estimated through the EM algorithm, already implemented in Matlab. Another possibility is to assume a mixture of t-distributions. Provisions for non-stationary data and overlapping spikes can be made. It can get very complicated very fast, so if you're curious, look at the references on Paninski's statistical analysis of neural data course page (the Feb. 25 class); Lewicki '98 is a good starting point.
Here I load a dataset, look only at spikes from electrode 5, collect snippets from all trials, and fit the snippets to a mixture of 4 Gaussians (i.e., assuming 4 sources):
load('neurodata/ad1/ad1_u006_019');
results = pepANA;
clear pepANA;
ii = 5; %electrode number
snippets = [];
times = [];
for jj = 1:20 %movie segments
    for kk = 1:results.Repeat
        snippets = [snippets,results.listOfResults{jj}.repeat{kk}.data{ii}{2}];
        times = [times,results.listOfResults{jj}.repeat{kk}.data{ii}{1}];
    end
end
options = statset('Display','iter');
obj = gmdistribution.fit(double(snippets)',4,'Options',options);
obj.AIC
obj.BIC
This should only take a couple of minutes. From what I gather, human sorters use a variety of heuristics to determine the optimal number of clusters and how they are defined. My preferred semi-heuristic at the moment (which I am sure will evolve) is to keep adding clusters until the BIC starts to rise or I get a cluster center that doesn't look like a spike waveform. 4 seems about right on this electrode.
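That heuristic is easy to partially mechanize; a sketch that fits mixtures of increasing size and keeps the BIC-minimizing one (the range of candidate cluster counts and the regularizer are arbitrary choices, and the cluster centers still need eyeballing):

%Fit mixtures with 1 to 8 components and keep the lowest-BIC model
X = double(snippets)';
bics = zeros(1,8); objs = cell(1,8);
for k = 1:8
    objs{k} = gmdistribution.fit(X,k,'Regularize',1e-6); %regularize covariances
    bics(k) = objs{k}.BIC;
end
[~,kbest] = min(bics);
obj = objs{kbest};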
Correlating input and spikes
With this headwork, the rest is cake. First, cluster the spikes, collect the spike times over repeats, and create lagged post-stimulus time histograms (PSTHs):
[idx,b] = obj.cluster(double(snippets)');
%%
%Assign each snippet's spike time to its cluster
offset = 0;
for jj = 1:20
    for kk = 1:results.Repeat
        len = length(results.listOfResults{jj}.repeat{kk}.data{ii}{1});
        for ll = 1:max(idx)
            spiketimes{jj,kk,ll} = times(find(idx(offset + (1:len))==ll)+offset);
        end
        offset = offset + len;
    end
end

%Collect spike times over repeats
str = cell(20,max(idx));
for jj = 1:20
    for kk = 1:10
        for ll = 1:max(idx)
            str{jj,ll} = [str{jj,ll},spiketimes{jj,kk,ll}];
        end
    end
end

%Create several different spike trains
%for the cells, at different lags
lags = 0.052:.016:.148;
laggedpsth = zeros(18528,max(idx),length(lags));
offset = 0;
for jj = 1:20
    mid = results.listOfResults{jj}.values{1}(1);
    sid = results.listOfResults{jj}.values{2}(1);
    nframes = length(dir(sprintf('movie_frames/movie%03d_%03d.images/*.jpeg',mid,sid)));
    for kk = 1:max(idx)
        spks = str{jj,kk};
        for ll = 1:length(lags)
            %bin spikes at the (lagged) frame times
            theh = histc(spks,(0:nframes)/30+lags(ll));
            laggedpsth((1:nframes)+offset,kk,ll) = theh(1:end-1);
        end
    end
    offset = offset + nframes;
end

refpsth = squeeze(laggedpsth(:,:,4));
shuffledpsth = [];
%Get 100 circularly shifted (shuffled) versions
for jj = 1:100
    shuffledpsth = [shuffledpsth,[refpsth(end-(jj-1)*183+1:end,:);refpsth(1:end-(jj-1)*183,:)]];
end
laggedpsth = reshape(laggedpsth,size(laggedpsth,1),max(idx)*length(lags));
Note that I am creating shuffled versions of the PSTHs, via circular shifts, which preserve the statistical structure of the real PSTHs but should remove the correlation between stimulus and response. We're going to need these to approximately z-score the Gabor pyramid maps. Spike-triggered averaging the phase-insensitive Gabor pyramids is then as simple as:
%start by adding together the images
stmean = 0;
strefs = 0;
offset = 0;
clear ws ww A B;
for jj = 1:20
    mid = results.listOfResults{jj}.values{1}(1);
    sid = results.listOfResults{jj}.values{2}(1);
    load(sprintf('movie_frames/movie%03d_%03d.images/ws.mat',mid,sid));
    nframes = size(ws,2);
    %phase-insensitive energy: norm of each even/odd quadrature pair
    ww = sqrt(ws(1:end/2,:).^2 + ws(end/2+1:end,:).^2);
    A = ww*laggedpsth((1:nframes)+offset,:);
    B = ww*shuffledpsth((1:nframes)+offset,:);
    stmean = stmean + A;
    strefs = strefs + B;
    clear ws ww A B;
    jj %progress indicator
    offset = offset + nframes;
end
strefs = reshape(strefs,size(strefs,1),max(idx),100);
ms = repmat(mean(strefs,3),1,length(lags));
ss = repmat(std(strefs,[],3),1,length(lags));
zvals = (stmean-ms)./ss;
zvals = reshape(zvals,size(zvals,1),max(idx),length(lags));
Now to present the results:
imagesc(plotPyramid(squeeze(zvals(:,1,5))));colorbar;
(cell 1, lag 5, corresponding to a 116 ms lag between stimulus and response). This gives:
This image represents the Gabor pyramid decomposed into bands. At the top, we have the horizontal orientation, at the left, the vertical orientation, and on the diagonal, the two diagonal orientations. Towards the bottom right are the fine scales, and towards the top left the coarse scales. The image has one very prominent feature: a big red blob at the finest vertical scale, towards the middle right of the movie frame, with a large (approximate) z-value. This means that the presence of high-frequency vertical orientations at that spot is correlated with an elevated response 116 milliseconds later. The receptive field is much smaller than the movie frame, meaning we can crop an area around the receptive field center (we can read off the approximate coordinates from the plot) and run our preferred stimulus-response estimation method on the cropped, downsampled stimulus. It should be noted that if we use the same data to find a region of interest (ROI) and to estimate a model based on that ROI, we're double-dipping, which is a big no-no, since the estimated models will show overly optimistic correlations between predicted and real responses. Hence the ROI should be defined using a separate dataset (for example, using one or two movies to find the ROI and the rest to estimate the input-output mapping).
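To make the cropping step concrete, a sketch using the post's own downsample.m, where the RF center and patch size are hypothetical values read off a plot like the one above:

%Crop and downsample a patch around a putative RF center (hypothetical coordinates)
cx = 220; cy = 130;                       %RF center in frame coordinates (made up)
halfw = 32;                               %patch half-width in pixels (made up)
lum = squeeze(sum(double(A),3));          %A: one 240 x 320 x 3 frame, as before
patch = lum(cy-halfw:cy+halfw-1,cx-halfw:cx+halfw-1); %64 x 64 patch
patch = downsample(downsample(patch));    %two pyramid-style downsamplings -> 16 x 16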
To create the pyramid decomposition plot:
function [im] = plotPyramid(w)
    im = zeros(120,160);
    offset = 0;
    for ii = 2:6
        hh = ceil(240/2^ii);
        ww = ceil(320/2^ii);
        nums = hh*ww;
        im0 = w(offset + (1:nums));
        im1 = w(offset + (1:nums) + nums);
        im2 = w(offset + (1:nums) + 2*nums);
        im3 = w(offset + (1:nums) + 3*nums);
        im0 = reshape(im0,hh,ww);
        im1 = reshape(im1,hh,ww);
        im2 = reshape(im2,hh,ww);
        im3 = reshape(im3,hh,ww);
        %zero orientation (horizontal)
        im(hh+(1:hh),1:ww) = im0;
        %vertical
        im(1:hh,ww+(1:ww)) = im2;
        %the two diagonal orientations, at half size
        ww2 = ceil(ww/2);
        hh2 = ceil(hh/2);
        im(hh+(1:hh2),ww+(1:ww2)) = downsample(im1);
        im(hh+hh2+(1:hh2),ww+ww2+(1:ww2)) = downsample(im3);
        offset = offset + 4*hh*ww;
    end
end
Conclusion
With a bit of work, one can find the approximate receptive field properties of cells based on presentations of very high-dimensional stimuli. With this information in hand, one can define a ROI for further analysis, enabling one to work on much lower-dimensional representations of the data. I’ll be working more on this dataset in the future and will post what I find of interest.
2011 update: the prelim analysis convinced me that if you want to work comfortably with an array, you need a lot of computing power. So we got 4x Core i7s + 12 GB RAM running on Ubuntu 64-bit, accessed through NX, to run analyses on these bad boys.
I had also not anticipated, before running the analysis, that the spike sorting problem would be, well, a problem. Following this I started thinking about LFPs, and had the realization that cross-correlating LFPs and spikes is stupid unless you are 100% sure that there are no spike remnants in the LFP. But obtaining a clean LFP requires access to the wideband signal, so we ended up adding mods to the system to acquire wideband signals (10 kHz) on all 96 channels. This was a challenge to get going, since the data files are huge (1 GB per 5 minutes of recording), and it required writing some automation scripts in Python. In hindsight, though, it was a great decision; having the wideband data makes spike sorting so much easier.
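For reference, the naive way to pull an LFP out of a wideband channel is low-pass filtering plus decimation; a minimal sketch (the 250 Hz cutoff and 1 kHz output rate are typical choices, not necessarily what we use), and note that this does nothing about the spike-remnant problem above:

%Naive LFP extraction from a 10 kHz wideband trace (spike remnants NOT removed)
Fs = 10000;
[b,a] = butter(4,250/(Fs/2),'low');       %low-pass at 250 Hz
lfp = filtfilt(b,a,wideband(:));          %zero-phase filtering
lfp = lfp(1:Fs/1000:end);                 %decimate to 1 kHz after filtering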
We ended up modding wave_clus so that the detection step is optimal in some sense (paper to be written about this) and adding elements to the GUI to streamline analysis. So basically, once we are done for a day, I type sudo python processDay -d dayName in a console, and the wideband files and the stimulus files are fetched through FTP from the rig computers, renamed, and processed into a Matlab-friendly format. I then view the day's experiments (including recording length, parameters, notes, etc.) using a custom Python GUI (PyQt). Then in Matlab I select a file to sort in a GUI, press a button, and it sorts all the channels. When it's done I get an email and then review the clusters. Finally I sort all the other experiments of the day against these manually reviewed clusters, and once that's done I get another email.
Friendly advice: if you're getting an array, start thinking several months beforehand about how you will process the data; once you start recording you will not have time to dick around with this stuff, as you'll be struggling to get as much data as you can, run prelim analyses, and so forth.