Non-rigid deformation for calcium imaging frame alignment


It’s been a while since I last updated the blog – I graduated from McGill, went on a trip to Indonesia where I did a lot of diving (above), came back to Montreal for 16 hours to gather my coffee machine and about three shirts – all I need to survive, really – then moved to sunny California to do a postdoc in Dr. Dario Ringach’s lab at UCLA. Living the dream.

We’ve been doing calcium imaging – GCaMP6 – in mice via a custom-built microscope – you can read more about the microscope over at the Scanbox blog.

If you’re used to working with single-electrode recordings or even MUA, calcium imaging will blow your freaking mind – you can get more than a hundred shiny neurons in a given field of view, so you can realistically ask and answer questions about how a local circuit works, knowing that you’re not massively undersampling the population.

Inter-frame motion is an issue in awake behaving subjects. There are slow drifts and faster motions due to, e.g., grooming, which you need to correct for. Dario has a post on correcting for translation motion, which is dominant in slow drifts. During grooming, however, the motion can be fast enough that the field of view moves significantly during frame acquisition. Because the field of view is scanned from top to bottom – not captured all at once – this causes non-rigid deformations.

Specifically, vertical motion causes expansion and contraction of frames along the vertical axis, while horizontal motion causes shear. Of course, rigid motion dominates, so rigid motion compensation will be good enough for many analyses. However, if you’re interested in looking at correlations between neurons, for example, you will need to get rid of as much motion as you can. Residual motion will cause some parts of neurons to drop in or out of their ROIs, and those decreases or increases in fluorescence will tend to be correlated across ROIs, which is obviously a nightmare for correlational analysis.

There are a few ways of correcting for non-rigid motion; I had success with the method of Greenberg & Kerr (2009), which can correct for non-rigid motion with subpixel accuracy. It works on rigid-motion-corrected frames. Here’s a before and after during grooming:

[Movies – left to right: no compensation, rigid compensation only, non-rigid compensation]

Their method is based on the Lucas-Kanade method of optic flow estimation. You have a reference frame – T, for template – and another image – I – that you want to align with it. You do this by finding a displacement field D such that

I(x+D_x,y+D_y) \approx T(x,y)

By itself, this is a highly underconstrained problem. You use the usual trick of constraining the displacement fields to be a linear combination of basis functions, i.e.:

D_x = \sum_i w_x^i D_x^i, \quad D_y = \sum_i w_y^i D_y^i

In our case, the D_x^i can be chosen to look like tent-shaped basis functions – linear B-splines – in the vertical direction and constant in the horizontal direction. This reflects the fact that the sample is approximately still while a small group of scan lines is being acquired.
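To make the basis concrete, here’s a small Python/NumPy sketch of those tent-shaped functions along the vertical axis (the post’s code is MATLAB, and the function name here is mine): each scan line gets weights from at most two adjacent tents, and the weights sum to one.

```python
import numpy as np

def tent_basis(n_rows, n_basis):
    """Linear B-spline (tent) basis over the vertical axis.

    Returns an (n_rows, n_basis + 1) matrix B whose row r holds the
    weight of each knot at scan line r; the per-line displacement is
    then B @ w for a weight vector w."""
    knots = np.linspace(0, n_rows - 1, n_basis + 1)
    spacing = knots[1] - knots[0]
    rows = np.arange(n_rows)
    B = np.zeros((n_rows, n_basis + 1))
    for i, k in enumerate(knots):
        # tent centered on knot k, falling to zero at the adjacent knots
        B[:, i] = np.clip(1.0 - np.abs(rows - k) / spacing, 0.0, None)
    return B

B = tent_basis(512, 16)
```

Because the rows of B sum to one, a constant weight vector reproduces a rigid shift – non-rigid deformation only enters when the weights vary from knot to knot.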

Then you solve via least-squares – ML under an assumption of Gaussian noise – for the weights of the basis functions. The effect of the deformation field is first approximated via a first-order Taylor expansion, and then you run vanilla Gauss-Newton iterations until convergence.
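Written out – using the same symbols as above, and hedging on details that differ in Greenberg & Kerr’s paper – the first-order expansion around the current displacement estimate is:

```latex
I(x + D_x, y + D_y) \;\approx\; I(x, y)
    + \frac{\partial I}{\partial x}\, D_x
    + \frac{\partial I}{\partial y}\, D_y
```

Substituting D_x = \sum_i w_x^i D_x^i makes the residual T - I linear in the weights, so each Gauss-Newton iteration solves a small regularized linear system H \Delta w = g for the weight update – essentially what the gx/Hx computations in the listing below do, with the x and y systems solved independently.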

It turns out that with linear B-splines in the vertical direction, a lot of the computations become quite simple, so it’s reasonably fast – aligning a frame takes about a second, and most of that time is spent doing interpolation. You can speed things up a bit by using block-wise cross-correlation to get a guesstimate of the displacements.
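The doBlockAlignment helper in the listing below does this with a per-strip FFT cross-correlation. Here’s the same idea sketched in Python/NumPy – the function name, the max_shift parameter, and the sign convention (shifting the image by +s rows comes out as dy = -s) are mine, not the post’s:

```python
import numpy as np

def block_shift(template, image, n_blocks=16, max_shift=10):
    """Integer displacement of each horizontal strip of `image`
    relative to `template`, from an FFT cross-correlation peak."""
    h, w = template.shape
    bs = h // n_blocks
    dx = np.zeros(n_blocks)
    dy = np.zeros(n_blocks)
    for i in range(n_blocks):
        t = template[i * bs:(i + 1) * bs]
        im = image[i * bs:(i + 1) * bs]
        # remove per-column means so the DC component doesn't dominate
        t = t - t.mean(axis=0)
        im = im - im.mean(axis=0)
        xc = np.fft.fftshift(
            np.real(np.fft.ifft2(np.fft.fft2(t) * np.conj(np.fft.fft2(im)))))
        cy, cx = bs // 2, w // 2
        # only trust small shifts: search a window around zero displacement
        win = xc[cy - max_shift:cy + max_shift + 1,
                 cx - max_shift:cx + max_shift + 1]
        py, px = np.unravel_index(np.argmax(win), win.shape)
        dy[i], dx[i] = py - max_shift, px - max_shift
    return dx, dy
```

The per-strip integer shifts then seed the spline weights (the nargin < 3 branch of the listing), which the Gauss-Newton iterations refine to subpixel accuracy.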

A final question is what to use as a template. I use a two-pass process. First, I align every frame to the mean fluorescence image. Then, I realign each frame against a local temporal average of the fluorescence. During grooming, the overall activity tends to increase, and some neurons that are usually not very active light up. These neurons are hard to align with the mean fluorescence image because they barely appear in it; aligning them against where they were a few frames ago is easier.
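At pseudocode level, the two passes look like this in Python – align_frame stands in for the Lucas-Kanade routine, and the window half-width is an assumption, not a value from the post:

```python
import numpy as np

def two_pass_align(frames, align_frame, window=15):
    """Pass 1: align every frame to the mean fluorescence image.
    Pass 2: re-align each frame to a local temporal average, which
    better captures neurons that only light up transiently.

    `align_frame(template, frame)` returns the aligned frame."""
    mean_img = frames.mean(axis=0)
    pass1 = np.stack([align_frame(mean_img, f) for f in frames])

    aligned = np.empty_like(pass1)
    n = len(pass1)
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window + 1)
        local_template = pass1[lo:hi].mean(axis=0)
        aligned[t] = align_frame(local_template, pass1[t])
    return aligned
```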

Here’s some code to get you started:

function [Id,dpx,dpy] = doLucasKanade(T,I,dpx,dpy)
    %Non-rigid alignment of image I to template T via Lucas-Kanade.
    %Returns the displaced image Id and the B-spline weights dpx, dpy
    %of the horizontal and vertical displacement fields.
    warning('off','fastBSpline:nomex');
    Nbasis = 16;
    niter = 25;
    damping = 1;
    deltacorr = .0005;

    lambda = .0001*median(T(:))^2;
    %Use a non-iterative algo to get the initial displacement

    if nargin < 3
        [dpx,dpy] = doBlockAlignment(I,T,Nbasis);
        dpx = [dpx(1);(dpx(1:end-1)+dpx(2:end))/2;dpx(end)];
        dpy = [dpy(1);(dpy(1:end-1)+dpy(2:end))/2;dpy(end)];
    end
    %dpx = zeros(Nbasis+1,1);
    %dpy = zeros(Nbasis+1,1);

    %linear b-splines
    knots = linspace(1,size(T,1),Nbasis+1);
    knots = [knots(1)-(knots(2)-knots(1)),knots,knots(end)+(knots(end)-knots(end-1))];
    spl = fastBSpline(knots,knots(1:end-2));
    B = spl.getBasis((1:size(T,1))');

    %Find optimal image warp via Lucas Kanade

    Tnorm = T(:)-mean(T(:));
    Tnorm = Tnorm/sqrt(sum(Tnorm.^2));
    B = full(B);
    c0 = mycorr(I(:),Tnorm(:));

    %theI = gpuArray(eye(Nbasis+1)*lambda);
    theI = (eye(Nbasis+1)*lambda);

    Bi = B(:,1:end-1).*B(:,2:end);
    allBs = [B.^2,Bi];

    [xi,yi] = meshgrid(1:size(T,2),1:size(T,1));

    bl = quantile(I(:),.01);

    for ii = 1:niter

        %Displaced template
        Dx = repmat((B*dpx),1,size(T,2));
        Dy = repmat((B*dpy),1,size(T,2));

        Id = interp2(I,xi+Dx,yi+Dy,'linear');

        Id(isnan(Id)) = bl;

        c = mycorr(Id(:),Tnorm(:));

        if c - c0 < deltacorr && ii > 1
            break;
        end

        c0 = c;

        %gradient of the displaced image
        dTx = (Id(:,[1,3:end,end])-Id(:,[1,1:end-2,end]))/2;
        dTy = (Id([1,3:end,end],:)-Id([1,1:end-2,end],:))/2;

        del = T(:) - Id(:);

        %special trick for g (easy)
        gx = B'*sum(reshape(del,size(dTx)).*dTx,2);
        gy = B'*sum(reshape(del,size(dTx)).*dTy,2);

        %special trick for H - harder
        Hx = constructH(allBs'*sum(dTx.^2,2),size(B,2))+theI;
        Hy = constructH(allBs'*sum(dTy.^2,2),size(B,2))+theI;

        %{
        %Compare with fast method
        dTx_s = reshape(bsxfun(@times,reshape(B,size(B,1),1,size(B,2)),dTx),[numel(dTx),size(B,2)]);
        dTy_s = reshape(bsxfun(@times,reshape(B,size(B,1),1,size(B,2)),dTy),[numel(dTx),size(B,2)]);

        Hx = (doMult(dTx_s) + theI);
        Hy = (doMult(dTy_s) + theI);
        %}

        dpx_ = Hx\gx;
        dpy_ = Hy\gy;

        dpx = dpx + damping*dpx_;
        dpy = dpy + damping*dpy_;
    end
end

function thec = mycorr(A,B)
    A = A(:) - mean(A(:));
    A = A / sqrt(sum(A.^2));
    thec = A'*B;
end

function H2 = constructH(Hd,ns)
    H2d1 = Hd(1:ns)';
    H2d2 = [Hd(ns+1:end);0]';
    H2d3 = [0;Hd(ns+1:end)]';

    H2 = spdiags([H2d2;H2d1;H2d3]',-1:1,ns,ns);
end

function [dpx,dpy] = doBlockAlignment(T,I,nblocks)
    dpx = zeros(nblocks,1);
    dpy = zeros(nblocks,1);

    dr = 10;
    blocksize = size(T,1)/nblocks;

    [xi,yi] = meshgrid(1:size(T,2),1:blocksize);
    thecen = [size(T,2)/2+1,floor(blocksize/2+1)];
    mask = (xi-thecen(1)).^2+(yi-thecen(2)).^2< dr^2;

    for ii = 1:nblocks
        dy = (ii-1)*size(T,1)/nblocks;
        rg = (1:size(T,1)/nblocks) + dy;
        T_ = T(rg,:);
        I_ = I(rg,:);
        T_ = bsxfun(@minus,T_,mean(T_,1));
        I_ = bsxfun(@minus,I_,mean(I_,1));
        dx = fftshift(ifft2(fft2(T_).*conj(fft2(I_))));
        theM = dx.*mask;

        [yy,xx] = find(theM == max(theM(:)));

        dpx(ii) = (xx-thecen(1));
        dpy(ii) = (yy-thecen(2));
    end
end

PhD thesis – Parametric Models of Visual Cortex at Multiple Scales

*Updated Sun April 6th*

Well, it’s done! I successfully defended my thesis on April 3rd. I now have what I’ve been longing for all these years – an obnoxious title that I can remind people of whenever I’m about to lose an argument. Unfortunately, this only works when arguing with non-PhDs.

Entitled Parametric Models of Visual Cortex at Multiple Scales, the thesis is available here. IMHO, it’s pretty damn good. Here are the slides for my PhD defense – with narration. I’m especially thrilled about the future directions – I think we’ve got some pretty sick papers in the pipeline.

Keeping up with the scientific literature

To do good research, you have to be well-informed of the latest scientific developments both in your narrow field of study and in science at large. I recommend the following workflow to make this as painless as possible:

  • Use feedly to keep up-to-date with blogs, journals
  • Use PubChase to get personalized paper recommendations
  • Use Zotero to organize papers you read
  • Use PaperShip to read and comment on papers on your iPad

Here’s a more detailed exposition, along with further resources and alternatives, to help you keep up with the scientific literature.

RSS feeds for journals/searches/blog updates


You can use RSS feeds to receive updates from your favorite journals; in addition, you can use saved searches in PubMed and in other engines to receive updates when articles matching certain keywords are published. RSS is also useful to subscribe to scientific blogs like this one. I highly recommend feedly as an RSS reader.


Personalized recommendations of new papers

RSS is fine for keeping up with a handful of journals, but the lack of filtering makes it an impractical solution to keep up with all the latest developments. Thankfully, newly minted recommendation engines do a very good job at this.

There are two recommendation engines worthy of note: Google Scholar, which is very well known, and PubChase, a relative newcomer that I highly recommend over Scholar if you work in the biological sciences. Scholar is great, but its big flaw is that it uses your published papers to recommend new ones; perhaps what you’d like to read now and what you’ve published so far are different things.

PubChase uses whatever library of papers you give it to create recommendations, either from your Mendeley account or from a BibTeX file, which can be exported from Zotero, EndNote, Papers, etc. Its recommendations, which are updated daily, are spot-on; it also sends you a weekly email with a big list of recommended papers to remind you of what you should be reading.

See here for a more thorough review [xcorr].

Creating a library of papers via a citation manager

RSS and recommendations will tell you what to read, but how do you organize your reading stack? You could put a bunch of PDFs in a folder, or print every paper you want to read immediately, but a much more manageable solution, IMHO, is to store references in a citation manager.

Zotero is my preferred citation manager; it lives inside Firefox, and it automatically detects references to papers when you browse a relevant website. You can then save the reference to your library, and the referenced PDF will be saved both locally and in the cloud; a $20-a-year plan gives you 3 GB of online storage, while the free tier gives you 300 MB. Cloud-based storage means you will have access to your library whether you’re on your work computer, your home computer, your tablet, etc., irrespective of paywalls.

You can then flag papers to read, papers you’ve read, etc. via folders or tags. You can add comments to each citation, and with a plugin, it can extract comments and annotations that you’ve written in the PDF. Very slick.

Mendeley also offers cloud-based storage, but lives in a standalone app; Zotero-like web scraping is offered via a bookmarklet. Personally, I prefer Zotero, but that may simply be because I’ve been using it for a long time.

By using a citation manager to keep track of your papers, you have the added advantage that when you write a paper, the relevant citations will already be in your library.

Of course, there are other citation managers of note. I thoroughly detest EndNote. I do hear good things about Papers, which unfortunately locks you into the Mac ecosystem; I thoroughly detest Macs, if only for the reason that you end up paying twice as much for the same hardware and you can hardly customize it.

Consuming papers on your tablet

You’ve found papers and added them to your library; you could print them out, but how about reading them on your tablet? A tablet offers a much more tactile viewing experience than a computer screen, and it’s highly portable, so you can read on the plane, train, or while eating delicious, delicious pad thai.

PaperShip [see video demo here] is an iPad app that lets you read papers from either your Mendeley or your Zotero library. Unlike ZotPad or Mendeley for iPad, it’s not a mobile version of your citation manager; rather, it’s focused on reading papers seamlessly, which is a great thing. It downloads PDFs in the background over wifi, so you have offline access to your papers. You can read the PDFs directly in-app, and for a $5 upgrade it will let you annotate your PDFs with a very user-friendly interface. Annotations are automatically synced back to your Mendeley or Zotero library. I’ve been using it for close to a month and I can’t say enough good things about it.

An equally usable – if more expensive – alternative is Papers; again, you’ll need a Mac to actually fill your library.

Unfortunately, I have yet to find a completely satisfactory solution for Android tablets. ZotFile can send PDFs to your Dropbox account, but you will need wifi to read the PDF. My own efforts on this problem have been put to rest for lack of time. Perhaps the people at Shazino, the makers of PaperShip, can get a crowd-funded effort to bring their app to Android tablets, now that ZappyLab has successfully funded protocols.io.

Conclusion

There you have it: a complete solution for keeping up with the scientific literature, from finding papers to organizing them and reading them on your tablet. Obviously, the most important step is to actually read the damn things! Happy reading.