New paper out in PLOS Computational Biology by Mijung Park and Jonathan Pillow on a spiffy new linear receptive field (RF) estimation method. The proposed method can be seen as an extension of state-of-the-art RF estimation methods that combine smoothness and sparseness assumptions.
The idea is to use what they call a localized prior. This is a multivariate normal prior which, in the relevant basis, embodies the assumption that most of the non-zero weights are physically close to each other. In practice, the assumption is that the envelope of the variance of the weights is Gaussian in the relevant basis (Gaussian in the sense of a Gaussian curve, not of the normal distribution). Take a look at the example difference-of-Gaussians receptive field below. The receptive field itself is not well described by a Gaussian. On the other hand, the envelope of the receptive field (the absolute value of the RF weights) is roughly Gaussian. So if we assume that the envelope of the RF is Gaussian, we'll get better estimates than the maximum likelihood estimator, because we're not wasting degrees of freedom on the edges of the RF.
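To make the envelope idea concrete, here's a minimal numpy sketch (my own illustrative construction, not the paper's code; the 1-d setup and the envelope hyperparameters are made up) of a localized prior and the shrinkage it induces relative to maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-d setup: a localized prior is a zero-mean normal prior whose
# per-coefficient variance follows a Gaussian envelope, so weights far from
# the envelope's center are shrunk toward zero.
d = 40
coords = np.arange(d)
center, width, gain = 20.0, 5.0, 1.0   # assumed envelope hyperparameters
prior_var = gain * np.exp(-0.5 * ((coords - center) / width) ** 2)

# Simulate responses y = X k + noise from a localized "true" RF.
n, sigma2 = 200, 4.0
X = rng.standard_normal((n, d))
k_true = np.exp(-0.5 * ((coords - center) / 4.0) ** 2) * np.sin(coords / 2.0)
y = X @ k_true + np.sqrt(sigma2) * rng.standard_normal(n)

# Maximum-likelihood estimate vs the MAP estimate under the localized prior;
# the prior suppresses the noisy coefficients at the edges of the RF.
k_ml = np.linalg.solve(X.T @ X, X.T @ y)
k_map = np.linalg.solve(X.T @ X + sigma2 * np.diag(1.0 / (prior_var + 1e-12)),
                        X.T @ y)
```

With this amount of data the MAP estimate recovers the true RF with noticeably lower error than the ML estimate, essentially because the edge coefficients are pinned near zero.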
It’s possible to combine the assumption that the RF is localized in space (a localized prior in the pixel basis) with the assumption that it is localized in frequency (a localized prior in the Fourier basis). Examples of V1 RFs estimated from one minute of data with the maximum likelihood estimator and with the proposed method are shown below. As you can see, the combination of localization in space and in frequency is very powerful.
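One way the two localizations might be composed into a single prior covariance (again my own sketch, loosely following the paper's idea; the envelope parameters are invented) is to put a diagonal Gaussian envelope on the variances in the Fourier basis and sandwich it between spatial envelopes:

```python
import numpy as np

d = 64
x = np.arange(d)
# Assumed, illustrative envelopes: localized in space, band-pass in frequency.
space_env = np.exp(-0.5 * ((x - 32) / 6.0) ** 2)
freqs = np.fft.fftfreq(d)
freq_env = np.exp(-0.5 * ((np.abs(freqs) - 0.15) / 0.05) ** 2)

# Unitary DFT matrix; the frequency envelope is diagonal in this basis.
F = np.fft.fft(np.eye(d)) / np.sqrt(d)
C_freq = (F.conj().T @ np.diag(freq_env) @ F).real

# Sandwich with the spatial envelope: a weight vector must be plausible under
# BOTH localizations, so samples come out oscillatory and spatially confined.
C = np.diag(space_env) @ C_freq @ np.diag(space_env)

# Draw a sample RF from N(0, C); it looks roughly Gabor-like.
L = np.linalg.cholesky(C + 1e-8 * np.eye(d))
sample = L @ np.random.default_rng(1).standard_normal(d)
```

The resulting covariance gives large prior variance only to weights near the spatial center whose frequency content sits in the assumed band, which is exactly the space-frequency double localization described above.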
It’s worth noting that a Gabor can be described by a Gaussian spatial envelope and a Gaussian frequency envelope. So one way to think about the local-local prior is that it assumes the RF has spatial/frequency content similar to that of a Gabor. Importantly, however, the assumption is soft; the prior can express RFs that are only loosely localized in space/frequency. It does seem like such an assumption might exaggerate the extent and strength of secondary inhibitory subfields; see for example the third and last RFs in the second row. Whether such bias matters depends on the question of interest, of course.
Now, the prior is described in terms of hyperparameters corresponding to the shape of the Gaussian envelope in the pixel basis and the one in the Fourier basis, as well as their heights. For a 2d receptive field this comes out to 9 hyperparameters or so (a center and spread in x and y for each of the two envelopes, plus a gain). That’s too many hyperparameters to tune through cross-validation, so the authors instead rely on marginal likelihood (i.e., evidence) maximization. The most relevant precedent here is Sahani and Linden (2003), which used marginal likelihood methods (specifically, automatic relevance determination, or ARD) to estimate sparse RFs either in the pixel basis or in a basis of Gaussian blobs whose spatial extent is itself determined by marginal likelihood.
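In the linear-Gaussian model the evidence is available in closed form, so fitting hyperparameters is just numerical optimization of one function. Here's a stripped-down sketch (my own, with a 1-d envelope parameterized by center/width/gain and the noise variance fixed for brevity; additive constants are dropped):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_evidence(params, X, y):
    # Hyperparameters: envelope center, log-width, log-gain.
    mu, log_tau, log_rho = params
    tau, rho = np.exp(log_tau), np.exp(log_rho)
    coords = np.arange(X.shape[1])
    prior_var = rho * np.exp(-0.5 * ((coords - mu) / tau) ** 2)
    sigma2 = 1.0  # noise variance, assumed known here for brevity
    # Marginal covariance of y: integrate the weights out analytically.
    S = sigma2 * np.eye(len(y)) + (X * prior_var) @ X.T
    sign, logdet = np.linalg.slogdet(S)
    # Negative log evidence up to additive constants.
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

# Usage: fit hyperparameters by maximizing the evidence; no cross-validation.
rng = np.random.default_rng(2)
n, d = 150, 30
X = rng.standard_normal((n, d))
k_true = np.exp(-0.5 * ((np.arange(d) - 15) / 3.0) ** 2)
y = X @ k_true + rng.standard_normal(n)
res = minimize(neg_log_evidence, x0=[10.0, 1.0, 0.0], args=(X, y),
               method="Nelder-Mead")
```

The evidence rewards hyperparameters that place prior variance where the data actually demand signal, which is what makes nine-ish hyperparameters tractable where cross-validation would not be.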
Although the authors use a linear-Gaussian noise model, which makes evidence optimization more tractable, it should be noted that evidence optimization is still feasible within a GLM. One need only use a second-order expansion of the log posterior around its mode (the Laplace approximation), and the resulting algorithm is only mildly more expensive than in the linear-Gaussian case. I have a poster at this year's SFN with Theo Zanos which uses marginal likelihood with GLMs to track functional connectivity through time.
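To show what that looks like in practice, here's a sketch of Laplace-approximate evidence for a Poisson GLM with an exponential nonlinearity (my own illustrative implementation, not the poster's or the paper's code; constants like -log y! are dropped):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_evidence(X, y, C_inv):
    # Poisson GLM, exponential nonlinearity, Gaussian prior N(0, C).
    d = X.shape[1]

    def neg_log_post(k):
        eta = X @ k
        return np.sum(np.exp(eta)) - y @ eta + 0.5 * k @ C_inv @ k

    def grad(k):
        return X.T @ (np.exp(X @ k) - y) + C_inv @ k

    # Find the posterior mode (MAP estimate of the weights).
    res = minimize(neg_log_post, np.zeros(d), jac=grad, method="L-BFGS-B")
    k_map = res.x

    # Hessian of the negative log posterior at the mode.
    W = np.exp(X @ k_map)
    H = X.T @ (W[:, None] * X) + C_inv

    # Laplace: log p(y) ~ log p(y, k_map) + (d/2) log 2pi - 0.5 log|H|.
    # The Gaussian-prior normalizer contributes 0.5 log|C_inv| - (d/2) log 2pi,
    # so the 2pi terms cancel and (up to dropped constants) we get:
    _, logdet_H = np.linalg.slogdet(H)
    _, logdet_Cinv = np.linalg.slogdet(C_inv)
    return -res.fun + 0.5 * (logdet_Cinv - logdet_H)

# Usage: the evidence prefers a sensible prior over one that crushes the RF.
rng = np.random.default_rng(3)
n, d = 200, 5
X = 0.5 * rng.standard_normal((n, d))
k_true = np.array([0.8, -0.5, 0.3, 0.0, 0.2])
y = rng.poisson(np.exp(X @ k_true)).astype(float)
ev_loose = laplace_log_evidence(X, y, np.eye(d))        # prior variance 1
ev_tight = laplace_log_evidence(X, y, 1e4 * np.eye(d))  # forces k near 0
```

The only extra cost over the linear-Gaussian case is the inner optimization for the mode, which is why the Laplace route stays cheap.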
The authors promise that code will be made available on the Pillow lab’s web page, though it hadn’t been posted yet last time I checked.
Overall, I think the method seems very promising. It might seem pointless to try and find ever more efficient RF estimation methods (you can almost hear a certain pot-bellied NYU prof snidely retort “just get more data”), but there are definitely scenarios where efficiency is the essence of the problem. Tracking receptive field nonstationarities comes to mind, in particular.