### Spike-triggered mixture of gaussians

Nice new paper out from Matthias Bethge’s group on neural system identification. The proposed method can be seen as an extension of the spike-triggered average/spike-triggered covariance (STA/STC) approach.

In STA/STC, you’re characterizing the spike-triggered stimulus ensemble and comparing it to the baseline stimulus ensemble. Assume for a moment that the baseline ensemble is iid Gaussian. If the spike-triggered ensemble (STE) is also iid Gaussian with the same covariance, the only useful information you can extract from the STE is its mean; that leads to the STA. If the STE is Gaussian but with a different covariance, then its covariance is also informative; that leads to the STC.
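The STA/STC computation itself is just a mean and a covariance over the spike-triggered stimuli. A minimal NumPy sketch (the toy stimulus, hidden filter, and thresholding spike rule here are my own illustrative assumptions, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: iid Gaussian stimuli; spikes fire when the projection onto a
# hidden linear filter (plus noise) exceeds a threshold.
n_samples, n_dim = 5000, 10
X = rng.standard_normal((n_samples, n_dim))    # baseline stimulus ensemble
w = np.zeros(n_dim)
w[0] = 1.0                                     # hidden linear filter
spikes = (X @ w + 0.5 * rng.standard_normal(n_samples)) > 1.0

ste = X[spikes]                                # spike-triggered ensemble

sta = ste.mean(axis=0)                         # spike-triggered average
stc = np.cov(ste, rowvar=False)                # spike-triggered covariance

# The STA recovers the direction of the hidden filter: its largest entry
# sits on the dimension the cell is actually tuned to.
print(np.argmax(np.abs(sta)))                  # → 0
```

Because the spike rule only looks at one stimulus dimension, the STA concentrates on that dimension while the other entries hover around zero.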

However, the STE can have all sorts of weird-looking shapes, which motivates characterizing it with a more flexible distribution than a Gaussian. The next logical step is a mixture of Gaussians, which is far more flexible than a single Gaussian. That is the gist of the paper: a mixture of Gaussians on the STE, and a mixture of Gaussians on the baseline ensemble.
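As an illustration of this modeling step, here is a bare-bones EM fit of a Gaussian mixture in NumPy. This is a generic textbook sketch, not the paper's actual fitting procedure; the function names and the ridge term on the covariances are my own choices:

```python
import numpy as np

def gauss_logpdf(X, mean, cov):
    """Row-wise log-density of a multivariate Gaussian."""
    d = X.shape[1]
    diff = X - mean
    sol = np.linalg.solve(cov, diff.T).T
    return -0.5 * (np.einsum('ij,ij->i', diff, sol)
                   + np.linalg.slogdet(cov)[1] + d * np.log(2 * np.pi))

def fit_mog(X, k=2, n_iter=100, seed=0):
    """Fit a k-component Gaussian mixture with plain EM (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, size=k, replace=False)]   # init means at data points
    covs = np.array([np.cov(X, rowvar=False) + 1e-6 * np.eye(d)
                     for _ in range(k)])
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        log_dens = np.stack([np.log(weights[j]) + gauss_logpdf(X, means[j], covs[j])
                             for j in range(k)], axis=1)
        log_dens -= log_dens.max(axis=1, keepdims=True)
        resp = np.exp(log_dens)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted weights, means, covariances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = ((resp[:, j, None] * diff).T @ diff / nk[j]
                       + 1e-6 * np.eye(d))
    return weights, means, covs
```

In the paper's setting you would run something like this twice, once on the STE and once on the baseline ensemble, and the number of components controls how much non-Gaussian structure each fit can capture.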

In the STA/STC approaches, you can turn your generative model for the STE into a discriminative model by dividing out the STE and baseline distributions, i.e. $p(y=1|x) \propto p(x|y=1)/p(x)$. Even when the technical assumptions behind the STA or STC are not fulfilled, you can compute the (biased) STA/STC and use it to initialize your discriminative model (a GLM or GQM, depending on whether you start from the STA or the STC). That’s exactly what they do here for their STMOG, so it should work even if the technical assumptions with respect to the distributions don’t pan out.
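To make the dividing-out step concrete, here is a sketch of the single-Gaussian special case: fit one Gaussian to the STE and one to the baseline ensemble, and take the log-density ratio. That ratio is a quadratic function of $x$, i.e. exactly a GQM-shaped regression function. The toy data and spike rule are again my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20000, 5
X = rng.standard_normal((n, d))                # baseline ensemble, iid Gaussian
w = np.array([1.5, 0.0, 0.0, 0.0, 0.0])        # hypothetical hidden filter
y = (X @ w + rng.standard_normal(n)) > 1.5     # hypothetical spike rule
ste = X[y]

# Single-Gaussian fits to each ensemble (the STA/STC special case of the
# mixture model: one component each).
mu1, C1 = ste.mean(axis=0), np.cov(ste, rowvar=False)
mu0, C0 = X.mean(axis=0), np.cov(X, rowvar=False)

def gauss_logpdf(Z, mu, C):
    """Row-wise log-density of a multivariate Gaussian."""
    diff = Z - mu
    sol = np.linalg.solve(C, diff.T).T
    return -0.5 * (np.einsum('ij,ij->i', diff, sol)
                   + np.linalg.slogdet(C)[1] + Z.shape[1] * np.log(2 * np.pi))

# log p(y=1|x) = log p(x|y=1) - log p(x) + const: quadratic in x, so the
# implied regression function has the GQM form.
x_test = rng.standard_normal((100, d))
log_ratio = gauss_logpdf(x_test, mu1, C1) - gauss_logpdf(x_test, mu0, C0)
```

With mixtures instead of single Gaussians, the same ratio becomes a log of a ratio of mixture densities, which is where the "what does the regression function look like" question below comes from.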

Interesting approach. I’m not sure I have a good intuition for what the discriminative model’s regression function looks like, though; it remains to be seen whether the model is amenable to the kind of intuitive visualization available for GLMs/GQMs.