Generalized linear models (GLMs) are very useful for modeling neural responses to dynamic stimuli. The Poisson-exponential GLM is the basis of many recent descriptions of responses in the retina, LGN and visual cortex. It accounts for some aspects of neuronal data not well captured by earlier methods like reverse correlation; in particular, the facts that neuronal responses are necessarily positive and that their variance scales with their mean.
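To make this concrete, here is a minimal sketch of a Poisson-exponential GLM in simulation (all names and parameter values are made up for illustration): a stimulus is filtered linearly, passed through an exponential nonlinearity, and spikes are drawn from a Poisson distribution with the resulting rate.

```python
import numpy as np

# Sketch of a Poisson-exponential GLM (parameters are illustrative).
rng = np.random.default_rng(0)
dt = 0.001                      # bin width in seconds
T = 20000                       # number of time bins
stim = rng.standard_normal(T)   # white-noise stimulus

# Linear temporal filter (a simple decaying kernel; in practice, fit from data)
k = np.exp(-np.arange(20) / 5.0)
drive = np.convolve(stim, k, mode="full")[:T]   # causally filtered stimulus

rate = np.exp(1.0 + 0.5 * drive)    # exponential nonlinearity keeps the rate positive
spikes = rng.poisson(rate * dt)     # Poisson counts: variance equals the mean
print(spikes.mean(), spikes.var())
```

The exponential guarantees a positive rate, and the Poisson spike generator gives the mean-variance scaling mentioned above.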
Of course, describing neurons with Poisson GLMs abstracts away most biophysical facts about neuronal integration. Somewhere between the Poisson GLM and highly detailed compartmental models of neurons lies the integrate-and-fire (IF) neuron model. IF models assume that the neuron integrates its input until it reaches a threshold, after which it is reset. This can account for certain aspects of neuronal data that Poisson GLMs miss. Among them is the fact that neurons frequently show nonlinear phase shifts as their drive is increased: typically, with a stronger stimulus the neuronal response will not only be stronger but also of shorter latency. IF models explain this by the fact that neurons take less time to reach threshold with stronger inputs.
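The latency effect is easy to see in a toy leaky integrate-and-fire simulation (the function name and all parameter values below are made up): a stronger constant input drives the membrane to threshold sooner.

```python
import numpy as np

# Sketch of a leaky integrate-and-fire neuron (illustrative parameters):
# stronger drive reaches threshold faster, i.e. shorter response latency.
def time_to_first_spike(I, tau=20.0, v_thresh=1.0, dt=0.1, t_max=500.0):
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v + I) / tau     # leaky integration of the input
        t += dt
        if v >= v_thresh:
            return t                 # threshold crossing: spike and reset
    return np.inf                    # input too weak to ever reach threshold

lat_weak = time_to_first_spike(1.5)
lat_strong = time_to_first_spike(3.0)
print(lat_weak, lat_strong)          # stronger drive -> shorter latency
```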
IF models are a pain to fit, however. It would be really nice if you could get IF-like effects within a much-easier-to-fit GLM. Paninski (2004) has some hints that this is possible. The idea is to leverage the fact that it is possible to integrate spike-history effects into a GLM. The traditional way of doing this is to add regressors to the design matrix containing the time-lagged spike train. This lets one model both the absolute and relative refractory periods of neurons, among other things. The result is similar to an autoregressive model with exogenous inputs.
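A minimal version of that design-matrix construction might look like this (the helper name is hypothetical): each extra column holds the spike train shifted by one more bin, so the fitted weights on those columns capture refractory effects.

```python
import numpy as np

# Sketch: append time-lagged copies of the spike train to a GLM design matrix
# (hypothetical helper; a real pipeline would use a basis over lags instead).
def add_spike_history(X_stim, spikes, n_lags):
    T = len(spikes)
    H = np.zeros((T, n_lags))
    for lag in range(1, n_lags + 1):
        H[lag:, lag - 1] = spikes[:-lag]   # spike train shifted by `lag` bins
    return np.hstack([X_stim, H])          # stimulus and history side by side

spikes = np.array([0, 1, 0, 0, 1, 0])
X_stim = np.ones((6, 1))                   # dummy stimulus column
X = add_spike_history(X_stim, spikes, n_lags=2)
print(X)
```

Large negative weights on the first history lags would give an absolute refractory period; smaller, slowly decaying negative weights give a relative one.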
It’s possible to extend this spike-history GLM so that a spike modifies not only the gain of the neuron but also its temporal filter. In the IF model, right after a spike, the neuron is emptied of its input. This is equivalent to saying that following a spike, its temporal filter is truncated so that it only integrates over the period since the latest spike. It’s easy to modify the design matrix of a GLM so that following a spike, inputs before this critical time are set to 0.
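One way to implement that modification (the helper name is hypothetical, and this is just one plausible reading of the scheme): for each time bin, zero out the lagged-stimulus columns that reach back past the most recent spike, so the model only integrates input since its last reset.

```python
import numpy as np

# Sketch of the IF-style design-matrix truncation (hypothetical helper):
# stimulus lags that precede the most recent spike are set to zero.
def truncate_before_last_spike(X_lagged, spikes):
    T, n_lags = X_lagged.shape
    X = X_lagged.copy()
    last_spike = -np.inf
    for t in range(T):
        if spikes[t]:
            last_spike = t
        age = t - last_spike               # bins since the most recent spike
        if age < n_lags:
            X[t, int(age) + 1:] = 0.0      # lags older than the spike are wiped
    return X

# Lagged-stimulus design: column j holds the stimulus value j bins in the past
stim = np.arange(1.0, 7.0)
n_lags = 3
X_lagged = np.zeros((6, n_lags))
for j in range(n_lags):
    X_lagged[j:, j] = stim[:6 - j]
spikes = np.array([0, 0, 1, 0, 0, 0])
X_trunc = truncate_before_last_spike(X_lagged, spikes)
print(X_trunc)
```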
Now there are some problems in applying this idea directly. In the traditional IF model, the temporal filter is an exponential. In a typical GLM, the shape of the temporal filter is determined by the data; it will peak after some latency, and it might be bimodal. With a naive application of the IF-GLM, the entire input would be wiped out after a spike, and the cell would only reach full gain after an amount of time equal to its latency. Thus, at the very least, an IF-GLM should account for the latency of the neuron.
Furthermore, since there are many stages of temporal integration between the sensory input and the neuron, the nonlinear integration effects of IF neurons will be smeared out. It might be better, rather than assuming that the temporal filter is emptied after a spike, to let it simply change, and let the data decide what exactly those changes are. Such a GLM with a spike-history-dependent temporal filter is very similar to the Spike Response Model (SRM) of Gerstner. Here’s a very recent NIPS paper examining the link between an integrate-and-fire model and a GLM.
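One way this "let the data decide" idea could be set up (again, the helper name is hypothetical and this is only a sketch): keep the usual lagged-stimulus columns, and add a second block holding the same lags masked to the bins that precede the last spike. The GLM then fits a baseline filter plus a learned correction for the pre-spike portion, rather than hard-coding the IF assumption that it is zeroed.

```python
import numpy as np

# Sketch: duplicate the pre-spike portion of the lagged stimulus into a
# second block of columns, so its weights can be fit from data.
def split_design(X_lagged, spikes):
    T, n_lags = X_lagged.shape
    mask = np.zeros_like(X_lagged)
    last_spike = -np.inf
    for t in range(T):
        if spikes[t]:
            last_spike = t
        for j in range(n_lags):
            if t - j < last_spike:          # this lag reaches past the last spike
                mask[t, j] = X_lagged[t, j]
    return np.hstack([X_lagged, mask])

stim = np.array([1.0, 2.0, 3.0, 4.0])
n_lags = 2
X_lagged = np.zeros((4, n_lags))
for j in range(n_lags):
    X_lagged[j:, j] = stim[:4 - j]
spikes = np.array([0, 1, 0, 0])
X = split_design(X_lagged, spikes)
print(X)
```

With baseline weights w on the first block and correction weights c on the second, the effective weight on pre-spike input is w + c; fitting c = -w recovers the hard IF truncation, while intermediate values let the filter merely change shape after a spike.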
While I haven’t seen any applications of IF-GLMs or SRM-GLMs in the literature yet, it will be interesting to see whether models that better account for nonlinear temporal integration can explain some of the variance discrepancy in visual cortex.