Learning about GLMs and GAMs in neuroscience

(Figure from Wu, David and Gallant, 2006.)

During my lecture on Wednesday, a few students asked me where they could learn more about generalized linear and additive models (GLMs and GAMs) and their applications to systems identification in neuroscience. Unfortunately, there are few textbooks in computational neuroscience; most of them cover systems identification to some degree, most notably Marmarelis' latest. To the best of my knowledge, however, neuroscience books are pretty silent on GLMs and GAMs. The statistics literature, of course, covers these topics abundantly, but it can be a bit inaccessible for people whose background is in computer science or applied math.

A good introduction to GLMs and GAMs is the J. Vision article by Knoblauch and Maloney, Estimating Classification Images with Generalized Linear and Additive Models. The article is written in a tutorial style, with example R code showing how to build design matrices, fit models, and perform inference on model parameters. It focuses on the example of classification images, where a psychophysical observer responds yes/no in a detection or classification task spread over several thousand trials. The trials are made harder by the addition of noise, and the pattern of correct and incorrect responses lets the experimenter learn something about the observer's strategy for performing the task. In GLM terms, this task is well modeled as a logistic or probit regression.
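To make the classification-image idea concrete, here is a minimal sketch in R with simulated data. The observer's internal template and all variable names (n_trials, template, resp) are made up for illustration, not taken from the paper:

```r
set.seed(1)
n_trials <- 5000
n_pix    <- 64                                  # stimulus dimensionality (e.g. pixels)
noise    <- matrix(rnorm(n_trials * n_pix), n_trials, n_pix)
template <- sin(seq(0, pi, length.out = n_pix)) # hypothetical internal template
p_yes    <- plogis(noise %*% template - 1)      # probability of a "yes" response
resp     <- rbinom(n_trials, 1, p_yes)          # simulated yes/no responses

# The classification image is just the weight vector of a logistic GLM
# (swap in link = "probit" for probit regression)
fit    <- glm(resp ~ noise, family = binomial(link = "logit"))
cimage <- coef(fit)[-1]                         # drop the intercept
plot(cimage, type = "l")                        # noisy estimate of the template
```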

Now, the authors use what they call GAMs, but I would personally call them penalized GLMs in a spline basis: the basis they use doesn't endow the inputs with one-dimensional nonlinearities, but rather lowers the dimensionality of the inputs. In any case, the discussion of splines and implementation issues is lucid, and it should be pretty straightforward to apply the same ideas to neuronal systems identification, a.k.a. reverse correlation.
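Here is what that looks like in practice, continuing from the simulated data above: representing the 64 pixel weights with 10 spline coefficients shrinks the regression problem and smooths the estimate for free. The basis size (df = 10) is an arbitrary choice of mine:

```r
library(splines)
B <- bs(1:n_pix, df = 10, intercept = TRUE) # 64 x 10 B-spline basis matrix
Z <- noise %*% B                            # project the stimuli onto the basis
fit_spl    <- glm(resp ~ Z, family = binomial) # only 10 weights to estimate
cimage_spl <- B %*% coef(fit_spl)[-1]       # map back to pixel space
lines(cimage_spl, col = "red")              # smooth version of the raw estimate
```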

The authors frequently refer to the book Generalized Additive Models: An Introduction with R by Simon Wood, which, as I have mentioned previously, is a fantastic textbook. If you are the type of person who learns more by doing than by theoretical discussion, you will love this book. It's in the Texts in Statistical Science series, so it's oriented towards statisticians, but all the required background is included. The first two chapters cover linear models and GLMs, and the introduction to GLMs is beautiful. The rest of the book, on GAMs, is also great: there is a long discussion of different types of splines, tons of examples, and a lot of code.
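To give a taste of the workflow the book teaches, here is a fit with Wood's mgcv package on toy data I made up; the smooth penalty is chosen automatically:

```r
library(mgcv)
set.seed(2)
x   <- runif(500)
y   <- rpois(500, exp(1 + sin(2 * pi * x))) # toy Poisson data with a smooth nonlinearity
fit <- gam(y ~ s(x), family = poisson)      # s() sets up a penalized spline basis
plot(fit, shade = TRUE)                     # estimated smooth with a confidence band
```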

For systems identification applied to neurons, the review article Complete Functional Characterization of Sensory Neurons by System Identification is excellent. It takes what I would call an algorithmic view of the problem, covering loss functions, regularization, cross-validation, and the choice of stimulus set. It puts forward the view that every systems identification method is more or less linear regression plus bells and whistles.
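That view fits in a few lines of R. Below is a sketch of ridge-regularized reverse correlation on simulated data; the receptive field, noise level, and regularization strength are all invented for illustration:

```r
set.seed(3)
n <- 2000; d <- 40
X    <- matrix(rnorm(n * d), n, d)       # white-noise stimulus matrix
filt <- exp(-(1:d) / 8) * sin((1:d) / 3) # a made-up linear receptive field
y    <- X %*% filt + rnorm(n, sd = 2)    # noisy responses
lam  <- 10                               # regularization strength; pick by cross-validation
w    <- solve(crossprod(X) + lam * diag(d), crossprod(X, y)) # (X'X + lam*I)^{-1} X'y
```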

Indeed, linear regression, in its Bayesian and kernel variants, is at the heart of systems identification. If you're comfortable reading the machine learning literature, a great introduction to linear regression, bells and whistles included, is chapter 3 of Pattern Recognition and Machine Learning by Bishop. In fact, I think you should read the entire book several times, forwards and backwards. You might only get 20% of what he's talking about the first time around, but it gets better, and the book really is a goldmine of new ideas.
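For instance, the Bayesian linear regression of Bishop's chapter 3, with a Gaussian prior N(0, alpha^{-1} I) and noise precision beta, takes two lines (continuing with X, y, and d from the sketch above; the precision values here are arbitrary):

```r
alpha  <- 1; beta <- 0.25                               # assumed prior and noise precisions
S_post <- solve(alpha * diag(d) + beta * crossprod(X))  # posterior covariance
m_post <- beta * S_post %*% crossprod(X, y)             # posterior mean: ridge with lam = alpha/beta
```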

Dr. Paninski has excellent class notes on the subject of GLMs. A lot of the material is geared towards those who have a strong background in statistics, but it is very nicely put together. He has a long list of suggested readings (including an article by your humble narrator), which I recommend you go through. Probably one of my favorite papers that uses GLMs is the retina paper by Jonathan Pillow et al. The first time you see the model diagram in that paper, it's like: wow, holy shit, this seems unbelievably complicated and powerful (of course, in practice it's not much harder than doing straight-up reverse correlation, but that's our little secret).
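To back up that little secret: here is a bare-bones sketch of the same kind of model, a Poisson GLM of a spike train on the stimulus and the cell's own spike history. The toy data, lag counts, and the lagmat helper are all mine; the real paper adds coupling filters between cells and smooth basis functions for the filters:

```r
set.seed(4)
T_bins <- 5000
stim   <- rnorm(T_bins)
spikes <- rpois(T_bins, exp(-2 + 0.8 * stim)) # toy spike counts driven by the stimulus
lagmat <- function(x, nlags)                  # matrix of lagged copies of x
  sapply(1:nlags, function(k) c(rep(0, k), head(x, -k)))
Xs  <- lagmat(stim, 10)                       # stimulus-filter regressors (10 bins)
Xh  <- lagmat(spikes, 5)                      # spike-history regressors (5 bins)
fit <- glm(spikes ~ Xs + Xh, family = poisson) # that's the whole "GLM"
```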

Of course, pretty much every Paninski and Pillow paper should be on your reading list. They range from pretty accessible to brutal (Fokker-Planck, anyone?), but then these guys are pushing the boundaries. There's an entirely different literature coming from Emery Brown's lab, often focused on the hippocampus; it's quite good, but I'm not a huge fan of the notation. Theo tells me they have a banner in their lab that reads "To bin is a sin". I think point processes and time rescaling are nice and powerful, but I really prefer to think of systems identification as linear regression on amphetamines. Maybe that's just a professional deformation from my physics background, where I got brainwashed into thinking that every problem is a harmonic oscillator.

Finally, I’m a big fan of learning by programming, and there’s a ton of nice data over at crcns.org where you can get started with GLMs. For a beginner, I would suggest working with low dimensional systems, like hippocampal data or full-field visual stimulation, otherwise you spend your whole time fighting with RAM usage and other technical issues that really don’t have much to do with the science.
