-
Non-negative sparse priors
Sparseness priors, which encourage most of the weights to be small or zero, are very effective in constraining regression problems. The prototypical sparseness prior is the Laplacian prior (aka L1-prior), which imposes a penalty on the absolute value of individual weights. Regression problems (and GLMs) with Laplacian priors can be easily solved by Maximum a Posteriori (MAP) estimation…
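To make the idea concrete, here's a minimal sketch of MAP estimation under a non-negative Laplacian prior, using scikit-learn's Lasso with its positive=True option (my own illustration, not code from the post):

    # MAP estimation with a Laplacian (L1) prior on the weights is equivalent
    # to L1-penalized least squares; positive=True makes it non-negative.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 200, 50
    X = rng.standard_normal((n, p))
    w_true = np.zeros(p)
    w_true[:5] = rng.uniform(0.5, 2.0, size=5)   # only 5 nonzero (positive) weights
    y = X @ w_true + 0.1 * rng.standard_normal(n)

    # alpha sets the strength of the prior: larger alpha -> sparser solution
    model = Lasso(alpha=0.05, positive=True).fit(X, y)
    print("nonzero weights recovered:", np.sum(model.coef_ > 0))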
-
Bounds on accountable deviance in Generalized Linear Models
The quality of fit in Generalized Linear Models (GLMs) is usually quantified by the deviance, or twice the negative log-likelihood. When there’s a high level of noise in the data, it’s difficult to interpret the deviance directly; the lower bound on the deviance doesn’t take the noise into account, and is much too low. I had…
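For reference, here's how the deviance works out in the Poisson case: twice the log-likelihood ratio between the saturated model (which predicts each observation exactly) and the fitted model. The sketch below is my own illustration, not code from the post:

    # Poisson deviance: D = 2 * sum( y*log(y/mu) - (y - mu) ),
    # with the convention that y*log(y/mu) = 0 when y == 0.
    import numpy as np

    def poisson_deviance(y, mu):
        y_safe = np.where(y > 0, y, 1.0)          # avoid log(0); term is 0 anyway
        term = np.where(y > 0, y * np.log(y_safe / mu), 0.0)
        return 2.0 * np.sum(term - (y - mu))

    y = np.array([0, 1, 3, 2, 0, 5])              # observed spike counts
    mu = np.array([0.5, 1.2, 2.5, 2.0, 0.3, 4.5]) # fitted means
    print(poisson_deviance(y, mu))
    # The textbook lower bound is 0 (mu == y exactly), which no model can
    # reach on noisy data -- hence the need for a noise-aware bound.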
-
Denoising and spike detection in a Utah array
Our V4 Utah array died last week… and then it worked again this week. It’s pretty amazing that it’s still up and running after about 18 months. It’s getting noisier, however, and increasingly what’s being recorded is mostly multi-unit activity (MUA). To have any chance of detecting single units, it’s therefore quite important to denoise…
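As a rough sketch of what a detection pipeline can look like — bandpass filtering followed by a robust amplitude threshold, in the style of Quian Quiroga et al. — here's an illustrative version (not the actual code we use):

    # Illustrative spike detection: bandpass the raw trace, estimate the noise
    # robustly from the median absolute value, threshold at a few sigma.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_spikes(raw, fs, thresh_sd=4.5):
        b, a = butter(3, [300 / (fs / 2), 3000 / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, raw)
        sigma = np.median(np.abs(filtered)) / 0.6745  # robust noise estimate
        crossings = np.flatnonzero(filtered < -thresh_sd * sigma)
        if crossings.size == 0:
            return crossings
        # keep only the first sample of each crossing (~1 ms dead time)
        keep = np.insert(np.diff(crossings) > fs // 1000, 0, True)
        return crossings[keep]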
-
Optimizing GLM hyperparameters through the evidence
I wrote earlier about a recent paper by the Pillow lab which uses priors optimized through the evidence (aka marginal likelihood) to estimate spatially and frequency-localized receptive fields. It seems that evidence optimization might be seeing something of a revival as a technique for estimating model hyperparameters. I just posted an update on my GLM…
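For a feel of the basic machinery — not the localized priors of the paper itself — scikit-learn's BayesianRidge sets its two hyperparameters by maximizing the evidence:

    # Evidence (type-II maximum likelihood) optimization of hyperparameters:
    # BayesianRidge iteratively re-estimates the weight-prior and noise
    # precisions to maximize the marginal likelihood of the data.
    import numpy as np
    from sklearn.linear_model import BayesianRidge

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 20))
    w_true = rng.standard_normal(20)
    y = X @ w_true + 0.5 * rng.standard_normal(100)

    model = BayesianRidge().fit(X, y)
    print("weight-prior precision:", model.lambda_)
    print("noise precision:", model.alpha_)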
-
Integrate-and-fire neurons within the GLM framework
Generalized linear models are very useful in modeling neural responses to dynamic stimuli. The Poisson-exponential GLM is the basis of many recent descriptions of responses in the retina, LGN, and visual cortex. The Poisson-exponential GLM accounts for some aspects of neuronal data not well captured by earlier methods like reverse correlation; in particular, the…
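To illustrate the model class (a generic sketch, not any particular paper's implementation): spike counts are modeled as Poisson with rate exp(X·w), and the filter w is fit by maximum likelihood. The negative log-likelihood is convex in w, which is one reason this GLM is so tractable:

    # Poisson-exponential GLM: y ~ Poisson(exp(X @ w)).
    # Negative log-likelihood (dropping the constant log(y!) term):
    #   NLL(w) = sum( exp(X @ w) - y * (X @ w) )
    import numpy as np
    from scipy.optimize import minimize

    def nll(w, X, y):
        eta = X @ w                       # linear filter output
        return np.sum(np.exp(eta) - y * eta)

    rng = np.random.default_rng(2)
    X = rng.standard_normal((500, 10))    # stimulus design matrix
    w_true = 0.3 * rng.standard_normal(10)
    y = rng.poisson(np.exp(X @ w_true))   # simulated spike counts

    w_hat = minimize(nll, np.zeros(10), args=(X, y)).x
    print("filter recovery r =", np.corrcoef(w_true, w_hat)[0, 1])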
-
Comp Neuro News: a news aggregator
Comp Neuro News is a social news website about computational neuroscience, recently created by Ian Stevenson and Jeff Teeters from UC Berkeley. It’s the same idea as Reddit or Digg: users submit links, which are then upvoted by interested members, yielding a constantly refreshed list of interesting articles. Not much activity currently, but I’m sure…