# Category: Math

• ### Non-negative sparse priors

Sparseness priors, which encourage most of the weights to be small or exactly zero, are very effective at constraining regression problems. The prototypical sparseness prior is the Laplacian prior (aka L1 prior), which imposes a penalty on the absolute value of each individual weight. Regression problems (and GLMs) with Laplacian priors can be easily solved by Maximum a […]
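The L1 penalty needs very little machinery to optimize. Here's a minimal Python sketch (not the Matlab code from the post) of L1-penalized least squares solved by iterative soft-thresholding (ISTA); the toy problem, penalty strength, and iteration count are all illustrative:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """L1-penalized least squares, min_w 0.5*||y - Xw||^2 + lam*||w||_1,
    solved by iterative soft-thresholding (ISTA)."""
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)           # gradient of the squared-error term
        z = w - grad / L
        # Soft-thresholding shrinks small coefficients to exactly zero
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

# Toy problem: only 3 of 20 weights are truly nonzero
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)
w_hat = lasso_ista(X, y, lam=5.0)
```

The soft-threshold step is what produces exact zeros, which is the whole point of the Laplacian prior over, say, a Gaussian one.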

• ### Approximate log determinant of huge matrices

Log determinants frequently need to be computed as part of Bayesian inference in models with Gaussian priors or likelihoods. They can be quite troublesome to compute; done through the Cholesky decomposition, it’s an O(n³) operation. It’s pretty much infeasible to compute the exact log det for arbitrary matrices larger than 10,000 x 10,000. Models with that many […]
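For intuition, here's a hedged Python sketch of one matrix-free alternative (not necessarily the method from the post): estimate log det A = tr(log A) with Hutchinson's randomized trace estimator and a truncated Taylor series for the matrix log, so only matrix-vector products with A are needed. Probe and term counts are illustrative:

```python
import numpy as np

def logdet_hutchinson(A, n_probes=50, n_terms=50, rng=None):
    """Stochastic estimate of log det(A) for a symmetric positive-definite A.

    Uses log det A = tr(log A), a Hutchinson trace estimator with
    Rademacher probes, and a truncated Taylor series for the matrix log,
    so only matrix-vector products with A are required."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    # Rescale so eigenvalues fall in (0, 1]: log det A = n*log(a) + log det(A/a)
    a = np.linalg.norm(A, 1)               # cheap upper bound on the top eigenvalue
    B = A / a
    acc = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        v = z.copy()
        # z' log(B) z = -sum_k z'(I - B)^k z / k, built up with matvecs
        for k in range(1, n_terms + 1):
            v = v - B @ v
            acc -= (z @ v) / k
    return n * np.log(a) + acc / n_probes

# Sanity check on a small SPD matrix where the exact answer is cheap:
rng = np.random.default_rng(1)
G = rng.standard_normal((50, 50))
A = 2.0 * np.eye(50) + 0.05 * (G + G.T)    # well-conditioned SPD test matrix
est = logdet_hutchinson(A, rng=0)
exact = np.linalg.slogdet(A)[1]
```

The cost per probe is n_terms matvecs, so for sparse or structured A this scales far better than the O(n³) Cholesky route, at the price of a stochastic error.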

• ### Verifying analytical Hessians

When using optimization functions that require Hessians, it’s easy to mess up the math and end up with incorrect second derivatives. While Matlab’s optimization toolbox offers the DerivativeCheck option for checking gradients, it doesn’t work on Hessians. This nice package available on Matlab Central will compute Hessians numerically, so you can easily double-check your […]
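The same check is easy to sketch in Python: compute a central-difference Hessian and compare it against your analytic one. The Rosenbrock function here is just a stand-in test case:

```python
import numpy as np

def numerical_hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function f at x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            # Fourth-order stencil for the mixed partial d2f/dxi dxj
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

# Verify a hand-derived Hessian of the Rosenbrock function
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

x0 = np.array([1.2, 1.3])
H_analytic = np.array([[1200.0 * x0[0] ** 2 - 400.0 * x0[1] + 2.0, -400.0 * x0[0]],
                       [-400.0 * x0[0], 200.0]])
H_num = numerical_hessian(rosen, x0)
```

If the two disagree beyond finite-difference error, the bug is almost certainly in the analytic second derivatives.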

• ### Using the binomial GLM instead of the Poisson for spike data

Cortical spike trains roughly follow Poisson statistics. When it comes to modeling the spike rate as a function of a stimulus (in a receptive field estimation context, for example), it’s thus natural to use the Poisson GLM. In the Poisson GLM with canonical link, the rate is assumed to be generated by a weighted sum […]
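As a reference point, here's a small Python sketch of the Poisson GLM with canonical (log) link, fit by Newton/IRLS on simulated spike counts; the stimulus matrix and weights are made up for illustration:

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Maximum likelihood for a Poisson GLM with canonical (log) link,
    rate = exp(X @ w), via Newton's method (IRLS)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ w)                 # predicted rates
        grad = X.T @ (y - mu)              # score of the Poisson log-likelihood
        H = X.T @ (mu[:, None] * X)        # observed/Fisher information
        w = w + np.linalg.solve(H, grad)   # Newton step
    return w

# Simulated spike counts from a known filter
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))         # stimulus design matrix
w_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ w_true))        # Poisson spike counts
w_hat = fit_poisson_glm(X, y)
```

With the canonical link the log-likelihood is concave in w, so Newton converges reliably; this is the baseline the binomial alternative in the post would be compared against.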

• ### Adaptive Metropolis-Hastings – a plug-and-play MCMC sampler

Gibbs sampling is great, but convergence is slow when parameters are correlated. If the covariance structure is known, you can reparametrize to get better mixing. Alternatively, you can keep the same parametrization but switch to Metropolis-Hastings with a Gaussian proposal distribution whose covariance is similar to that of the model parameters. But what if you don’t know […]
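A minimal Python sketch of the adaptive idea, in the spirit of Haario-style adaptive Metropolis (whether this matches the post's sampler exactly is not guaranteed): the Gaussian proposal covariance tracks the empirical covariance of the chain so far, so no covariance structure needs to be known in advance. The target and tuning constants are illustrative:

```python
import numpy as np

def adaptive_mh(log_p, x0, n_samples=6000, adapt_start=500, rng=None):
    """Adaptive Metropolis in the spirit of Haario et al.: the Gaussian
    proposal covariance tracks the empirical covariance of the chain."""
    rng = np.random.default_rng(rng)
    d = len(x0)
    samples = np.empty((n_samples, d))
    x = np.asarray(x0, dtype=float)
    lp = log_p(x)
    scale = 2.38 ** 2 / d                  # classic optimal-scaling factor
    cov = np.eye(d)
    jitter = 1e-6 * np.eye(d)              # keeps the proposal non-degenerate
    for t in range(n_samples):
        if t > adapt_start:
            cov = np.cov(samples[:t].T) + jitter   # adapt to the chain so far
        prop = rng.multivariate_normal(x, scale * cov)
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples[t] = x
    return samples

# Strongly correlated 2-D Gaussian target, the case where plain Gibbs crawls
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
P = np.linalg.inv(Sigma)
samples = adaptive_mh(lambda x: -0.5 * x @ P @ x, np.zeros(2), rng=0)
```

Recomputing np.cov from scratch each step is wasteful; a running covariance update makes this O(d²) per iteration, but the sketch keeps it simple.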

• ### Waiting times in cyclical HMMs: modeling neuronal refractoriness

Cyclical Hidden Markov models (HMMs) can be used to sort spikes. For example, in Herbst et al. (2008), wideband data is assumed to be generated by an HMM, where for most of the time the HMM is in the rest state, and with small probability jumps to the spike initiation state. Once it’s in the […]
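A toy Python sketch of the cyclical structure (parameters are made up, and the emission model is omitted): a rest state that occasionally jumps to a spike-initiation state, then marches deterministically through a chain of spike states before cycling back to rest. Chaining states is what enforces a hard refractory period between spikes:

```python
import numpy as np

def simulate_cyclical_hmm(p_fire=0.02, n_spike_states=5, n_steps=20000, rng=None):
    """State sequence of a toy cyclical HMM: state 0 is rest; with
    probability p_fire per step it jumps to state 1 (spike initiation),
    then marches through states 2..n_spike_states and cycles back to
    rest.  The chain length sets a hard refractory period."""
    rng = np.random.default_rng(rng)
    state = 0
    states = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        states[t] = state
        if state == 0:
            state = 1 if rng.random() < p_fire else 0   # geometric waiting time
        else:
            state = (state + 1) % (n_spike_states + 1)  # deterministic chain
    return states

states = simulate_cyclical_hmm(rng=0)
spike_times = np.flatnonzero(states == 1)   # entries into the spike-initiation state
isis = np.diff(spike_times)                 # inter-spike intervals, in samples
```

With a single self-transitioning state the dwell time would be geometric (mode at 1 sample); the chain of states instead guarantees no inter-spike interval shorter than the chain, which is the refractoriness being modeled.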