The PSTH (post-stimulus time histogram) summarizes the timing of neuronal spikes following a stimulus. When few trials are available, or the neuron being recorded seldom fires, the PSTH can be quite noisy. Thus, the PSTH is frequently smoothed with a Gaussian kernel — for example, to reliably estimate the latency of the response. It is often difficult, however, to find a smoothing bandwidth that preserves transients while appropriately smoothing sustained responses.
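To make the standard approach concrete, here is a minimal sketch of building a trial-averaged PSTH and smoothing it with a Gaussian kernel. This is illustrative only (the function names, bin width, and simulated Poisson spikes are my own choices, not from the paper), assuming NumPy and SciPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def psth(spike_times, n_trials, t_start=0.0, t_stop=1.0, bin_width=0.01):
    """Trial-averaged PSTH in spikes/s from spike times pooled across trials."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / (n_trials * bin_width), edges[:-1]

# Toy data: 5 trials of a homogeneous ~20 Hz Poisson neuron over 1 s.
rng = np.random.default_rng(0)
spikes = rng.uniform(0.0, 1.0, size=rng.poisson(20 * 5))

rate, t = psth(spikes, n_trials=5)
smoothed = gaussian_filter1d(rate, sigma=2)  # sigma in bins: 2 bins = 20 ms here
```

The fixed `sigma` is exactly the problem described above: one bandwidth must serve both sharp transients and slow sustained activity.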

Bayesian adaptive regression splines (BARS; paper available here) offer an alternative method for denoising a PSTH. BARS works by fitting a function (a spline) whose smoothness is allowed to vary as a function of time. Let's call the underlying firing rate that we're trying to estimate f(t). You can approximate f(t) by a weighted sum of N localized basis functions b_i(t) (say, Gaussians or piecewise cubic polynomials): f(t) ≈ Σᵢ wᵢ bᵢ(t). The nice thing about this construction is that for fixed basis functions, estimating the weights wᵢ is equivalent to fitting a (generalized) linear regression, which is easy; this is the approach used to fit one-dimensional nonlinearities in generalized additive models, for example.
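A quick sketch of that reduction to linear regression, with Gaussian bumps as the basis (the centers, bandwidth, and toy target are arbitrary choices for illustration):

```python
import numpy as np

# Design matrix of fixed Gaussian basis functions b_i(t): once the bases are
# fixed, estimating the weights w_i is ordinary least squares.
t = np.linspace(0, 1, 200)
centers = np.linspace(0, 1, 15)  # N = 15 basis functions
B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.05) ** 2)

target = np.sin(2 * np.pi * t)   # stand-in for the true rate f(t)
w, *_ = np.linalg.lstsq(B, target, rcond=None)
approx = B @ w                   # f(t) ≈ Σᵢ wᵢ bᵢ(t)
```

For spike counts one would swap the least-squares step for a Poisson GLM, but the structure of the problem is the same.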

You can increase the accuracy of the approximation by setting N to an increasingly large number and decreasing the bandwidth of each basis function. However, if you want to estimate f(t) from noisy data, each basis function that you add eats up a degree of freedom by way of the weights wᵢ. An alternative way of getting a good approximation of f(t) without increasing N is to place the basis functions judiciously: take basis functions away from smoother time periods and place them in transients. This is the approach used by BARS; it adaptively places spline knots to track transients and smooth away quiet periods, as shown below.
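The payoff from knot placement can be demonstrated with an ordinary least-squares spline (BARS itself chooses the knots automatically; here I place them by hand, and the toy rate and knot positions are my own):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Toy rate: a sharp transient at t = 0.2 on a flat baseline, plus noise.
t = np.linspace(0, 1, 400)
f = 5 + 40 * np.exp(-0.5 * ((t - 0.2) / 0.01) ** 2)
y = f + np.random.default_rng(1).normal(0, 1, t.size)

# Same number of interior knots: spread uniformly vs. clustered at the transient.
uniform_knots = np.linspace(0.1, 0.9, 8)
adaptive_knots = np.array([0.15, 0.18, 0.2, 0.22, 0.25, 0.4, 0.6, 0.8])

s_uniform = LSQUnivariateSpline(t, y, uniform_knots)
s_adaptive = LSQUnivariateSpline(t, y, adaptive_knots)

err_uniform = np.mean((s_uniform(t) - f) ** 2)
err_adaptive = np.mean((s_adaptive(t) - f) ** 2)
```

With the same budget of knots, clustering them at the transient recovers the peak far better, while the sparse coverage elsewhere smooths away the noise.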

Finding the best locations for the knots is computationally non-trivial. BARS embeds the problem in a fully Bayesian model, including provisions for the Poisson statistics of neurons, and then uses a Markov chain Monte Carlo (MCMC) method to sample from the posterior distribution of the firing rate. One advantage of the MCMC approach is that you get meaningful confidence intervals for the firing rate in addition to a single “best” estimate of the rate. There are free implementations for both Matlab and R. I’ve used it on MST data to determine the timing of the peak firing rate despite small numbers of trials per condition, and it works great.
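The confidence intervals fall out of the samples almost for free: given a matrix of posterior draws of the rate, pointwise credible bands are just percentiles across samples. A sketch with stand-in Gaussian draws in place of real MCMC output (the shapes and numbers are illustrative, not from any BARS implementation):

```python
import numpy as np

# Pretend MCMC output: 1000 posterior draws of the rate over 100 time bins.
rng = np.random.default_rng(2)
samples = 20 + rng.normal(0, 2, size=(1000, 100))

rate_hat = samples.mean(axis=0)                       # posterior mean estimate
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)  # pointwise 95% band
```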


I loved that paper, and the follow-up regarding testing equality between two histograms. It’s more powerful than, say, testing against central tendency or variance.