I’ve received a few emails recently about packages I submitted to Matlab Central that include C code to be compiled into a MEX file. I’ve always developed MEX files with GCC on Linux, and my understanding of C is rudimentary, so I wasn’t sure what was needed to get them to compile on Windows with Visual C++. Users reported that VC++ was giving tons of compilation errors.
Turns out it’s pretty straightforward. The C compiler in Visual Studio follows C89, which requires variables to be declared at the top of a block. It’s a very annoying limitation. The solution is to change the extension of every .c file to .cpp so it compiles with the C++ compiler, which has no such restriction. Obviously you’ll need to change #include statements accordingly. C is not a strict subset of C++, but unless you’re doing something fancy it should compile as is. A nice side effect is that you no longer need the -std=c99 flag in GCC to get it to accept C++-style // comments.
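To make the restriction concrete, here’s a minimal sketch (scale_sum is a made-up function, not from any of my packages). Declaring `i` in the for-loop header, or declaring `scaled` after the loop body, is legal in C99 and C++ but rejected by the Visual Studio C compiler:

```c
/* C89-style: all declarations at the top of the block, so this version
   compiles under the Visual Studio C compiler as well as GCC and C++. */
double scale_sum(const double *x, int n) {
    double sum = 0;
    double scaled;
    int i;
    for (i = 0; i < n; i++)   /* C99 would allow "for (int i = ..." here */
        sum += x[i];
    scaled = 2 * sum;          /* C99 would allow declaring scaled here */
    return scaled;
}
```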
I’ve updated mexme accordingly, so you can generate cross-platform MEX files with only a rudimentary understanding of C.
Some other gotchas include differing definitions of various numeric types across platforms (including 32-bit vs. 64-bit). The trick here is to declare all non-doubles using the fixed-width typedefs that The MathWorks provides through mex.h:
real32_T //a.k.a. single
real64_T //a.k.a. double
Assuming your code is plain-vanilla C, that should be enough to get it to work.
If the MEX file includes file I/O or some other Linux-specific feature, you will need to do more work. You can try compiling with MinGW or Cygwin; there’s a package on SourceForge called gnumex that will get that running. Or you could use preprocessor macros to select the appropriate OS-specific functions.
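As an illustration of the macro approach (a sketch; path_separator is a made-up name, and in practice the same pattern would wrap the OS-specific file-I/O calls themselves):

```c
/* _WIN32 is predefined by Visual Studio (and MinGW); anything else
   falls through to the POSIX branch. */
#ifdef _WIN32
static const char kPathSep = '\\';
#else
static const char kPathSep = '/';
#endif

/* Returns the directory separator selected at compile time. */
char path_separator(void) {
    return kPathSep;
}
```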
On a related note, you will want to get into the habit of using mwSize and mwIndex to refer to the sizes and indices of a matrix, which will keep your code compatible with -largeArrayDims.
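For illustration, here’s a sketch of a column-major reduction typed with these names. The typedefs at the top are stand-ins so the snippet compiles outside of Matlab; in a real MEX file they come from mex.h, and mwSize/mwIndex become 64-bit under -largeArrayDims:

```c
#include <stddef.h>

/* Stand-in typedefs so this sketch compiles outside Matlab; in a real
   MEX file these definitions come from mex.h. */
typedef float  real32_T;
typedef size_t mwSize;
typedef size_t mwIndex;

/* Sum a single-precision, column-major m-by-n matrix, using mwSize for
   dimensions and mwIndex for subscripts. */
real32_T matrix_sum(const real32_T *x, mwSize m, mwSize n) {
    real32_T sum = 0;
    mwIndex i, j;
    for (j = 0; j < n; j++)
        for (i = 0; i < m; i++)
            sum += x[j * m + i];  /* column-major linear index */
    return sum;
}
```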
I’ve updated my automatic 2d Gaussian surface fitting function, available on Matlab Central, to add a function to fit Gabors to noisy data. As I’ve discussed previously, fitting a parametric surface to noisy data is pretty trivial whether it’s a Gabor, Gaussian, or otherwise: it’s a straightforward application of numerical optimization that can be done with lsqcurvefit. The annoying bit is choosing the initial parameters when the error surface is non-convex, especially when you have a bunch of data to fit.
The Gaussian surface fitting function does this by an exhaustive search through parameter space, made efficient via FFT-based convolution. The new Gabor fitting function uses the fact that the absolute value of the Fourier transform of a Gabor is a pair of Gaussians centered at plus and minus the Gabor’s peak spatial frequency. It gets a good guess of the orientation, spatial frequency and bandwidth of the Gabor using this fact, locates the approximate position of the Gabor, and then fills in the details with lsqcurvefit.
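In one dimension, the fact the initializer relies on can be written out explicitly (using the unitary frequency convention; the approximation holds when the two bumps are well separated):

```latex
g(x) = e^{-x^2 / (2\sigma^2)} \cos(2\pi f_0 x + \phi)

\left|\hat{g}(\nu)\right| \approx \frac{\sigma\sqrt{2\pi}}{2}
  \left[ e^{-2\pi^2 \sigma^2 (\nu - f_0)^2}
       + e^{-2\pi^2 \sigma^2 (\nu + f_0)^2} \right]
```

So the magnitude spectrum is a pair of Gaussians at ±f₀ whose width is inversely proportional to σ, which is why the envelope width, frequency and (in 2d) orientation can be read off directly in the Fourier domain.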
I’ve removed the function for Gibbs sampling the Gaussian fit; it didn’t work too well in practice with data that does not conform well to the model assumptions. I think this is partially because the posterior is often composed of islands of high probability separated by a sea of low probability, and the slice sampler is not well adapted to this configuration.
Update: I’ve now added Metropolis-Hastings sampling to replace the original Gibbs method. I’m not sure how much more reliable it will be, but this scheme is at least less messy to implement. Let me know.
B-splines are mathematical curves with convenient properties. The B in B-spline stands for basis. A curve y(t) is defined in terms of weights w and knots k, such that y(t) = Σ_i w_i B_i(t), where the B_i are the basis functions. Each basis function is a piecewise polynomial with compact support determined by the position of the knots. The degree of the polynomial is determined by the difference between the number of knots and the number of weights. If the number of knots is 2 more than the number of weights, you get piecewise linear basis functions, each of which looks like a little tent. For 4 extra knots, you get a “hump” defined by a cubic, which looks like a Gaussian but has compact support (it is exactly zero outside its interval). Chapter 5 of The Elements of Statistical Learning offers an excellent introduction to splines.
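To make the definition concrete, here’s a sketch of the standard Cox–de Boor recursion (the function names are mine, not from the Matlab class). For n weights of degree d you need n + d + 1 knots, which matches the counting above: 2 extra knots gives linear, 4 extra gives cubic.

```c
/* Cox-de Boor recursion: B_{i,0} is an indicator on [k_i, k_{i+1}),
   and higher degrees blend two lower-degree bases. Divisions by zero
   (repeated knots) are treated as contributing zero. */
double bspline_basis(const double *k, int i, int d, double t) {
    double left = 0, right = 0, den;
    if (d == 0)
        return (k[i] <= t && t < k[i + 1]) ? 1.0 : 0.0;
    den = k[i + d] - k[i];
    if (den > 0)
        left = (t - k[i]) / den * bspline_basis(k, i, d - 1, t);
    den = k[i + d + 1] - k[i + 1];
    if (den > 0)
        right = (k[i + d + 1] - t) / den * bspline_basis(k, i + 1, d - 1, t);
    return left + right;
}

/* y(t) = sum_i w_i B_{i,d}(t); n weights require n + d + 1 knots. */
double bspline_eval(const double *k, const double *w, int n, int d, double t) {
    double y = 0;
    int i;
    for (i = 0; i < n; i++)
        y += w[i] * bspline_basis(k, i, d, t);
    return y;
}
```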
One example use of B-splines I discussed recently is BARS, or Bayesian adaptive regression splines. Here a PSTH is approximated with a smooth curve, and the knots are positioned adaptively. Generalized additive models also frequently use B-splines as smoothers. In general, B-splines (especially the cubic variety) excel as non-parametric smoothers. Linear splines have been used to infer input nonlinearities in reverse correlation-type experiments.
Here is a lightweight Matlab class that implements B-Splines. It allows one to fit, evaluate and differentiate B-Splines, and is well documented.
Update (08 Sep. 2011): I’ve now included C code in the Matlab package to evaluate B-splines much more rapidly (by a factor ranging from 5x to 50x). This now makes the code appropriate for implementing large-scale GAMs.
Matching Pursuit (MP) is a greedy algorithm to obtain a sparse representation for a signal in terms of elements of a dictionary. I’ve seen it used from time to time in neuroscience. For example, Smith and Lewicki (2006) use it as part of their demonstration that a sparse code for natural sounds matches the properties of cochlear receptors. It’s also frequently used in EEG/MEG analysis and recently was applied to LFPs in Ray and Maunsell (2011).
With one-dimensional signals such as EEG and sound, it’s natural to use a dictionary composed of a set of basis elements offset at every possible time point. In that case, the result is a sparse convolutional code.
Here’s an implementation in Matlab. It’s fairly efficient: it precomputes all convolutions outside the main loop and uses a 3-level search tree to find the current maximal correlation. I’ve also implemented a variant which forces all basis functions to be positive. It was born out of my frustration at failing to figure out how to use the “anywave” block in the Matching Pursuit Toolkit (MPTK).
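For reference, the core greedy loop fits in a few lines once you strip out the convolutional structure and the search tree. This is a sketch over a generic dictionary of unit-norm atoms (the function name and signature are mine, not the Matlab implementation’s):

```c
#include <math.h>

/* Minimal matching pursuit: greedily approximate signal x (length n)
   with unit-norm dictionary atoms atoms[0..natoms-1], for n_iter
   iterations. Records one (atom index, coefficient) pair per iteration;
   x is overwritten with the residual. Assumes atoms are normalized. */
void matching_pursuit(double *x, int n, const double **atoms, int natoms,
                      int n_iter, int *chosen, double *coef) {
    int it, a, i;
    for (it = 0; it < n_iter; it++) {
        int best = 0;
        double best_ip = 0;
        for (a = 0; a < natoms; a++) {       /* correlate residual with atoms */
            double ip = 0;
            for (i = 0; i < n; i++)
                ip += x[i] * atoms[a][i];
            if (fabs(ip) > fabs(best_ip)) { best_ip = ip; best = a; }
        }
        chosen[it] = best;                   /* record the greedy choice */
        coef[it] = best_ip;
        for (i = 0; i < n; i++)              /* subtract the projection */
            x[i] -= best_ip * atoms[best][i];
    }
}
```

The positive-only variant simply restricts the argmax to positive inner products instead of absolute values.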
Plexon has a Matlab API that can be used to stream wideband, LFP and spike data for real-time processing. It’s available in the Downloads section of the Plexon website under the name “Matlab client development kit”.
They include several sample applications with the API, including a real-time LFP monitor. However, it’s missing a few features that would make it a useful noise-diagnostic tool, IMHO. I programmed a very simple wideband monitor which shows, for a given channel, a snippet of the raw signal, a spectrogram, and the mean power spectrum over a larger window. Although it’s very simple and probably won’t work as is for other setups, it’s a good place to start.
Here’s a screenshot with a faulty pedestal:
Notice how the power (in dB) drops off very slowly at high frequencies. Here’s a ground loop:
Don’t try this at home, kids. We’ve been having some hardware issues, so I can’t really show you what a nice signal looks like, unfortunately. Here’s the source. You will need the Plexon API on your Matlab path, and the Plexon server must be running.