Here’s the first in what might become a regular feature on xcorr: an exposition of some of Matlab’s obscure but quite useful functions. Today’s function: bsxfun. The Matlab documentation describes bsxfun as follows:

Apply element-by-element binary operation to two arrays with singleton expansion enabled

This is not a very helpful definition. What bsxfun(@op,A,B) actually does is virtually expand A and B to a common size and then compute op(Abig,Bbig) element by element. It obviates the need for repmat and outer products in most scenarios. For example, to center the columns of X, you might use:

X = X - repmat(mean(X),size(X,1),1);

Or:

X = X - ones(size(X,1),1)*mean(X);

Either way, an intermediate matrix the size of X is constructed on the right-hand side, which is wasteful. bsxfun can center the columns of a matrix without any intermediate matrix:

X = bsxfun(@minus,X,mean(X));

How bsxfun virtually replicates matrices is straightforward. In the previous example, the A matrix has size MxN, while the B matrix has size 1xN. bsxfun virtually expands matrix B along its dimensions of size 1 (the singleton dimensions) to match the dimensions of A.
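To make the expansion concrete, here is a small sketch of the centering example (magic(4) is just an illustrative matrix):

```matlab
A = magic(4);            % 4x4
m = mean(A);             % 1x4 row vector of column means
% bsxfun treats m as if it were repmat(m, 4, 1), without allocating it:
C = bsxfun(@minus, A, m);
% Every column of C now sums to zero.
```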

Another example: construct the MxM matrix [1,2,3,…;0,1,2,…;-1,0,1,…]:

themat = bsxfun(@minus,(1:M),(0:M-1)');
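For M = 3, for instance, the row vector (1:M) and the column vector (0:M-1)' expand against each other to give a 3x3 result:

```matlab
M = 3;
themat = bsxfun(@minus, (1:M), (0:M-1)');
% themat =
%      1     2     3
%      0     1     2
%     -1     0     1
```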


Patrick, what you forgot to mention is that bsxfun provides a huge speed-up in many cases (if you manage to figure out how to use it). For example, if you need to compute Euclidean distance, bsxfun can be a tremendous boost (see this thread). In my code I use it a lot, and for the Euclidean distance I’ve got a 200x speed-up – instead of 20 seconds it is done in less than 0.1 seconds!
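One common bsxfun-based formulation of pairwise Euclidean distances looks like this (a sketch; the thread the commenter mentions may use a variant, and X, Y are assumed to hold one point per row):

```matlab
% Squared Euclidean distances between rows of X (n-by-d) and rows of Y (m-by-d):
% ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x'y, expanded to an n-by-m matrix.
D2 = bsxfun(@plus, sum(X.^2, 2), sum(Y.^2, 2)') - 2 * (X * Y');
D2 = max(D2, 0);          % guard against tiny negative values from round-off
D  = sqrt(D2);
```

The speed-up comes from replacing a double loop (or a repmat of each point set) with one matrix multiply plus a broadcasted addition.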

Similarly, when you need to multiply a dense square matrix by a diagonal matrix (some algorithms require it due to the use of SVD), you can also get a 10-50x speed-up (see this thread). My code is now flying like on a supercomputer :-)
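The diagonal-scaling trick can be sketched as follows: instead of forming diag(d) and paying for a full matrix multiply, scale the rows or columns directly (d and A are assumed names here):

```matlab
% A is n-by-n dense, d is an n-by-1 vector of diagonal entries.
B = bsxfun(@times, d, A);    % equivalent to diag(d) * A  (scales row i by d(i))
C = bsxfun(@times, A, d');   % equivalent to A * diag(d)  (scales column j by d(j))
```

This turns an O(n^3) matrix product into an O(n^2) elementwise scaling, which is where the reported speed-up comes from.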

I managed to put my old notes about bsxfun in a blog post here, just in case.

[…] You can read about bsxfun here if you are unfamiliar with this function. Because the results of whitening can be noisy, a fudge factor is used so that eigenvectors associated with small eigenvalues do not get overamplified. Thus the whitening is only approximate. Here’s an image patch whitened this way: […]