-
Coarse-grained parallelism in Matlab with parfor
Previously, I discussed how you can take advantage of multiple cores in C. In day-to-day research, however, it’s more common to work with high-level languages like Matlab and Python. Although Matlab has been multithreaded for several years now, it’s not very good at making full use of all the cores in a machine. You can verify this…
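As a rough illustration of the coarse-grained parallelism the post’s title refers to, here is a minimal parfor sketch. It assumes the Parallel Computing Toolbox is available; the pool size and the svd workload are arbitrary stand-ins, not taken from the post.

```matlab
% Minimal parfor sketch: each loop iteration is independent, so Matlab
% can dispatch iterations to separate worker processes and keep all
% cores busy.
parpool(4);                  % pool size is arbitrary; older releases
                             % used matlabpool instead

n = 16;
results = zeros(1, n);
parfor i = 1:n
    % Stand-in for an expensive, independent computation.
    results(i) = max(svd(randn(500)));
end

delete(gcp('nocreate'));     % shut the worker pool down when finished
```

The key constraint is that iterations must be order-independent; parfor refuses loops whose iterations depend on one another, which is what makes this style of parallelism coarse-grained and easy to reason about.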
-
Programming for multi-core environments
CPUs with multiple cores are currently the norm. Getting optimal performance out of these systems is challenging. I recently read Parallel Programming in C with MPI and OpenMP by Michael Quinn, a book that, although released in 2004, remains relevant. Dr. Quinn introduces two technologies which are available in C (and in Fortran…
-
DIY number-crunching machine on the cheap
My old work computer is starting to show its age, and so for about a year now I’ve been almost exclusively working on one of the computers in the lab cluster. The mini-cluster, which I built out of robust Core i7 920 machines to analyze array data, has been a great success, and now it’s…
-
The far-reaching influence of sparse coding in V1
Olshausen and Field (1996) made a big splash in visual neurophysiology and machine learning by offering an answer to a provocative question: Why are simple cell receptive fields (RFs) organized the way they are? After all, they could just as well be shaped like elongated sine waves, as in Fourier analysis, or they could…