Although PsychToolbox is very easy to get up and running, implementing a full experiment takes time. PsychToolbox is a toolbox, not a framework: there are no constraints on how you should organize your programming. If your programming is well-organized, it's easy to modify your experiments, to understand how a program works after you haven't touched it for a few months, and to share data with colleagues. Hence, I've decided to implement a full classification image experiment in PsychToolbox, including presentation and analysis, to show some best practices I've come up with over time and to make it easier for beginners to get started. You might disagree with some of the methods I've used, but I hope it will start a discussion on how to properly organize programming for experiments.
You can download the files here.
The experiment
I have implemented part of the experimental protocol used in Neri and Heeger 2002, published in Nature Neuroscience. It is a classic psychophysical reverse correlation protocol. A set of 11 vertical lines (the noise) is shown on screen; their luminances are chosen randomly. The lines are replaced with a new set about every 30 ms, and each trial lasts about 270 ms, giving 9 frames. In the middle of the presentation, a bright bar (the target) may be superimposed on the set of lines at the center of the screen. On every trial, the observer reports whether or not the target was shown.
The luminances of the bars are picked using a 9×11 array called the noise field. By taking the difference between the mean noise field on trials where the observer reported a target and trials where they did not, one retrieves the classification template used by the observer; this is called the mean kernel. Doing the same with the square of the noise field yields the variance kernel.
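Concretely, if you stack the noise fields into an nTrials × 9 × 11 array noise and record the observer's reports in a logical vector sawTarget (both names are mine, for illustration), the kernels are just differences of conditional means:

    % Sketch of the kernel computation; noise is nTrials x 9 x 11,
    % sawTarget is nTrials x 1 logical (true = observer reported a target)
    meanKernel = squeeze(mean(noise(sawTarget,:,:), 1)) - ...
                 squeeze(mean(noise(~sawTarget,:,:), 1));
    varKernel  = squeeze(mean(noise(sawTarget,:,:).^2, 1)) - ...
                 squeeze(mean(noise(~sawTarget,:,:).^2, 1));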
The version of the protocol I implemented uses a red fixation target, reminds the observer of the stimulus every 25 trials, and gives the observer a pause every 200 trials. For the first 100 trials, wrong responses are signaled to the observer by a 0.1 second beep. Although it wasn't in the original paper, I implemented a simple staircase procedure that adjusts the contrast of the target bar to keep performance around 75%. I chose this experiment because:
- Neri and Heeger’s paper is short, well-written, and influential
- The protocol used is classic classification image, which is very useful
- The stimulus is simple to render, so the code is concise
- There’s enough meat on the protocol bone, so to speak, to show interesting things about PsychToolbox
Organization of the programming
I split the experiment into 3 .m files: runciexperiment.m, analyzeciexperiment.m, and getciexperimentrandvars.m. Some important points:
- It’s a good idea to split your experiment into 2 parts, one for presentation and one for analysis, because they are logically separate and will frequently run on two different machines. Also, you don’t want to break your experiment presentation while trying new analysis techniques.
- The presentation part of the experiment saves all data in a .mat file; the analysis part loads this .mat file. The data you collect from observers is precious; you don’t want to lose it because Matlab crashes. By imposing that the two files communicate only through .mat files, you always leave a “file trail”, and you can back these files up automatically. Loading and saving .mat files is easy and fast, and it’s better than rolling your own file format. .mat files are also somewhat portable across platforms; you can read them in Python, R, and Java, for example.
- The purpose of getciexperimentrandvars.m is to generate the random variables used in the presentation and analysis. The full noise fields used in many reverse correlation experiments are large, which means that if you save them inside a .mat file, it will take a while to read the file and it will take up needless disk space. It’s often faster to regenerate the random variables for the analysis than to load them from disk. Random number generators use seeds to generate their output, so a standard solution is to store only the seeds in the .mat files and regenerate the noise fields by reseeding the random number generator as needed. Thus, getciexperimentrandvars.m takes a seed and an experimental parameters struct and spits out the random variables needed for a trial (see the sketch below). I do want to emphasize, though, that if it’s more practical to store the generated fields for whatever reason, then do it.
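Here is one possible implementation of getciexperimentrandvars.m, a minimal sketch using the modern rng interface; the body is my reconstruction, only the signature matches the description above:

    function noisefield = getciexperimentrandvars(seed, xp)
    % Regenerate the random variables for one trial from its seed,
    % so that only the seed needs to be stored in the .mat file.
    rng(seed);                                  % reseed the generator
    noisefield = randn(xp.nframes, xp.nbars);   % e.g. a 9 x 11 luminance noise field
    end

During presentation, you draw a fresh seed for each trial (e.g. seed = randi(2^31-1)), store it in data.seeds, and call this function; the analysis file later calls the same function with the stored seed and gets back the identical noise field.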
There is a 4th utility .m file, mergeDataSets.m, which allows several data sets (say, from different observers) to be merged together.
Presenting the experiment
runciexperiment.m contains the Matlab function runciexperiment. There is a difference between a script .m file and a function .m file: a function .m file is compiled into pseudo-code before being run, and is therefore faster than a script .m file. The presentation file should always be wrapped in a function to prevent PsychToolbox from missing presentation deadlines.
All of the data that needs to be saved for subsequent use is wrapped in the data struct. This struct contains metadata concerning the current presentation, for example, the title of the experiment and some notes. In addition, it contains the arrays required to recreate the experiment, as well as the responses of the observer. Using a struct is better than using an array for permanent file storage because it is self-documenting. For example, if you look at two-year-old data, you might not understand what dta(1,:) means, but data.response is more meaningful.
Storing metadata related to your experiment in the data struct itself is also a good idea, because it is attached to the data and never leaves it. Needless to say, this information is only useful if you use meaningful variable names and keep metadata up to date. Finally, using a struct means that if you add a new metric in your experiment (say, data.responsetime), it will automagically be saved when you call save(filename,'data').
There is a second struct inside the first, called xp for eXperimental Parameters. This struct is for parameters which are constant across trials and which are required to redo the experiment; this is a good place to store screen parameters such as the screen width and the gamma table, so you can easily transfer to a shiny new experimental setup when you get that NIH grant.
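Here's an illustrative sketch of how these two structs might be laid out (field names are examples, not necessarily the exact ones in the downloadable code):

    % Illustrative layout of the xp and data structs
    xp.nframes     = 9;        % frames per trial
    xp.nbars       = 11;       % vertical lines per frame
    xp.screenwidth = 40;       % cm; lets you recreate the viewing geometry
    xp.gammatable  = [];       % fill in from your calibration

    data.title    = 'Neri & Heeger 2002 classification image';
    data.notes    = 'Pilot run; dim room';
    data.xp       = xp;            % trial-constant parameters travel with the data
    data.seeds    = zeros(0, 1);   % one row per trial (column convention)
    data.response = zeros(0, 1);   % the observer's yes/no answers

    save(fullfile('data', 'ciexperiment-subj01.mat'), 'data');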
A warning to programmers. Matlab structs have no pass-by-reference (pointer) semantics. So if you write, say:

    mystruct.foo = 2;
    bar = mystruct;   % copies the whole struct by value
    bar.foo = 1;
    mystruct.foo      % still displays 2

The equivalent code in Java, where bar would be a reference to the same object, would leave mystruct.foo equal to 1 at the end. To get pass-by-reference semantics in Matlab, you must subclass the handle builtin class.
The trial loop takes the form of a while 1; to exit the trial loop, you use return. Oftentimes you don’t know in advance how many trials you’re going to run, and this is one way of allowing a variable number of trials. Before every trial, the program does some preprocessing, for example getting the values of the random variables for the trial. Then it flips the screen and starts the frame loop. You should do as little as possible inside the frame loop, otherwise you will get timing issues (PsychToolbox will miss deadlines). There is often a way to calculate, for example, the bounds of the presented geometric features outside of the main loop; I did this for the bounds of the vertical lines used in the experiment. Don’t bother optimizing anything outside of the frame loop; it’s a waste of time because it won’t affect presentation timing.
To do something every nth trial, I check whether mod(trialnumber,n) is equal to zero. This is the simplest way to implement blocks of trials and pauses. After every block of 200 trials, I save the current responses in a temporary .mat file; if Matlab crashes for whatever reason, you will lose at most the 200 most recent trials. The backup files are saved in the backup folder, while the data files are saved in the data folder. I also present auditory feedback for the first 100 trials using the PsychToolbox functions MakeBeep and Snd.
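Putting the pieces of this and the previous paragraph together, the trial loop looks roughly like this (a schematic sketch, not the verbatim contents of runciexperiment.m; variable names are illustrative):

    trialnumber = 0;
    while 1
        trialnumber = trialnumber + 1;

        % Preprocessing: everything slow happens before the frame loop
        seed = randi(2^31 - 1);
        noisefield = getciexperimentrandvars(seed, xp);
        % (barbounds, the 4 x 11 bounds of the vertical lines, is precomputed too)

        % Frame loop: keep it as tight as possible
        for frame = 1:xp.nframes
            colors = repmat(noisefield(frame,:), 3, 1);  % one grey level per bar,
                                                         % scaled into range in the real code
            Screen('FillRect', win, colors, barbounds);
            Screen('Flip', win);
        end

        % ...collect the observer's response into data.response(trialnumber,1)...
        data.seeds(trialnumber, 1) = seed;

        % Every nth trial: blocks, pauses, and crash insurance
        if mod(trialnumber, 200) == 0
            save(fullfile('backup', 'citemp.mat'), 'data');
        end
        if trialnumber == maxtrials
            save(fullfile('data', 'ciexperiment.mat'), 'data');
            return;   % exit the while 1 trial loop
        end
    end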
I quantified the accuracy of the observer by looking at the 16 most recent trials. If more than 12 are correct, I make the task harder by incrementing a contrast index; if fewer than 12 are correct, I make the task easier. Classification images are relatively insensitive to the actual accuracy level (Murray, Bennett & Sekuler 2002), as long as the task is not far too easy or too hard. Furthermore, when judging how to adjust the difficulty, it is better to look at accuracy in the immediately preceding trials than over all trials, because attention can change dramatically over long periods. This is especially true if your observers are tired undergrads with hangovers.
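A sketch of how this check can be implemented after each trial (variable names are mine; correct is a column vector with one logical entry per trial):

    if trialnumber >= 16
        ncorrect = sum(correct(end-15:end));   % hits in the 16 most recent trials
        if ncorrect > 12
            contrastindex = contrastindex + 1;           % too easy: lower the target contrast
        elseif ncorrect < 12
            contrastindex = max(1, contrastindex - 1);   % too hard: raise it
        end
    end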
Analyzing the experiment
The experiment is analyzed in a separate file. I have furnished a sample .mat file which contains real data from running myself as a subject for 1600 trials. I implemented two ways of analyzing the data: the standard classification image formula (as in the Neri & Heeger paper) and nonlinear regression (as in Wu, David & Gallant 2006 and Mineault & Pack 2008 VSS poster). The standard CI formula works as long as you use uncorrelated noise, and you’re not doing anything too fancy. Nonlinear regression works with any type of noise (e.g. 1/f noise) and can accommodate pretty much any classification experiment where you can write down an equation for a smooth LAM observer appropriate for your task. The second way seems more natural to me, but you might disagree.
For every trial, I regenerate the noise fields from the stored seeds in order to do the analysis. For the standard CI formula, I implemented a bootstrap analysis that z-scores the kernels after smoothing.
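For the curious, here is a hedged sketch of how such a bootstrap might look; computekernel is a hypothetical helper that applies the CI formula plus smoothing, and z-scoring against the bootstrap standard error is my reading of the procedure, not necessarily what the actual code does:

    nboot = 500;                                 % number of bootstrap resamples
    bootkernels = zeros(nboot, xp.nframes, xp.nbars);
    for bb = 1:nboot
        idx = randi(ntrials, ntrials, 1);        % resample trials with replacement
        bootkernels(bb,:,:) = computekernel(noise(idx,:,:), sawTarget(idx));
    end
    k  = computekernel(noise, sawTarget);        % observed smoothed kernel
    se = squeeze(std(bootkernels, [], 1));       % bootstrap standard error
    zk = k ./ se;                                % z-valued kernel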
I organized the file into cells using Matlab’s cell mode. This way, you don’t need to rerun the whole file to implement a new analysis, which allows rapid iteration.
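For instance, the analysis file can be organized along these lines (cell headings are illustrative):

    %% Load the data
    load(fullfile('data', 'ciexperiment.mat'));   % brings the data struct in

    %% Standard classification image
    % ...compute, z-score, and plot the mean and variance kernels...

    %% Nonlinear regression
    % ...modify and rerun just this cell while iterating on the fit...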
Convention over configuration
You might notice that I use a lot of conventions: using structs, mat files, naming schemes, etc. You might disagree with some of these conventions. The point I want to get across is that it doesn’t matter what conventions you use, as long as you stick to them. For every new experiment you run, you could choose a new data structure perfectly tailored to it. IMHO, that’s a bad idea. The reason is that you are going to spend a lot of time thinking about this new data structure, and your return on your time investment will be very close to nil. You’re better off sticking to a convention for every experiment of the same general type. That way, when you do a new experiment, you can basically copy and paste an old experiment and you’ll be 3/4 of the way to the new one.
Here’s a specific example. In the data struct, every trial matrix (seeds, responses, etc.) is a column vector. If you stick to that convention, then you will be able to use the mergeDataSets function, which merges numeric elements along the first dimension (the column dimension). Now, whether the responses matrix is a column or a row vector is completely arbitrary, but once you’ve settled upon a convention you can build generic tools, such as mergeDataSets, to massage your data and then you can reuse it.
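Here is a minimal sketch of how such a merge can work, assuming both data sets have the same numeric fields; the actual mergeDataSets handles the details:

    function merged = mergeDataSets(d1, d2)
    % Concatenate every numeric field along the first (trial) dimension,
    % relying on the column-vector convention described above.
    merged = d1;
    fields = fieldnames(d2);
    for ii = 1:length(fields)
        f = fields{ii};
        if isnumeric(d2.(f))
            merged.(f) = [merged.(f); d2.(f)];
        end
    end
    end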
A warning to programmers. Resist the temptation to over-engineer. Matlab is not Java.
Conclusion
I have shown how one can program a psychophysics experiment in a principled way. Disciplined programming means that experiments are easy to modify and data can easily be shared with colleagues. Some of the tentative best practices I have demonstrated include:
- Split your programming into a presentation part and an analysis part
- Wrap your presentation in a function
- Use structs to self-document data
- Name variables meaningfully and write metadata
- Back up your data, back up your data, and back up your data using .mat files
- Keep your frame loop as tight as possible, but don’t bother optimizing the rest of your code
- Your conventions don’t matter, as long as you stick with them
Acknowledgements
I’d like to thank all of the developers of PsychToolbox for their work, in particular Mario Kleiner for his current work; Drs. Neri & Heeger for the protocol; and Dr. Frederic Gosselin for helpful comments.