To estimate visual receptive fields accurately with reverse correlation or other inference techniques, you need to know the exact timing of each stimulus and spike (give or take 10 ms). In my more naive days, I thought I could use the nominal frame rate of the computer screen to figure out when each frame was presented. How silly of me! Even when no frames are dropped, the actual frame rate of a screen can differ substantially from its nominal spec; I've once measured a discrepancy of 0.5 Hz, which means that after a minute the timings were off by something like 500 ms.
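To see how quickly that adds up, here is a back-of-the-envelope sketch in Python; the 60 Hz nominal rate is just an example, paired with the 0.5 Hz discrepancy mentioned above:

    # Back-of-the-envelope: how fast does a small refresh-rate error accumulate?
    nominal_hz = 60.0    # what the display claims
    measured_hz = 60.5   # what you actually measure (0.5 Hz off)

    n_frames = int(nominal_hz * 60)        # one minute of frames, counted at the nominal rate
    assumed_time = n_frames / nominal_hz   # where you think the last frame lands: 60.000 s
    actual_time = n_frames / measured_hz   # where it really lands: ~59.504 s

    print(f"error after one minute: {(assumed_time - actual_time) * 1000:.0f} ms")
    # -> error after one minute: 496 ms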
But even with accurate frame timings, RF estimation can be complicated by the absolute accuracy of the clocks involved in data acquisition. If you use two different computers for presentation and data acquisition, there is no guarantee that their clocks agree well enough for RF estimation. As stated in this Wikipedia article, relative accuracy (the variance of the clock frequency) is much better in quartz clocks than absolute accuracy (the bias of the actual frequency relative to its nominal frequency). This article on the NTP protocol, for example, measures a bias of 50 ms per hour on a test computer, which works out to a fractional frequency error of about 1.4e-5; if you're doing LGN recordings, 50 ms is huge.
My own experiments confirm these ideas. The presentation computer sends sync events to the data acquisition computer through a low-latency TTL pulse channel on ITC-18 hardware. Doing linear regression between the timestamps on the two computers, I get the following results:
    Model: time2 = time1*a + b
    1 - a = 2.26e-05 s/s
    std(time2 - time1*a - b) = 1.14 ms
Thus, clock 2 is running slow compared to clock 1 by a factor of about 2e-5, which works out to roughly 80 ms per hour, certainly enough to throw off RF estimation. However, once the discrepancy between the clocks is known, the timestamps on one clock can be predicted on the other with an accuracy (standard deviation) of about 1 ms. The lesson: don't assume that because two computers are in sync now, they will still be in sync half an hour later.
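For concreteness, here is a minimal sketch of that regression, assuming numpy. The timestamps here are synthetic, generated to mimic the fit above; in a real session, time1 and time2 would be the matched sync-event timestamps logged on the two machines:

    import numpy as np

    # Synthetic stand-ins for the matched sync-event timestamps (in seconds).
    rng = np.random.default_rng(0)
    time1 = np.linspace(0, 3600, 1000)                                 # presentation computer
    time2 = (1 - 2.26e-5) * time1 + 0.8 + rng.normal(0, 1.14e-3, time1.size)  # acquisition computer

    # Fit time2 = time1*a + b and inspect the clock-rate bias and residual jitter.
    a, b = np.polyfit(time1, time2, 1)
    residuals = time2 - (a * time1 + b)
    print(f"1 - a = {1 - a:.2e} s/s")                          # rate bias, ~2.3e-5
    print(f"residual std = {np.std(residuals) * 1e3:.2f} ms")  # jitter, ~1.1 ms

    # Once a and b are known, timestamps on clock 1 map onto clock 2:
    predicted_time2 = a * time1 + b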
One response to “Quartz clock accuracy matters in RF estimation”
I agree. Don't trust the display. Or clocks. Record the frame changes using a photodiode in the corner of the screen. Make sure it flips state (black-white) on each frame change. This gives you a TTL-level signal that you can record and use for analysis. In fact, you can high-pass filter the signal and just record frame change events, similar to spikes. If you use the same system to record activity and frame changes, then there's no sync problem. Well, none up to the sample rate of your DAQ, anyways.
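A minimal sketch of that pipeline, assuming scipy; the function name frame_change_times, the 300 Hz cutoff, and the 5-sigma threshold are illustrative choices, not anything from the commenter's setup:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def frame_change_times(photodiode, fs, cutoff_hz=300.0, thresh=None):
        """Extract frame-change event times from a raw photodiode trace.

        photodiode : 1-D array, raw voltage sampled at fs (Hz).
        The patch flips black/white on each frame, so every frame change
        is a large step; high-pass filtering turns each step into a brief
        transient that can be thresholded, just like spike detection.
        """
        b, a = butter(2, cutoff_hz / (fs / 2), btype="highpass")
        hp = filtfilt(b, a, photodiode)
        if thresh is None:
            # Robust noise estimate (median-based), then a 5-sigma threshold.
            thresh = 5 * np.median(np.abs(hp)) / 0.6745
        # Rising threshold crossings: one event per frame-change transient.
        crossings = np.flatnonzero((np.abs(hp[1:]) > thresh) & (np.abs(hp[:-1]) <= thresh))
        return (crossings + 1) / fs  # event times in seconds, on the DAQ clock

Since the photodiode trace and the spikes come off the same DAQ, the event times this returns are already on the same clock as the neural data, which is the whole point.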
And if you’re concerned with precision above the sampling rate of your DAQ, well, then… you either need a faster DAQ or you need to rig up an FPGA or some other outboard electronics to run your experiment.