Update (12/10/2011): Huang Xin has written a better Python script based on ctypes, part of RealTimeElectrophy.
I posted a few days ago on solutions for reading .plx files on Mac/Linux, and was kind of bummed that, despite a thorough background search, I couldn’t find a really satisfying solution. So I wrapped up the Python script I use in-house, stripped out the lab-specific parts, and am sharing it with fellow electrophysiologists.
It’s called plx2ddt.py. It’s used like so:
sudo python plx2ddt.py -i c080a.plx -o testout
It translates continuous data into the .ddt format, a Plexon format specific to continuous data that is very easy to read: a 432-byte header followed by a continuous stream of int16 values. ddt.m, included in Chronux, shows how to read this format; it’s dead easy. The output can also be visualized directly with Baudline on Linux.
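To give a feel for how simple the format is, here is a minimal sketch of a .ddt reader in Python. It only skips the fixed-size header and pulls out the samples; the real header also carries fields such as the sampling rate and channel count, which ddt.m in Chronux parses properly. The function name is my own, not part of plx2ddt.py.

```python
import array

DDT_HEADER_BYTES = 432  # fixed-size header that precedes the sample stream

def read_ddt_samples(path):
    """Skip the 432-byte .ddt header and return the int16 samples as a list.

    Minimal sketch: ignores the header contents entirely and assumes
    native byte order, which is fine on the little-endian machines
    Plexon files come from.
    """
    with open(path, "rb") as f:
        f.seek(DDT_HEADER_BYTES)
        samples = array.array("h")   # 'h' = signed 16-bit integer
        samples.frombytes(f.read())
    return samples.tolist()
```

For multi-channel files the samples are interleaved across channels, so a real reader would reshape this flat list using the channel count from the header.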
For the global and channel specific headers, spike channels, and event channels, I have coded it so that the data is dumped into JSON files. It’s not the most efficient format, but it can be read pretty much anywhere. The .plx reading logic and the writing logic are in separate files so you can alter the output format without messing up the data reading. So you could store the data as XML, pickled, in a database or any other format you think is appropriate.
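Since the reading and writing logic live in separate files, swapping the output format amounts to swapping the writer object. Here is a hypothetical sketch of what such a pluggable JSON writer could look like; plx2ddt.py's actual interface and file naming may well differ.

```python
import json

class JSONWriter:
    """Hypothetical pluggable writer: the .plx reader hands it header
    dicts and it serializes each one to a JSON file. To store the data
    as XML, pickle, or rows in a database instead, replace this class
    with one exposing the same write() method."""

    def __init__(self, basename):
        self.basename = basename

    def write(self, name, data):
        # e.g. write("global_header", {...}) -> <basename>_global_header.json
        path = "%s_%s.json" % (self.basename, name)
        with open(path, "w") as f:
            json.dump(data, f)
        return path
```

The point of the split is exactly this: the reader never needs to know which of these writers it is feeding.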
I’ve implemented HDF5 writing in addition to JSON, but I am unable to test it because HDF5 gives me an “infinite loop closing library” error that I can’t get rid of. I’m hoping that generous people knowledgeable in HDF5 can patch up this bit if need be but, to paraphrase Donald Knuth, the code is correct though I haven’t actually tried it.
Much of the code is lifted from OpenElectrophy. I did quite a few things to optimize reading and writing speed that I can’t quite remember (this was coded a year ago). Two things I think I did: use buffered I/O to dump continuous data directly rather than accumulating it in Python arrays, and use struct.unpack with a precompiled format (struct.Struct) rather than the generic reader for parsing the headers. On the test computer (an i7 920) it processes about 10 MB/s. That is quite a bit faster (5× maybe?) than the original code lifted from OE, although still much slower than the equivalent C.
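The struct optimization is worth a quick illustration. Precompiling the format string with struct.Struct parses it once instead of on every header read, which adds up when a .plx file contains hundreds of channel headers. The field layout below is purely illustrative, not the actual Plexon header.

```python
import struct

# Hypothetical header layout: 4 int32 fields followed by 2 int16 fields,
# little-endian. Compiling it once means the format string is parsed a
# single time, no matter how many headers we decode.
CHANNEL_HEADER = struct.Struct("<4i2h")

def parse_channel_header(buf, offset=0):
    """Decode one fixed-size header from a bytes buffer.

    unpack_from avoids slicing the buffer, which also saves a copy
    per header compared to struct.unpack(fmt, buf[a:b]).
    """
    return CHANNEL_HEADER.unpack_from(buf, offset)
```

Combined with buffered writes of the continuous data, this kind of change is where most of the ~5× speedup over the original OpenElectrophy-derived code likely came from.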