I’m at the Cold Spring Harbor Lab Computational Neuroscience: Vision class for the next two weeks, and I’ll try to write a steady stream of posts about some of the content covered in the course, in addition to lighter subjects.
Banbury campus is beautiful; it feels like a movie’s depiction of a New England hideaway for rich people and/or a Martha Stewart show.
The first lecturer was Tony Movshon; although best known as a ping-pong athlete, he is also an electrophysiologist. In the first half of the lecture, he discussed the concept of the plenoptic function (Adelson and Bergen, 1991), a notion which came out of an early CSHL computational vision class. The idea is that you can characterize the visual environment in terms of 7 variables: x, y and t, lambda for the wavelength of light, and vx, vy and vz for the observer’s position. The argument goes that what the early visual system does is encode various first and second derivatives of these quantities with respect to one another, rather than the quantities themselves, since constant features are rather uninformative about the environment.
This gives a pretty useful method of classifying and organizing various features and feature detectors represented in the early visual cortex. For example, dx/dy gives you orientation selectivity, dx/dt motion in the x direction, dx/dv_x motion parallax and/or binocular disparity, and so forth. If you wanted, you could go further with this idea: gradients in orientation represent curvature while gradients in motion represent optic flow. By that argument, V4 is analogous to MST.
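The derivative idea can be made concrete with a toy example: for a pattern drifting at constant velocity, the ratio of the temporal and spatial derivatives of the image recovers the motion, which is the sense in which a detector for d/dt and d/dx jointly encodes "motion in the x direction". A minimal sketch (the drifting Gaussian stimulus and its parameters are my own assumed example, not from the lecture):

```python
import numpy as np

# Hypothetical 1-D "movie" I(x, t): a Gaussian bump drifting at
# velocity v. For I(x, t) = f(x - v*t), the plenoptic derivatives obey
# dI/dt = -v * dI/dx, so -dI/dt / dI/dx recovers the velocity.
v = 2.0                                 # pixels per frame (assumed)
x = np.arange(200, dtype=float)
t = np.arange(50, dtype=float)
X, T = np.meshgrid(x, t, indexing="ij")
I = np.exp(-((X - 50 - v * T) ** 2) / (2 * 8.0 ** 2))

dI_dx = np.gradient(I, axis=0)          # spatial derivative
dI_dt = np.gradient(I, axis=1)          # temporal derivative

mask = np.abs(dI_dx) > 1e-3             # avoid dividing by ~0
v_est = np.median(-dI_dt[mask] / dI_dx[mask])
print(round(v_est, 2))                  # ≈ 2.0
```

This is the classic gradient-based motion estimate; orientation, motion parallax and disparity follow the same recipe with different pairs of plenoptic variables.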
I would classify it as one of those old theories-of-everything that frequently appeared in the early ’90s in the computational neuroscience literature. Interesting as an organizing principle and food for thought for designing new experiments.
He also discussed linear systems theory and its foundational status in the study of sensory systems.
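The core of that foundational status is superposition: a linear shift-invariant system is fully characterized by its impulse response, and its output to any stimulus is the stimulus convolved with that impulse response. A minimal numpy sketch (the exponential filter here is my own assumed example, not from the lecture):

```python
import numpy as np

# A toy linear shift-invariant "neuron": an exponential temporal filter
# (assumed filter shape). Such a system is fully characterized by its
# impulse response h; the response to ANY stimulus is stimulus ⊛ h.
h = np.exp(-np.arange(30) / 5.0)
h /= h.sum()

rng = np.random.default_rng(0)
s1 = rng.standard_normal(200)
s2 = rng.standard_normal(200)

def respond(stimulus):
    return np.convolve(stimulus, h)[: len(stimulus)]

# Superposition: the response to a weighted sum of stimuli equals the
# weighted sum of the individual responses.
lhs = respond(3 * s1 + 2 * s2)
rhs = 3 * respond(s1) + 2 * respond(s2)
print(np.allclose(lhs, rhs))            # True
```

This is exactly why white-noise and sinewave-grating characterizations of early visual neurons are so powerful: measure the response to a basis set once, and linearity predicts the response to everything else.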
In the second half of his 3 hour lecture, Tony attempted to convince the crowd that natural images are awesome. A real 180 here, as I’ve pointed out earlier. He gave three examples of how natural image structure turns out to be reflected in the organization of the visual system.
- The spatial frequency bandwidth in octaves of cortical simple cells is almost constant with respect to spatial frequency. This is an example of a distribution of cortical tuning properties matched to the distribution of image properties, in this case the 1/f spectrum of natural images.
- Divisive normalization is ubiquitous in cortex and is especially well adapted to dealing with and eliminating the “butterfly” joint distributions of nearby linear pyramid transform coefficients (Simoncelli’s stuff).
- V2 appears to be especially well adapted to represent natural textures (Jeremy Freeman’s stuff).
EJ’s talk was all about the retina and its general awesomeness. I was impressed by his ability to communicate his enthusiasm and undying awe for the formidable processing done in the retina. The techniques he’s been deploying to measure retinal ganglion cells are impressive. He showed some examples where he was able to not only measure the spike of a neuron occurring (presumably) near the spike initiation zone but also its propagation through its axon using massive multi-electrode arrays. He demonstrated using the same method what appears to be a spike propagating from an amacrine cell.
Some impressive feats of the light transduction biochemical pathway include the striking reproducibility of the potentials generated after the absorption of a single photon. Say that a rhodopsin molecule catches a photon, becoming Rh*. Assuming that there is only one activation site, the deactivation time of the enzyme should follow an exponential distribution, and so should the strength of the photocurrent generated by a single photon. In reality, the distribution looks more like a (slightly skewed) Gaussian centered around a mean value; in other words, totally unlike an exponential.
However, if there is more than one activation site, then the cumulative effect of the multiple wait times means that the time until the enzyme completely deactivates is less noisy than for one active site.
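This argument is easy to check by simulation: the sum of n independent exponential wait times is gamma-distributed, with a coefficient of variation of 1/√n, so a multi-step shutoff makes the total Rh* lifetime (and hence the single-photon response) far more reproducible. A sketch, taking n = 6 as an assumed step count (roughly the number of rhodopsin phosphorylation sites):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
tau = 1.0                                # mean time per shutoff step

def cv(x):                               # coefficient of variation
    return x.std() / x.mean()

# One shutoff site: the Rh* lifetime is exponential, so CV = 1 (noisy).
one_step = rng.exponential(tau, trials)

# Six sequential steps (assumed count): the lifetime is a sum of six
# exponentials, i.e. gamma-distributed, with CV = 1/sqrt(6) ≈ 0.41.
six_steps = rng.exponential(tau, (trials, 6)).sum(axis=1)

print(round(cv(one_step), 2), round(cv(six_steps), 2))
```

More steps buy more reproducibility, at the cost of a longer mean shutoff time.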
Nonlinearities start as early as the cone itself in the retina; the current generated for a light value L is L/(L+L*), where L* is the so-called dark light; this is consistent with Weber’s law. Thus, light adaptation is already present before the first synapse.
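One way to see the Weber connection: R(L) = L/(L + L*) depends only on the ratio L/L*, so if the dark light L* tracks the ambient level (this tracking assumption is mine, not from the lecture), then scaling the scene intensity leaves the response unchanged; the cone reports relative, not absolute, intensity. A tiny sketch:

```python
# Hypothetical cone nonlinearity from the lecture: R(L) = L / (L + L*),
# where L* is the "dark light". Because R depends only on L / L*,
# multiplying both the light level and the adaptation state L* by the
# same factor (the tracking assumption is mine) leaves R unchanged --
# Weber-law behavior before the first synapse.
def cone_response(L, L_star):
    return L / (L + L_star)

r_dim = cone_response(L=2.0, L_star=1.0)           # dim scene
r_bright = cone_response(L=2000.0, L_star=1000.0)  # 1000x brighter scene
print(abs(r_dim - r_bright) < 1e-12)               # True
```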
EJ then went on a long and fascinating tangent about the importance of the complexity of the retina. We mostly care about midget and parasol ON- and OFF- retinal ganglion cells. However, there are ~15 other types of retinal ganglion cells which also seem to form complete mosaics, stratify within the inner plexiform layer in highly specific ways, and project to the LGN. One of these is the blue-ON bistratified cell, which projects to the konio cells of the LGN. For the others, we have no idea what’s going on.
EJ speculates that if they are there, then we should not ignore them; such fine structure is not the result of an accident but must serve some purpose. It should be noted that the uncommon RGCs all have dendritic fields much larger than those of the 4+1 classic RGCs, so whatever they’re representing is at coarser spatial scales. Some of the people present noted the exotic RGC types present in the salamander retina — could they also exist in primates?