The Gallant lab have a new paper out in Neuron using fMRI to study the brain’s representation of visual scene categories. It’s a slick little paper that uses a fun machine learning algorithm (latent Dirichlet allocation) to show that a substantial amount of latent semantic information is available at the fMRI macroscale – raising the question of how the visual system exploits semantic information. Chris and I wrote a preview for the paper in Neuron. Here are the first few paragraphs:
In a 1942 essay, Jorge Luis Borges discusses the categorization of animals, purportedly found in a fictitious Chinese encyclopedia named the ‘‘Celestial Empire of Benevolent Knowledge’’ (Borges, 1942). Animals therein are classified into 14 fanciful categories, including, ‘‘fabulous ones,’’ ‘‘those that have just broken the flower vase,’’ and ‘‘those that look like flies when viewed from a distance.’’ Borges uses this example to suggest that any attempt to categorize the contents of nature is ‘‘arbitrary and full of conjectures.’’
Nevertheless (again quoting Borges), ‘‘the impossibility of penetrating the divine scheme of the universe cannot dissuade us from outlining human schemes, even though we are aware that they are provisional.’’ In fact, such schemes can be quite useful in sensory neuroscience. A decade after Borges’s essay, Barlow (1953) discovered neurons that respond selectively to stimuli that look like flies when viewed from a distance. These ‘‘fly detectors’’ were found in the retinas of frogs and, hence, were linked to a specific category of behavior (feeding). Subsequently, Hubel and Wiesel (1962) identified visual cortical cells that were described as ‘‘simple’’ and ‘‘complex,’’ and these turned out to be useful labels for understanding many aspects of the visual cortex from anatomy to computation.
More recent imaging studies have led to the suggestion that neurons with particular stimulus selectivities are clustered together, forming brain modules responsible for encoding rather abstract categories of stimuli, including faces (Tsao et al., 2006), places (Epstein and Kanwisher, 1998), and buildings (Hasson et al., 2003). Of course, the number of such categories must be far greater than the number of brain regions, which leads to the profound question of how the brain organizes such a vast quantity of visual experience. In this issue of Neuron, Stansbury et al. (2013) address this question.
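For readers unfamiliar with latent Dirichlet allocation, here is a minimal sketch of how it works, under the assumption of a toy corpus of "scenes" described by object words (this is purely illustrative and is not the paper's data or code). LDA treats each document as a mixture of latent topics and each topic as a distribution over words; a collapsed Gibbs sampler infers both by repeatedly resampling each token's topic assignment:

```python
# Minimal collapsed Gibbs sampler for latent Dirichlet allocation (LDA).
# Hypothetical toy example; alpha/beta are the usual Dirichlet priors.
import random

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Fit LDA to `docs` (a list of token lists) by collapsed Gibbs sampling."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    z = []                                     # topic assignment per token
    for d, doc in enumerate(docs):             # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][widx[w]] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, wi = z[d][i], widx[w]
                # remove this token's current assignment from the counts
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # full conditional p(z = t | everything else)
                weights = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    # posterior mean estimates of doc-topic (theta) and topic-word (phi)
    theta = [[(ndk[d][t] + alpha) / (len(doc) + n_topics * alpha)
              for t in range(n_topics)] for d, doc in enumerate(docs)]
    phi = [[(nkw[t][i] + beta) / (nk[t] + V * beta) for i in range(V)]
           for t in range(n_topics)]
    return vocab, theta, phi

# Toy "scenes": two latent themes (kitchen-like vs. street-like objects)
docs = [
    "pan stove sink pan fridge".split(),
    "stove sink fridge pan stove".split(),
    "car road sign car bus".split(),
    "road bus sign car road".split(),
]
vocab, theta, phi = lda_gibbs(docs, n_topics=2)
```

In the Stansbury et al. setting the analogue of a "document" is a scene and the "words" are the objects it contains; the learned topics then serve as data-driven scene categories whose expression can be related to voxel responses.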