NeuroAI is the budding research field at the intersection of neuroscience and artificial intelligence. One of the core concepts used in the field is that artificial neural networks can act as good models of the brain. For example, it’s often claimed that convolutional neural networks can account for the response of the ventral visual stream to images. Similarly, large language models have been found to capture what’s going on inside the language network of the brain. When I have discussions with people outside of our narrow field, this often triggers surprise. Wait, how can a deep neural network be like the brain? What does it mean to be a good model? How can the squishy stuff be like the silicon?

Here I want to get into the specifics of how, mechanically, you compare a brain to an Artificial Neural Network (ANN). I give some historical background first, focusing on classic results and methods that originated in the field of visual perception. I will explain the nitty gritty of how correspondence scores between brains and ANNs are calculated. I discuss some of the conceptual difficulties inherent in the classic methods of linear regression and RSA, and explain some of the proposed alternative metrics in Williams et al. (2021). You’ll get the most out of these sections if you’re coming from a math background: statistician, data scientist, computational neuroscientist, etc. Nevertheless, if you’re less math-oriented, I will give intuitive explanations so you can follow along. I conclude with a call to create and use more nuanced and detailed comparisons between brains and neural nets.

# History: the visual ventral stream as a convolutional neural net

Let’s start with the classic example of convolutional neural networks vs. the ventral visual stream. In the late ’80s, Yann LeCun was inspired by the classic work of Hubel and Wiesel on the physiology of the visual cortex. He created a neural network consisting of sandwiched layers of selectivity and invariance operations, not unlike the simple and complex cells of the primary visual cortex. His network, LeNet, was the first example of a convolutional neural net (CNN) trained with gradient descent, and it could classify handwritten digits. Two decades later, this work was greatly scaled up by Alex Krizhevsky and colleagues in Geoff Hinton’s lab at the University of Toronto. Their network, AlexNet, led to the 2012 ImageNet moment, when a CNN did far better than state-of-the-art classical machine learning methods at image classification, showing that “deep learning has arrived”.

If CNNs were inspired by the brain, and they do the same thing as a brain (image classification), could they be a little like the brain? If it talks like a duck, and it walks like a duck, is it a duck? This is a surprisingly subtle question, and two teams (Dan Yamins in Jim DiCarlo’s lab, and Khaligh-Razavi in Niko Kriegeskorte’s) answered this question with a vigorous “yes, maybe!?” in 2014. They looked at the responses of the ventral visual stream—areas of the brain traditionally associated with shape perception and image classification—and compared them against the ANNs of the time.

## The mechanics of comparing a brain to an ANN

So how did they compare a brain to an ANN? They followed what’s now considered a classic recipe. You need three inputs:

- an ANN trained for some task (e.g. visual classification)
- a brain (human or non-human)
- a set of benchmark stimuli (e.g. a set of images, sentences, videos, etc.)

You then proceed as follows:

- You probe the ANN with all the benchmark stimuli. You obtain a matrix of responses **X**. Each row is one stimulus (one image, one movie clip, one sentence, etc.). Each column corresponds to a subunit of the neural network (e.g. the collected intermediate activations of the ANN).
- You do the same with the brain. That means, for instance, having someone sit inside the scanner looking at the same set of images that the neural network was exposed to, and recording their functional responses. You collect the data into a new matrix **Y**. The rows are again stimuli, but now the columns represent something else: physical neurons, EEG sensors, fMRI voxels, etc.

By construction, **X** and **Y** have the same number of rows, but different column counts. We’ve thus reduced the problem of comparing a brain and an ANN to the problem of comparing two matrices of different shapes.
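As a toy sketch of these ingredients (all shapes and sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

n_stimuli = 200  # number of benchmark stimuli shown to both systems
n_units = 512    # hypothetical width of an ANN layer
n_voxels = 80    # hypothetical number of fMRI voxels

# X: ANN responses, one row per stimulus, one column per subunit.
X = rng.standard_normal((n_stimuli, n_units))

# Y: brain responses to the very same stimuli, one column per voxel.
Y = rng.standard_normal((n_stimuli, n_voxels))

# Same number of rows (stimuli), different number of columns.
assert X.shape[0] == Y.shape[0]
print(X.shape, Y.shape)  # (200, 512) (200, 80)
```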

There are two now classic ways of doing this:

- Linear regression: Do multiple linear regression to map one matrix onto the other. Learn a weight matrix **W** such that the residual ||**Y** – **XW**|| is minimized. This requires some regularization: Tikhonov (ridge) regularization, which penalizes the sum of squares of **W**, is one option. An alternative is to require that the mapping is low-rank, using partial least squares with a limited number of components. This is the path used by Yamins & DiCarlo. The final score is the R² of the linear regression, typically calculated through cross-validation (CV).
- Representational Similarity Analysis (RSA): One difficulty with the previous method is that it requires learning a weight matrix **W**. This is necessary, in part, because the dimensionality of the two matrices might be different. If we form the matrices **XX**′ and **YY**′, however, we obtain two square matrices of the same size. Furthermore, these matrices are invariant to a relabeling (permutation) of the columns. Thus, we can compare the elements of the two similarity matrices via a correlation coefficient. Technically, you could get a negative score, but you can threshold or square it to land in the 0–1 range. This is the method pioneered by Niko Kriegeskorte back in 2008.

These procedures result in two distinct scores for the similarity of the two matrices: the linear regression score (really, a cross-validated R²) and the RSA score. Both have the property that 0 is maximally misaligned and 1 is maximally aligned. This way, we have reduced the difficult philosophical question of what it means for a brain to be like an ANN to a problem of *big number good, small number bad.*

Using these tools, the two historical papers came to similar conclusions: deep neural nets trained on images have similar representations to the ventral visual stream of the brain. For the Yamins paper, it was with linear regression and with single-cell neurophysiology; for Khaligh-Razavi, it was RSA on fMRI data.

# Conceptual difficulties with comparing brains and ANNs

Procedures to map brains to ANNs like linear regression and RSA swap out deep philosophical issues about the nature of perception with a technocratic procedure. In fact, there’s a lot that hides under these scoring procedures.

Both methods are *correct* at the extremes: if you compare two random matrices against each other, you’ll get a score close to 0; and if you compare a brain (or an ANN) against itself, you’ll get a score of 1. So a brain is similar to itself but not to random noise: cool. That’s a pretty low bar to clear, and it’s in the middle scores that we run into conceptual difficulties. What does a 0.5 similarity between the brain and an ANN mean? What are we actually trying to quantify?

There are different ways in which we can conceptualize how the brain can be like an ANN. Let’s name some of these ways:

- We could ask that each subunit in the ANN corresponds to exactly one neuron in the brain. That’s a very high bar to clear! Let’s call this 1-to-1 correspondence.
- We could ask that distances are preserved between the brain and the ANN. An analogy in 3D will help. Two three-dimensional shapes (say, two rubber duckies) can be similar regardless of their (arbitrary) orientation. Distances between any two points on the surface of the duckies are preserved: they are invariants. Let’s call this orthogonal correspondence.
- We could ask that one manifold of responses can be morphed into another through a linear transformation. Let’s call this linear correspondence.

It’s not clear how traditional linear regression and RSA scores map to these desiderata. You can make a verbal argument that linear regression is similar to linear correspondence (3), while RSA is most similar to orthogonal correspondence (2). However, there are complications in real implementations: regularization in linear regression, selection of voxels with sufficient signal-to-noise ratio in RSA. This means that the scores may fail to capture our (unstated) goals in subtle ways.

Traditionally, these concerns have been more or less swept under the rug, and each subfield has converged on its own widely agreed-upon score: RSA for human neuroscience, linear regression for single-cell neurophysiology. The argument goes that whatever score we choose, alternative scores would correlate with it. This encourages papers to use whatever is the most common score in their subfield, which allows scores to be compared more readily from paper to paper. A perfectly reasonable heuristic, but a little unsatisfying.

# Williams et al.’s solution: computational shape analysis

Williams et al. (2021) offer a nice treatment of these issues with some good conceptual solutions, casting the problem as one of computational shape analysis: analyzing shapes in high-dimensional spaces with statistical tools.

First, they project the two representations (brain and ANN) onto a fixed, common-sized representation. You could use random projections, subsampling, PCA, etc. to bring two matrices of different widths to the same width. Call the resulting matrices **X̃** and **Ỹ**. One of their proposed distances is the minimum over **T** of ||**X̃** – **ỸT**||, where **T** is a transformation within some group *G* that captures what it is that we mean by “same”. Some potential choices:

- If you want one-to-one correspondence between the brain and ANN, *G* can be the set of all permutation matrices
- If you want distances to be preserved, *G* can be the orthogonal matrices
- If you want to allow squishing along arbitrary linear dimensions, you can let *G* be arbitrary linear transformations
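For the orthogonal group, for instance, the optimal **T** has a closed-form solution via the SVD (the classic orthogonal Procrustes problem). A minimal numpy sketch, assuming both matrices have already been brought to the same width (the function name is mine, not the package’s API):

```python
import numpy as np

def orthogonal_distance(A, B):
    """min over orthogonal T of ||A - B @ T||_F, via orthogonal Procrustes."""
    U, _, Vt = np.linalg.svd(B.T @ A)
    T = U @ Vt                       # optimal orthogonal transformation
    return np.linalg.norm(A - B @ T)

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))

# Rotate A by a random orthogonal matrix Q: under G = orthogonal matrices,
# A and A @ Q count as "the same", so the distance should be ~0.
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
d_same = orthogonal_distance(A, A @ Q)
d_diff = orthogonal_distance(A, rng.standard_normal((100, 20)))
print(d_same, d_diff)  # ~0 for the rotated copy, large for unrelated data
```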

It turns out the resulting scores are proper distances that respect the triangle inequality, which has some nice benefits for clustering. They also introduce more metrics for more unusual scenarios. For example, for convolutional neural networks, you’d want something that allows remapping along the channel dimension, but not along the space dimension. Although each metric seems like it requires a brute-force search, it turns out there are clever ways of calculating the optimal transformations in each scenario.

# The advantage of axiomatic methods

Overall, Williams et al. take cues from the recent success of geometric deep learning (e.g. Bronstein et al. 2021) to ask deep questions about the metrics we use. More than just an incremental numerical improvement, I think it’s a big conceptual improvement over linear regression and RSA: you know what you’re getting into. An axiomatic approach tells you very explicitly what the underlying assumptions are.

Ironically, this makes it easier to poke holes in some of these scores. For instance, one of the core desiderata that Williams et al. fulfill is that their chosen score should be symmetric. Now, I would argue that a brain can be more similar to a neural network than vice-versa. Modern large language models (LLMs) represent more stuff than humans: GPT-4 knows markdown, LaTeX, English, German, how to write listicles and poems and sound like a pirate. Most people can only do a subset of these things. Divergences for distributions are naturally asymmetric, so there’s nothing inherently fishy about a score that is asymmetric. I’m sure there is a variant of linear regression’s asymmetric R² which can be properly axiomatized according to the framework laid out by Williams et al.

In the meantime, one can use combinations of these well-justified metrics to answer interesting questions about how brains vs. ANNs represent information. For example, the delta between linear correspondence (metric 3) and orthogonal correspondence (metric 2) is an index of how much warping is necessary to get two latent spaces to match. This could help reveal whether a brain’s representation is *a subset of* or *a noisy approximation of* a particular artificial neural net, which is ambiguous for each individual metric. Williams et al. have made their metrics available in a Python package, so you can try this out yourself.
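Here’s a toy illustration of that delta, with my own hypothetical helper functions rather than the package’s API. The second matrix is a copy of the first, squished along each axis, so a linear map can align them perfectly but a rotation cannot:

```python
import numpy as np

def orthogonal_distance(A, B):
    # min over orthogonal T of ||A - B @ T||_F (orthogonal Procrustes).
    U, _, Vt = np.linalg.svd(B.T @ A)
    return np.linalg.norm(A - B @ (U @ Vt))

def linear_distance(A, B):
    # min over arbitrary linear T of ||A - B @ T||_F: ordinary least squares.
    T, *_ = np.linalg.lstsq(B, A, rcond=None)
    return np.linalg.norm(A - B @ T)

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 20))
B = A @ np.diag(rng.uniform(0.1, 3.0, size=20))  # A, squished along each axis

d_lin = linear_distance(A, B)       # ~0: a linear map undoes the squishing
d_orth = orthogonal_distance(A, B)  # > 0: no rotation can undo it
delta = d_orth - d_lin              # indexes how much warping the match needed
print(d_lin, d_orth, delta)
```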

# Discussion

For all their justification, we haven’t seen much empirical work using well-axiomatized scores beyond linear regression and RSA. I think the reason for the status quo is that we’ve been focused on coarse characterizations thus far. This will surely need to change as we go beyond *ANNs as metaphors for the brain* and start using them as *in silico models* of the brain.

For instance, I’m interested in using ANNs as models for the brain for the purpose of neural engineering. What I really want is that when I create a virtual lesion in a neural network, it predicts how the real brain will react to a real lesion. Such a *causal manipulation score* doesn’t yet exist, but I think it would be both conceptually and practically useful. The axiomatic approach of Williams et al. points us towards ways of building this type of score.

For other purposes, however, it may be that the metrics we have are good enough. This recent paper from Tuckute et al. shows that one can learn a transfer function between a large language model and the brain’s language network using linear regression. The model can predict which sentences drive or don’t drive the brain. Using in silico models to predict the response of the brain is helpful in this scenario, and in a certain meaningful sense it means that the brain *is like a neural network*, along this prediction axis, but perhaps not according to more stringent criteria.

The work from Williams et al. is just one of a number of recent approaches looking at this problem in depth. We’ve seen some of the same authors extend this work to stochastic representations. Furthermore, empiricists have pointed out problems that current metrics don’t capture, for example, hierarchical correspondence. I think it’s an exciting time to think deeply about our metrics and what it really means for a brain to be like an ANN.
