What’s the maximal frame rate humans can perceive?

Updated March 2020

Gamers care a lot about frame rate. You will find endless threads on the internet where gamers argue that 60 (or even 120!) frames per second is better than 30, and that running a game below that threshold is sacrilege.

Why is 24 fps good enough for cinema, while the Oculus Rift needs 90? What’s the maximal frame rate that humans can perceive?

This is a surprisingly complex question, and the answer involves the interaction of the visual system with the properties of the presentation medium. So what is the right refresh rate for a movie, a game, or a VR display? Let’s break this down into three questions:

  1. At what frequency should a display refresh so that it doesn’t appear to flicker?
  2. At what frequency should a movie or video game be displayed so that it doesn’t appear choppy?
  3. What framerate should a VR display use?

Let’s look at each of them in turn.

Flicker fusion

The first question is straightforward. It’s been known for more than a hundred years that light flickering at a high enough frequency appears stable. It’s an important phenomenon in everyday life, as many display and lighting technologies work by displaying very brief flashes of light many times a second. This includes fluorescent lights, cathode ray tubes (CRTs, the bulky glass tubes in old TVs), and, to a much smaller extent, incandescent light bulbs, whose filaments barely cool down between mains cycles.

It’s straightforward to measure the critical flicker fusion frequency in human observers by asking them to report their sensations when viewing a simple stimulus flickering on a CRT display with fast phosphors. You can recreate this experiment at home on a very low budget by flashing an LED with an Arduino (DIY instructions here).
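
You can get a feel for it with a minimal Arduino sketch along the lines below; the pin, the test frequency, and the 50% duty cycle are illustrative assumptions rather than the linked instructions. Raise FLICKER_HZ until the LED stops visibly flickering.

    // Flash an LED at a fixed test frequency with a 50% duty cycle.
    // Assumes an LED (plus current-limiting resistor) wired to pin 13.
    const unsigned long FLICKER_HZ = 60;                        // frequency under test
    const unsigned long HALF_PERIOD_US = 1000000UL / (2 * FLICKER_HZ);
    const int LED_PIN = 13;

    void setup() {
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      digitalWrite(LED_PIN, HIGH);           // on for half the period
      delayMicroseconds(HALF_PERIOD_US);
      digitalWrite(LED_PIN, LOW);            // off for the other half
      delayMicroseconds(HALF_PERIOD_US);
    }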

Flicker fusion as a function of illumination and stimulus size, from Hecht and Smith (1936)
Image via Scholarpedia

The critical fusion frequency depends on the luminance of the stimulus and its size, as shown in the graph above (Hecht and Smith, 1936). For a large, high luminance stimulus covering the fovea, like a full screen white field on a CRT, flicker fusion occurs at about 60 Hz.
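
The straight-line stretches of those curves reflect a classic regularity known as the Ferry-Porter law: over a broad range, the critical fusion frequency grows linearly with the logarithm of luminance,

    CFF ≈ a · log₁₀(L) + b

where L is the luminance and a and b are empirical constants that depend on stimulus size and retinal location.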

It’s interesting to note that there are cells in the LGN of primates (the lateral geniculate nucleus, the thalamic relay through which the visual signal is forwarded from the eye to the cortex) which respond to temporal frequencies higher than 60 Hz and are more sensitive than human observers to flicker (Spekreijse et al. 1971). That means that the signal is available somewhere in the brain, but it isn’t available to consciousness. Cells in the visual cortex appear to discard the high-frequency information. It is possible, however, that such signals reach the brain through other means (blindsight or miscellaneous projections to non-visual areas). Indeed, the 120 Hz flicker of older fluorescent lights has been found to cause cognitive deficits and headaches (Veitch and McColl 1995).

Many insects have much faster visual systems than ours. They have poor visual acuity, however, because they have so few photoreceptors; hence their vision is much more focused on the analysis of motion than ours. Ruck (1961) finds a critical fusion frequency of more than 200 Hz in the blowfly Lucilia sericata, and cites a figure of 300 Hz for the honeybee. Indeed, you need special hardware to present visual stimuli for insect vision research (oscilloscopes were used in the old days, although this might have changed).

Smoothness in video games and movies

Flicker fusion is not especially relevant when you watch a movie or play a video game. A typical movie is shot at 24 Hz, yet that doesn’t mean it will feel flickery when played back. Flicker would only be a problem if each image were flashed once, briefly, on a fast display like a CRT. You can get around flicker in a movie by presenting it at a higher frame rate, by repeating each frame several times (film projectors traditionally used a double- or triple-bladed shutter, so a 24 fps print flickers at 48 or 72 Hz, above the fusion threshold), or simply by presenting it on a display with built-in persistence like an LCD screen.

Each frame in a movie is only slightly different from the last; this frame-to-frame redundancy is what enables compression algorithms like MPEG-4 (a toy sketch of the idea follows below). This is a very different situation from the flicker fusion setup, where each frame is maximally different from the last: black, white, black, etc. The modulations in luminance are much smaller in a movie. Hence fusion occurs at much lower frame rates than in the flicker fusion scenario, as is visible in the graph of flicker fusion frequency versus illuminance above.
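
As a toy sketch of that redundancy (this is not the actual MPEG-4 scheme, which adds motion compensation and transform coding; the pixel values are made up), you can store only the difference between successive frames and count how few pixels change:

    #include <cstdio>

    // Toy inter-frame coding: encode only the per-pixel difference from the
    // previous frame and count how many pixels changed at all.
    int main() {
        const int N = 16;
        int prev[N] = {10,10,10,10, 50,50,50,50, 90,90,90,90, 30,30,30,30};
        int next[N] = {10,10,10,10, 50,50,52,52, 90,90,90,90, 30,30,30,30};

        int changed = 0;
        for (int i = 0; i < N; ++i) {
            if (next[i] - prev[i] != 0) ++changed;  // nonzero residuals cost bits
        }
        printf("%d of %d pixels changed between frames\n", changed, N);
        return 0;
    }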

The “rainbow effect” of a single-chip DLP projector, revealed by waving a hand in front of the lens. Image via Wikipedia

Importantly, however, flicker can become visible through a different route. If you look at the image from a DLP projector and move your gaze quickly across or off the screen, you will notice fringes of color. This also occurs if you wave your hand in front of the projector, as shown above. In a single-chip DLP projector, the DLP chip produces black-and-white images, and the colors red, green and blue are created by a spinning color wheel. The wheel of common projectors can rotate at 4 or 5 times the frame rate of the signal, so 240 or 300 Hz. This is a very high rate, yet clearly you’re able to pick up such high temporal frequencies.

Between successive color flashes of the projector’s image, your eyes can move up to a third of a degree of visual angle (assuming a peak saccadic velocity of 300 degrees per second and color fields arriving at 3 × 300 Hz = 900 Hz). Hence, slightly offset images in red, green and blue are captured on the retina, which you perceive as fringes of color.
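
The back-of-the-envelope arithmetic, using those assumed numbers:

    #include <cstdio>

    // How far the eye travels between successive color fields of a
    // single-chip DLP projector during a fast saccade.
    int main() {
        const double saccade_deg_per_s = 300.0;  // peak saccadic velocity
        const double wheel_hz = 300.0;           // color wheel revolutions per second
        const double colors = 3.0;               // red, green, blue segments

        double field_interval_s = 1.0 / (wheel_hz * colors);   // 1/900 s between fields
        double offset_deg = saccade_deg_per_s * field_interval_s;
        printf("offset between color fields: %.2f degrees\n", offset_deg);  // ~0.33
        return 0;
    }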

Eye movements transform temporal frequencies into spatial frequencies. Human eyes have very good spatial frequency selectivity.

What’s happening is that temporal frequencies are translated into spatial frequencies through motion. Although you can’t discriminate the flicker through its temporal signature alone, adding motion allows it to be discovered through its spatial signature. In fact, the visual system appears to exploit this effect to enhance its sensitivity to high spatial frequencies: fixational eye movements shift the spatio-temporal frequency spectrum of natural scenes towards frequencies neurons are more sensitive to (Rucci et al., SFN 2011).

Temporal aliasing is the main reason why video games can look choppy at frame rates of 24 or 30 Hz. Let’s say a line moves from the left of the screen to the right. Its path can be represented as a two-dimensional image where one dimension represents its x position and the other represents time. In the graph shown below at the left, the line moves slowly, and its retinal image, which integrates over several frames, appears smooth with a bit of motion blur.

On the right, the same line is moving faster. Now the retinal image shows a faded trail along the path of the line; this will be perceived as choppy by a human observer. The problem is that a video game’s rendering pipeline generates instantaneous snapshots of the game world at a set sampling frequency (say 30 Hz). But the Nyquist-Shannon sampling theorem implies that if things happen in the virtual world at temporal frequencies higher than half the sampling rate (the Nyquist frequency), the result will be temporal aliasing, the creation of artifacts from inadequate sampling.
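
Here is a minimal sketch of that folding (the specific frequencies are made up): any frequency outside the Nyquist band wraps back into it, which is why a wheel spinning at 29 Hz rendered at 30 fps appears to crawl backwards at 1 Hz, the classic wagon-wheel illusion.

    #include <cstdio>
    #include <cmath>

    // Apparent (aliased) frequency of a periodic motion sampled at fs:
    // frequencies fold back into the Nyquist band [-fs/2, +fs/2].
    double aliased(double f, double fs) {
        double wrapped = fmod(f, fs);
        if (wrapped > fs / 2.0)  wrapped -= fs;
        if (wrapped < -fs / 2.0) wrapped += fs;
        return wrapped;
    }

    int main() {
        const double fs = 30.0;  // rendering rate, frames per second
        for (double f : {5.0, 16.0, 29.0, 31.0}) {
            printf("true %.0f Hz -> apparent %+.0f Hz at %.0f fps\n",
                   f, aliased(f, fs), fs);
        }
        return 0;
    }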

Spatial aliasing in the form of a Moiré pattern.
Image via Wikipedia

Note that aliasing has little to do with the visual system; it’s a signal processing effect, and it occurs in many domains other than time. In the spatial domain, aliasing can cause artifacts like Moiré patterns. The image above shows the corresponding effect when a brick wall with spatial frequencies above half the sampling frequency is sampled without first eliminating those high frequencies through an analog filter; note the interference pattern at the bottom right.

The solution is to sample the video game world at a higher temporal frequency, and optionally downsample it before display. This is temporal anti-aliasing; a toy version is sketched below. Of course, that means more polygons to blit per second, which means you need a beefier video card. Cinema cameras apply their own analog anti-aliasing by integrating photons over the whole time the shutter is open (up to 1/24th of a second at 24 fps; half that with a typical 180-degree shutter). This means that frames per second in video games and video cameras are not directly comparable.
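
A toy version of that averaging, assuming a hypothetical 8x oversampled simulation and a plain box filter (real engines use fancier reconstruction filters):

    #include <cstdio>

    // Toy temporal anti-aliasing: simulate the world at a high internal rate,
    // then average groups of sub-frames down to the display rate, mimicking a
    // camera shutter held open for the whole frame.
    int main() {
        const int display_fps = 30;
        const int oversample  = 8;                  // sub-frames per displayed frame
        const int sim_fps     = display_fps * oversample;
        const double speed    = 600.0;              // object speed, pixels per second

        for (int frame = 0; frame < 3; ++frame) {
            double blurred_x = 0.0;
            for (int s = 0; s < oversample; ++s) {
                double t = (frame * oversample + s) / (double)sim_fps;
                blurred_x += speed * t;             // instantaneous position
            }
            blurred_x /= oversample;                // average = motion-blurred position
            printf("frame %d: motion-blurred x = %.1f px\n", frame, blurred_x);
        }
        return 0;
    }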

From Claypool, Claypool and Damaa (2006)

So then what’s the minimum frame rate at which a video game should be rendered to ensure that it doesn’t suffer from jitter or choppiness? There’s no rendering frame rate above which aliasing artifacts are guaranteed to be eliminated. Once again, it has little to do with the human visual system and everything to do with the Nyquist-Shannon sampling theorem. You can always cook up a hypothetical situation that demands an arbitrarily large number of frames per second to be viewed without artifacts.

The relevant question is really whether the choppiness bothers you, rather than whether it’s visible. Claypool, Claypool and Damaa (2006) report that performance in a first-person shooter starts to saturate at about 30 fps, with only marginal benefits at 60 fps; their non-expert players performed about equally well at 30 and 60 fps.

Of course, it might be different for an expert player: if you think that 60 fps makes you a better player, by all means, go buy the latest GPU. But I haven’t heard a lot of people claim that anything above 60 fps is worth it, except, very importantly, for virtual reality (VR).

Cues in motion: VR displays

Betto Rodrigues / Shutterstock.com

To navigate in the world, we use a mix of:

  • Vision. Moving forward creates a lot of expanding motion (optic flow) in the visual field, which gets analyzed by specialized brain areas.
  • Vestibular cues. The vestibular organ of the inner ear reports the direction of gravity, acceleration and head rotation. It acts as the brain’s accelerometer and gyroscope.
  • Proprioception. Proprioception is the sense of knowing where our body and our limbs are. Little sensors attached to our muscles (muscle spindles) tell our brains how much they’re stretched, and that helps us know where our body is in space.

Neurons in area MST of the visual cortex are exquisitely sensitive to matches or mismatches between vestibular cues and visual cues. When the different modalities of the input contradict each other, the effect is often a strong illusory perception of self-motion. MST neurons control, with very low latency, reflexes that move the eyes and the head, such as the vestibulo-ocular reflex (VOR). The problem is that under contradictory input, these reflexes can generate counterproductive compensatory movements, which only exacerbates the conflict. This causes motion sickness.

VR is particularly prone to this issue because:

  • The display takes up most of the field of view. The larger the moving field, the stronger the induced sense of self-motion, and the greater the discomfort.
  • You’re generating your own motion through your body. Your brain knows what to expect, and when that expectation is not met, the brain is not happy, and you get motion sickness.

The discomfort can be much attenuated by keeping the motion-to-photon latency to an absolute minimum. The motion-to-photon latency is the time it takes for your own movement to affect what’s displayed on the screen. This is very well explained by Michael Abrash, who went on to become chief scientist at Oculus Research (and my skip-level manager when I was an engineer at Facebook Reality Labs), in this prescient article from 2012. You can keep the motion-to-photon latency low in a number of ways:

  • Having a fast (low-persistence, low-lag) display
  • Having a high frame rate. The motion-to-photon latency is bounded below by how long a frame stays on screen. At 90 frames per second, a controller movement made right after a frame is presented will lag by at least 1/90 s ≈ 11 milliseconds, because that’s how long the frame lasts (see the sketch after this list).
  • Having very low-latency motion sensors and computer vision pipelines to estimate the location of the controllers and the headset. Even if your display runs at 1000 Hz, if it takes 200 ms to estimate the location of the controllers, you will feel motion sick. Bad tracking really ruins the experience.
  • Having well-optimized rendering pipelines that can use the controller and head location information at the very last minute. For example, asynchronous reprojection can take a stale frame and reproject it according to the latest estimate of controller and head location to fake a frame when the GPU cannot render frames fast enough. These tricks can really help improve the perception of a smooth frame rate and minimize motion sickness, as highlighted by John Carmack.
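
Here is a back-of-the-envelope motion-to-photon budget; the tracking and rendering numbers are illustrative assumptions, not measurements from any real headset:

    #include <cstdio>

    // Rough motion-to-photon budget: the display's frame time sets a floor,
    // and tracking plus rendering latency add on top.
    int main() {
        const double refresh_hz  = 90.0;                 // display refresh rate
        const double frame_ms    = 1000.0 / refresh_hz;  // how long one frame lasts
        const double tracking_ms = 2.0;                  // assumed sensor-fusion latency
        const double render_ms   = 5.0;                  // assumed render + scanout slack

        // Worst case: the movement happens just after a frame goes up, so it
        // waits a full frame before it can possibly show up on screen.
        double worst_case_ms = frame_ms + tracking_ms + render_ms;
        printf("frame time %.1f ms, worst-case motion-to-photon %.1f ms\n",
               frame_ms, worst_case_ms);
        return 0;
    }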

The original Rift refreshed at 90 Hz, while newer machines like the Quest refresh at a lower rate of 72 Hz. This doesn’t mean that the visual system runs at 72 Hz or 90 Hz or anything like that. What it means is that the predictions the brain makes about vision, based on bodily movement, are sensitive to changes on the order of ~10-15 milliseconds. This is pretty remarkable when you consider that the latency between a photon hitting the retina and the resulting wave of activity in high-level cortex can run up to 200 milliseconds.

Over the years, many a reddit thread has linked to this article to make an argument for or against the necessity of high framerates for gaming. My own take is: it depends. If you’re a professional doing e-sports, every millisecond counts. For the rest of us folk playing Beat Saber on medium difficulty on the Quest, if it feels comfortable, that’s probably all that matters.

Hecht, S., & Smith, E. L. (1936). Intermittent stimulation by light: VI. Area and the relation between critical frequency and intensity. The Journal of General Physiology, 19(6), 979-989. PMID: 19872977
Spekreijse, H., van Norren, D., & van den Berg, T. J. (1971). Flicker responses in monkey lateral geniculate nucleus and human perception of flicker. Proceedings of the National Academy of Sciences of the United States of America, 68(11), 2802-2805. PMID: 5001396
Veitch, J., & McColl, S. (1995). Modulation of fluorescent light: Flicker rate and light source effects on visual performance and visual comfort. Lighting Research and Technology, 27(4), 243-256. DOI: 10.1177/14771535950270040301
Ruck, P. (1961). Photoreceptor cell response and flicker fusion frequency in the compound eye of the fly, Lucilia sericata (Meigen). Biological Bulletin, 120(3). DOI: 10.2307/1539540
Rucci, M., Victor, J. D., Casile, A., & Kuang, X. (2011). Fixational eye movements enhance sensitivity to high spatial frequencies in the retina and LGN. SFN 2011.
Claypool, M., Claypool, K., & Damaa, F. (2006). The effects of frame rate and resolution on users playing first person shooter games. Proceedings of SPIE.

11 responses to “What’s the maximal frame rate humans can perceive?”

  1. […] monitor is also near the limits or about of perceivable smoothness”, but you might also check this article or that article (yet, keep in mind, that visual system of some people might be especially […]

  2. “It’s an important phenomenon in everyday life, as many display and lighting technologies work by displaying very brief flashes of light several times a second. This includes the incadescent light bulb…”

Incandescent (note that 2nd N!) bulbs do not work like this. The filament is heated as current flows through it, and it creates heat and light. Even though the current may pulse 60 or 120 times a second, the filament does not have time to cool down, so the light remains steady.

  3. Hi there!

    I’ve been medicated for epilepsy for years (a lot) and maybe for that reason I have some degeneration in my image processing…. I have NO flicker fusion threshold!

    CRT, fluorescent tubes… any LCD or even LED-illuminated LCD, and even the very high frequency oscillations from compact fluorescent tubes, give me a lot of headaches and my eyes become red very quickly.

    The higher the frequency of the oscillations, the less time it takes me to get sick!

    I’ve done my own research and I have some idea of what is going on, but I’m living in a “developing country” so I can’t do anything about it :(

  4. I think you are partly mistaken in your interpretation of the data, as well as the conclusion. The research on flicker fusion is quite clear on the fact that it is referring to the frequency of the modulation (from dark to bright). That means that with a critical frequency of 50 Hz, it is the peaks (be it black or white) that are being flickered at 50 Hz. Therefore to reproduce this on a display, you need to display black and white frames successively at 100 fps (or 120 fps if you take the higher value).

    Also, as far as I understand, the test setup is idealized (that is, half of the time absolute darkness and half of the time absolute brightness). It is entirely plausible that with a different (uneven or varying) duty cycle, differences of less than 10 ms can be perceived. Therefore I cannot say with certainty that even 120 fps is the limit (although it is quite probably enough). For virtual reality purposes, more seems to be better, without exception. Michael Abrash (previously of Valve Software and now Oculus VR) estimates that the sweet spot for 1080p at 90 degrees field of view is somewhere between 300 and 1000 fps (see the website link).

    The fact that differences in game score become negligible between 30 and 60 fps is meaningless. Game score is hardly a measure of the absolute limits of human vision.

    I am quite certain that given 60 fps and 120 fps displays side by side with video at the same frame rates, I would be able to tell the difference between them. So please, give me a 120 fps display already.

    • That’s a good point. Most of the research I quoted is based on monocular vision or binocular viewing of a 2d display. Binocular disparity signals depend on minute differences in position, and a very small lag could perturb your ability to obtain good fusion from the two eyes. So I agree that in VR displays (Oculus, etc.) the desirable framerate could be quite a bit higher. I’ll have to look in more detail into the research on VR displays.

  5. Very disappointing that you didn’t make a bigger effort to come to a more satisfying conclusion after such an interesting investigative part of the article… in the end 60 Hz is enough for a screen to appear constantly alight, but what about choppiness? Aliasing was only touched upon… even if 100% anti-aliasing can’t be guaranteed, ANY numbers would have been helpful, to say for example whether a 240 fps monitor could remove 33% or 50% of aliasing in games etc… no numbers at all means any numbers at all to someone reading the article, you know.

    I did come to the same conclusion as you before reading this article though: it’s not about when people can no longer distinguish the frames, it’s about when they stop caring.

  6. “It’s easy enough to measure the critical flicker fusion frequency in human observers by asking them to report their sensations when viewing a stimulus flickering on a CRT display with fast phosphors.”

    Fast phosphors? How fast? My CRT is painful to look at when running at 60 Hz; obviously whoever was running that test didn’t have a screen as good as mine.
    So what happens when you have an even better screen? Has anyone actually tested the limits of “flicker fusion” with a light source that only stays on for exactly half of each period? What happens if the light source only stays on for a tenth of each period?
    The frequency limits of human vision are probably a lot higher than previous testing suggests.
