Video super-resolution is a technique for increasing the resolution of a movie by exploiting the redundancy between frames. It’s easiest to understand by first considering the corresponding technique for still images. You can increase the effective resolution of an image by taking multiple pictures, each offset by a fraction of a pixel, and then combining them. Some early digital cameras had this pixel-shifting technology built in: the CCD (or perhaps a lens element) was physically displaced by half a pixel horizontally, and two exposures were stitched together. I remember an early JVC camera from the late ’90s, back when I was working in an electronics store, that cheerfully advertised being able to increase its resolution in this manner to a then-whopping 3 megapixels.
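As a toy illustration of the stitching step, here’s a minimal sketch (the function name is my own invention) that interleaves four half-pixel-shifted exposures onto a doubled grid:

```python
import numpy as np

def interleave_half_pixel(im00, im10, im01, im11):
    """Combine four exposures, each offset by half a pixel, onto a 2x grid.

    im00 is the reference exposure; im10 is shifted half a pixel in x,
    im01 half a pixel in y, and im11 half a pixel in both.
    """
    h, w = im00.shape
    hi = np.zeros((2 * h, 2 * w), dtype=float)
    hi[0::2, 0::2] = im00  # reference samples land on even grid points
    hi[0::2, 1::2] = im10  # x-shifted samples fill the odd columns
    hi[1::2, 0::2] = im01  # y-shifted samples fill the odd rows
    hi[1::2, 1::2] = im11  # doubly-shifted samples fill the remaining gaps
    return hi
```

The interleaved result still needs the deconvolution step discussed next, since each sample was integrated over a full-sized pixel.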
Pixel-shifting will not truly double the resolution of the image by itself — pixels still integrate over a large area of space, so the images must be deconvolved against the point spread function of a CCD pixel to recover the true image. Blind deconvolution is of course prone to noise, but advances in image modelling have made natural image deconvolution quite reliable. As in the case of denoising, the more sophisticated your prior over natural images is (smooth, sparse in a wavelet basis, Gaussian scale mixture, etc.), the better the results of the deconvolution.
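One simple, classical way to do this deconvolution is a Wiener filter, which regularizes the inversion with a flat noise-to-signal estimate — a crude stand-in for the fancier image priors mentioned above, but enough to show the idea. A sketch, assuming the PSF is known and boundary conditions are periodic:

```python
import numpy as np

def wiener_deconvolve(image, psf, noise_power=1e-2):
    """Deconvolve an image against a known PSF with a Wiener filter.

    noise_power is the assumed noise-to-signal ratio; larger values
    suppress noise amplification at frequencies where the PSF is weak.
    """
    # Pad the PSF to image size and center it at the origin (periodic).
    psf_pad = np.zeros(image.shape, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    # Wiener filter: conj(H) / (|H|^2 + noise_power), applied in frequency space.
    F = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft2(F))
```

A true blind deconvolution would also have to estimate the PSF itself, which is where the image priors really earn their keep.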
Video super-resolution uses the same basic premise. Subsequent frames in a video are only slightly different from each other — this is what makes video highly compressible. You can model the change from one frame to the next as a non-rigid transformation. Once optic flow has been estimated, the images can be aligned by undoing the warping it describes. Then subsequent video frames can be stitched together, as in image pixel-shifting, to form a super-resolution video. Actual algorithms for video super-resolution can blur the lines between the optic flow, stitching, and deconvolution steps; see this IEEE article for more details on the subject.
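To make the align-then-stack idea concrete, here’s a sketch that estimates a single global translation by phase correlation — a much simpler stand-in for the dense, non-rigid optic flow a real system would estimate — and then averages the aligned frames:

```python
import numpy as np

def register_translation(ref, frame):
    """Estimate a global integer translation between frames by phase correlation."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak position into signed displacements.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def align_and_stack(frames):
    """Warp each frame back onto the first one, then average the stack.

    Real systems warp with subpixel interpolation along a dense flow field;
    np.roll suffices for this integer-shift, global-translation sketch.
    """
    ref = np.asarray(frames[0], dtype=float)
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = register_translation(ref, f)
        aligned.append(np.roll(np.asarray(f, dtype=float), (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

Stacking aligned frames mainly buys you noise reduction; the resolution gain comes from placing the aligned samples on a finer grid, as in the pixel-shifting example.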
These techniques have been available to researchers for a while. For example, this Matlab package implements several different video super-resolution techniques, while this one implements one specific technique in detail. I’m sure you can think of many applications in scientific image processing.
Video super-resolution is slowly making its way into consumer products. MotionDSP offers the Ikena product, aimed at forensic professionals, to perform super-resolution on the fly. It can be used to read a license plate number (above) or capture the image of a criminal, for example. Take a look at this video to see it in action; it looks straight out of a cop show.
Now they’ve taken this technology and offered it as part of a consumer-oriented program called vReveal. Going from version 2 to version 3, the company indicates that it either removed the feature, renamed it, or folded it into other components of the software. I’m sure you can find the old version on the interwebs, however. The same technology is used in recent iterations of the software to perform image stabilization and denoising. Here’s a demo of another company’s video super-resolution product.
Related technology for noise reduction is also available in plugins to video editing software. Here’s the result of applying the technology to noisy video shot at night. Pretty dramatic.
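As a rough illustration of what temporal noise reduction can do, here’s a sketch of a per-pixel temporal median filter. It assumes a static (or pre-aligned) scene; moving objects would need motion compensation first, just as in the super-resolution pipeline.

```python
import numpy as np

def temporal_median_denoise(frames, radius=2):
    """Denoise a video by taking a per-pixel median over a temporal window.

    radius controls the window half-width; a window of 2*radius+1 frames
    is used, clipped at the start and end of the clip.
    """
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    n = len(frames)
    for t in range(n):
        lo, hi = max(0, t - radius), min(n, t + radius + 1)
        out[t] = np.median(frames[lo:hi], axis=0)  # median rejects outlier noise
    return out
```

The median is attractive here because it rejects impulsive noise (hot pixels, compression speckle) that a plain temporal average would smear across frames.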
Unfortunately, it’s difficult to relate what these commercial, closed-source packages do to specific algorithms in the literature; although a lot of companies claim to increase video resolution or reduce noise, it’s hard to say exactly what the products do behind the scenes. Nevertheless, it’s interesting to see industry follow up on fundamental research, even if it is only to enhance silly home videos. I’m sure better software will become available in this area as soon as somebody figures out how to make money out of this thing.