
Analog Angle

Can your camera capture trillions of frames per second? This one can.

“Fast is good, but faster is better” is a guideline that applies to many operations we try to instrument. Consider the development of strobe-based film-camera photography, with flashes as short as one hundred-thousandth of a second, largely led by MIT Prof. Harold “Doc” Edgerton beginning in the 1930s. Many of his “stop motion” photos are well-known icons, such as the one in Figure 1, and hundreds more are posted in a dedicated online gallery. Although he began his efforts with single-shot events, Edgerton also developed multi-shot systems that could capture a series of evenly spaced flashed images. In addition to the “wow” factor, his work was essential to all sorts of scientific research cutting across many disciplines.

Figure 1 This is just one of numerous flash-capture images for which Doc Edgerton is known; others are more scientific and research-oriented than this “attention-getting” photo. (Image source: Edgerton Digital Collections)

I wonder what Doc Edgerton would say if he saw a recent project from a team at the California Institute of Technology (Caltech): a system that can capture images at 70 trillion frames per second (fps). Further, unlike some multi-frame image-capture cameras, it does not do this for just one or a few frames, but for up to 1000 frames in succession. Nor does it require that the subject be a repetitive event, where the successive images are captured once per cycle but with a slight, precise time shift. Instead, this camera can meet both the frame-rate and frame-count numbers for single-shot events – a major advantage. According to the researchers, this incredible speed is useful for fast phenomena such as ultrashort light propagation, radiative decay of molecules, soliton formation, shock-wave propagation, nuclear fusion, photon transport in diffusive media, and morphologic transients in condensed matter. I’ll have to take their word on that.
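Those headline numbers imply a remarkably short observation window. As a quick back-of-the-envelope check (my own arithmetic, not a figure from the paper):

```python
# Back-of-the-envelope numbers for a 70-Tfps, 1000-frame single-shot capture.
FRAME_RATE_FPS = 70e12   # 70 trillion frames per second
NUM_FRAMES = 1000        # frames captured in a single shot

frame_interval_s = 1.0 / FRAME_RATE_FPS          # time between successive frames
record_window_s = NUM_FRAMES * frame_interval_s  # total single-shot record window

print(f"Inter-frame interval: {frame_interval_s * 1e15:.2f} fs")  # ~14.29 fs
print(f"Total record window:  {record_window_s * 1e12:.2f} ps")   # ~14.29 ps
```

In other words, the entire 1000-frame "movie" spans only about 14 picoseconds – short enough to watch light itself move a few millimeters.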

Not surprisingly, this image-capture system, developed by a team led by Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering, comprises an impressive and non-intuitive mix of analog, digital, and electro-optic technologies. Wang calls it compressed ultrafast spectral photography (CUSP); it combines a laser that emits femtosecond pulses of light with optics and a camera that is unlike any conventional meaning of the word in the analog (film) or digital sense. It employs advanced electro-optic and optical-physics principles and delves into the quantum properties of light and its interactions, Figure 2. In the illumination section, a beam-splitter pair followed by a glass rod converts a single femtosecond pulse into a temporally, linearly chirped pulse train with neighboring sub-pulses separated by tsp, which can be tuned according to the experiment. The resulting image includes both spectral dispersion by the grating in the horizontal direction and temporal shearing by the streak camera in the vertical direction.
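The key idea in that illumination section is that a linear chirp makes time and wavelength interchangeable: each sub-pulse's delay corresponds to a distinct wavelength, which the grating then spreads horizontally on the sensor. The toy model below sketches that mapping; the spacing, center wavelength, and chirp rate are illustrative assumptions of mine, not values from the Caltech paper.

```python
# Toy model of CUSP's chirped sub-pulse train: with a linear chirp, each
# sub-pulse's delay maps to a distinct wavelength, which the grating then
# disperses horizontally on the sensor. All numbers below are assumptions
# for illustration, not parameters of the actual system.
T_SP_FS = 14.3           # sub-pulse separation t_sp, femtoseconds (assumed)
LAMBDA0_NM = 800.0       # center wavelength of the source pulse, nm (assumed)
CHIRP_NM_PER_FS = 0.05   # linear chirp rate, nm of shift per fs (assumed)
N_SUBPULSES = 5

# Each entry pairs a sub-pulse's arrival delay with its center wavelength.
train = [(k * T_SP_FS, LAMBDA0_NM + CHIRP_NM_PER_FS * k * T_SP_FS)
         for k in range(N_SUBPULSES)]
for delay_fs, wavelength_nm in train:
    print(f"delay {delay_fs:5.1f} fs -> wavelength {wavelength_nm:8.3f} nm")
```

Because wavelength encodes "when" along one axis while the streak camera encodes it along the other, the system can later untangle the frames from a single exposure.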

Figure 2 This schematic of the active CUSP system for 70-trillion fps imaging shows: a) complete schematic of the system; b) detailed illustration of the spectral dispersion scheme (black dashed box); c) composition of a raw CUSP image in s-View. [Abbreviations: BS – beamsplitter; DMD – digital micromirror device; G – diffraction grating; L – lens; M – mirror.] (Image source: Caltech)

One of the key elements in the system is the streak camera, which combines some aspects of the old-fashioned, almost-obsolete cathode ray tube (CRT) with a CCD-based imager, Figure 3. The optics break up individual femtosecond pulses of laser light into a train of even shorter pulses, with each of those pulses capable of producing an image in the camera. Along the way, the arriving photons generate corresponding photoelectrons, which the sweep electrodes displace vertically to separate the images based on their time of arrival. As shown, the red dots between the sweep electrodes represent accelerated photoelectrons with different times of arrival; the top ones arrive earlier than the bottom ones. The sweeping voltage is applied to the sweep electrodes in streak mode, while no sweeping voltage is applied in focus mode.
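That time-to-space conversion is easy to state as a model: in streak mode, a ramping sweep voltage deflects each photoelectron by an amount proportional to its arrival time, while in focus mode there is no deflection at all. Here is a minimal sketch of that mapping; the sweep rate is an assumed illustrative number, not a parameter of the actual instrument.

```python
# Simplified model of the streak camera's time-to-space mapping: a linear
# sweep-voltage ramp displaces later-arriving photoelectrons farther down
# the sensor than earlier ones. The sweep rate below is an illustrative
# assumption, not a measured property of the real camera.
SWEEP_RATE_UM_PER_FS = 2.0  # vertical displacement per fs of arrival delay (assumed)

def deflection_um(arrival_time_fs: float, streak_mode: bool = True) -> float:
    """Vertical position on the sensor for a photoelectron arriving at t.

    In focus mode no sweep voltage is applied, so all arrival times land
    at the same vertical position (0 here).
    """
    if not streak_mode:
        return 0.0
    return SWEEP_RATE_UM_PER_FS * arrival_time_fs

# Earlier photoelectrons are displaced less than later ones:
for t_fs in (0.0, 10.0, 20.0):
    print(f"t = {t_fs:5.1f} fs -> y = {deflection_um(t_fs):6.1f} um")
```

The vertical axis of the raw image thus becomes a time axis, which is exactly the "temporal shearing" the CUSP description refers to.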

Figure 3 In this detailed illustration of the streak camera, you can see the mechanisms by which the arriving photon pulses generate electrons that are displaced based on the photon pulse’s time of arrival. (Image source: Caltech)

The researchers’ “high level” description of the system’s operating principles sounds like something out of Star Trek or The Twilight Zone: “It breaks the limitation in speed by employing spectral dispersion in the direction orthogonal to temporal shearing, extending to spectrotemporal compression.” While each of the mentioned techniques is already in use, the way in which they are combined here is apparently quite innovative, and also more easily accomplished in theory than in practice.

There’s no point in my attempting to provide a comprehensive summary of how it works, as the team’s intense but fairly readable nine-page academic paper, “Single-shot ultrafast imaging attaining 70 trillion frames per second,” published in Nature Communications, is a better source. They have also provided a 40-page Supplementary Information paper with additional details of the setup, including some fairly intense math that analyzes the physics as well as error sources, and there are videos of the operation as well. (Not surprisingly, this project was funded in part by the National Institutes of Health.)

This leap into trillion+ fps/1000-frame image capture is truly impressive, especially as it was achieved by merging distinct technologies in electronics, optics, lasers, imaging CCDs, and digital signal processing into a mutually supportive structure. The researchers have broken down the “silos” (to use that somewhat tired phrase) and devised a system where not only is the whole greater than the sum of its parts, but that whole exists only because they have meshed the disparate parts into a very new kind of system. It’s somewhat analogous to combining oxygen and hydrogen and getting water, which has no resemblance to those constituent elements.

Related content

Is there an optical rectenna in your future?

Underwater Optical Links Make 5G Look Easy

Silicon yields phased-arrays for optics, not just RF

Is Optical Computing in Our Future?
