IN MEMORY OF DON KELLY

D.H. Kelly

1923 - 1997


The following material is an excerpt from "Optics and Photonics News", July 1990, adapted for the web.

40 Years of Image Technology and Vision Research

by Donald H. Kelly

This article was adapted from the keynote address given at an OSA Topical Meeting on Applied Vision in San Francisco in July 1989. The Applied Vision talk had to be cut in half to fit these pages. To those who heard the original, I apologize if your favorite story has been omitted.

An undergraduate at Rochester

In high school I thought I might go to Columbia University and study journalism. However, an Optics Prize Scholarship sent me to the University of Rochester instead, and that was the beginning of my career in optics and visual science. I came under the influence of Professor Brian O'Brien, one of the movers and shakers in optical science. An alumnus of both Harvard and Yale, his inventions ranged from irradiated milk to military instruments. He studied waveguide effects in polystyrene models of retinal cone cells, scaled up 80,000 times to match the wavelength of the available K-band klystron.

Trained as a physicist, he taught physiological optics with the skill of a virtuoso. He made me believe that the days of Leonardo and Helmholtz were not dead-- that a man could still master anything worth knowing, if he put his mind to it. He was the Director of the Institute of Optics at the University of Rochester from 1938 to 1952. Partly in recognition of his work for the government during World War II, he won the Optical Society's highest award in 1951. He served as OSA president from 1951 to 1953.

World War II had started during my sophomore year, and O'Brien was often in Washington or at some military test site. Like most of his scientific peers, Professor O'Brien was determined to make the maximum possible contribution to winning the war. In his campaign to apply the resources of the Institute to the war effort, Professor O'Brien fully included the undergraduates. Being short-handed, he improvised. Every optics major had a job, some on government projects, some as teaching assistants. I was Bob Hopkins' lab assistant in geometrical optics.

Far from impairing our education, this wartime training gave us more motivation and resourcefulness than peacetime had ever provided. O'Brien treated everyone who graduated as an important national resource, moving heaven and earth to place him in the most useful situation possible. My entire class, some in uniform and some not, went off to Washington or Oak Ridge or some such place. "For the convenience of the Government," our orders read.

In my case, Professor O'Brien suggested that I apply for a Navy Commission and leave the rest to him. Because I was too young to be anything but an enlisted man, the Navy put my application on hold. So I went to Hollywood and started to work for Mitchell Camera Corporation. In those days Mitchell manufactured most of the professional motion picture cameras used in Hollywood, with a film-movement mechanism that was a triumph of mechanical design. My stay at Mitchell was short-lived; I was first drafted and then commissioned, late in the summer of 1944.

In the Navy

My first post in the Navy was the Patuxent River Naval Air Station, a huge testing operation on Chesapeake Bay, about 70 miles from Washington. As the Navy's only Aircraft Test Center, Patuxent River was directly under the cognizance of the Bureau of Aeronautics (BuAer). Among other things, captured enemy aircraft were reconditioned and test-flown at Patuxent.

I was assigned to the Aircraft Camouflage Section of the Tactical Test Unit. We tested various paint schemes proposed by BuAer, either to minimize the visibility of a given aircraft, or to maximize it (as in the case of target-towing or Air-Sea Rescue planes). But it seemed that the test results had never shown much difference, even among the most extreme paint jobs. I soon learned why.

Since it was impractical to do much psychophysics with real airplanes, the camouflage group was using small scale models, a few inches long. Cast in bakelite, these little models were originally training devices-- pilots used them to practice recognizing various planes from their silhouettes. After carefully applying the proposed paint schemes to the models, the camouflage people would take them outdoors and hang them from wires, in a big frame against a clear sky background. Then the observers would start from far away and walk toward the models, noting the distance at which each one reached the threshold of detection. This distance was invariably the same, to within the standard error, no matter what the paint crew did, because it was simply the resolution threshold for a small dark blob against a bright background.

When a full-scale airplane disappears, on the other hand, it is usually obscured by scattering or haze in the atmosphere, even on a clear day when the pilot is not trying to avoid detection. Now that is partially a contrast threshold, not simply a resolution threshold, and it occurs much closer to the observer than the scaled distance inferred from the model planes. As I tried vainly to explain to my new colleagues, they had neglected to scale down the atmosphere. Whole filing cabinets full of useless data had been collected in this way.
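In modern terms, that argument can be sketched numerically. The fragment below is a minimal illustration, assuming the usual exponential contrast-attenuation relation for viewing a target against a clear sky; the extinction coefficient, aircraft span, inherent contrast, and the helper names are illustrative inventions for the sketch, not figures from the Patuxent tests.

```python
import math

def apparent_contrast(c0, distance_m, beta_per_m):
    """Inherent contrast c0 attenuated by atmospheric extinction over a path."""
    return c0 * math.exp(-beta_per_m * distance_m)

def angular_size_deg(span_m, distance_m):
    """Angular subtense of an object of the given span at the given distance."""
    return math.degrees(2 * math.atan(span_m / (2 * distance_m)))

# Illustrative numbers only: a clear-day extinction coefficient (visibility
# of roughly 80 km) and a fighter-sized wingspan.
beta = 5e-5            # per metre
span = 12.0            # metres
inherent_contrast = 0.9

for d in (1000, 5000, 10000, 20000):
    print(f"{d:>6} m: angular size {angular_size_deg(span, d):.3f} deg, "
          f"apparent contrast {apparent_contrast(inherent_contrast, d, beta):.2f}")
```

The angular size shrinks only in proportion to distance, which is all the hanging-model test could measure; the apparent contrast of a full-scale airplane also falls exponentially with the unscaled air path, which is why the real detection range is much shorter than the one inferred from the models.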

Lacking any background related to atmospheric optics, none of the other camouflage officers could understand the folly of what they were doing. But when they were all mustered out, near the end of the war, I assumed command of Camouflage at Patuxent, just long enough to correct this fiasco.

Brian O'Brien's son, Brian O'Brien, Jr., was then an ensign working at the Camouflage desk in BuAer. On a quick trip to the Institute of Optics, young Butch had an ingenious little haze-box built that did scale down the atmosphere. By the time he delivered it to me, I was the entire Patuxent Camouflage Section. We did some experiments that proved the point about scaling the atmosphere, using the same camouflaged model planes but this time at realistic distances. I wrote up the results, wondering what to do with them. The answer was soon apparent.

Young Butch was leaving the Camouflage desk at BuAer, to work on something new called a "guided missile," and I was his designated replacement. For a while there, I held two desks in the Navy-- my old one at Patuxent, and my new one in BuAer. At BuAer, I wrote up orders for Patuxent to carry out the experiments that Butch and I had just done. Then I hurried down to Patuxent to receive the orders and convert our experimental results into an official report to BuAer. Eventually Patuxent received a commendation from BuAer for these superb results!

The Technicolor Era

On leaving the Navy in 1946, I went back to Hollywood, to work in the Research Department at Technicolor Corporation. I stayed there for 15 years. The big man at Technicolor was Dr. Herbert T. Kalmus, who founded the company in 1915 and ran it until he retired, 45 years later. He was the President, General Manager and Chairman of the Board; he also controlled a majority of the Technicolor stock.

The first Technicolor plant was built in Boston but eventually the whole operation moved to Hollywood. Dr. Kalmus even persuaded Professor Leonard Troland, the famous vision scientist, to leave Harvard and become Technicolor's first Research Director.

The so-called three-strip process, launched in 1932, was in its heyday. It included a unique camera, some very special emulsions and a whole factory full of custom-built printing and processing equipment, all covered by as many patents as possible. Technicolor had tried other processes over the years, but this was the first one that really produced good color rendition. For many years there was no competition worth mentioning, so it became very profitable.

The heart of the Technicolor camera consisted of three films, two apertures and one beamsplitting prism. The film movements and aperture assemblies were supplied by Mitchell, of course. Which film went through which aperture was dictated by the need for good color separation. Inside the beamsplitting prism was a thin layer of sputtered silver, whose reflectance was adjusted to balance the film speeds.

The Technicolor process required very bright lights on the set, partly because that metallic beamsplitter wasted a lot of light. It sent all wavelengths impartially to both apertures, taking no account of the spectral sensitivities of the red, green and blue negatives. The beamsplitter was about the only way the three-strip camera could be improved, and I set about to do it, by substituting a multilayer, dielectric, color-selective beamsplitter that would send each film just the wavelengths it needed, wasting nothing.

There were no suppliers of thin-film interference components in those days-- we had to make our own. Ironically, some of the pioneer work in making interference filters and beamsplitters had been done by Mary Banning, Harry Polster and others at the Institute of Optics while I was there. But I had never worked in the thin-film lab-- my friend Bob Hills had that job. Fortunately, I obtained some good advice and hired some very talented help. We had to design our own vacuum chambers, jigs, optical control systems, everything. There were no computers, but we did the necessary theoretical work on Hollerith cards, using a calculating card punch in the accounting department.
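For readers curious about what those punched-card computations involved, the sketch below shows the standard characteristic-matrix calculation for a lossless dielectric multilayer at normal incidence, assuming numpy; the layer indices, thicknesses, wavelengths, and function names are illustrative only and have nothing to do with the actual Technicolor design.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic 2x2 matrix of one lossless dielectric layer at normal incidence."""
    delta = 2 * np.pi * n * d / wavelength              # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_reflectance(layers, n_incident, n_substrate, wavelength):
    """Reflectance of a stack of (index, thickness) layers between two media."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, wavelength)
    b, c = m @ np.array([1.0, n_substrate])
    r = (n_incident * b - c) / (n_incident * b + c)
    return abs(r) ** 2

# Illustrative quarter-wave stack of alternating high/low index layers, tuned to 550 nm.
design_wl = 550e-9
n_hi, n_lo = 2.3, 1.38
stack = [(n_hi, design_wl / (4 * n_hi)), (n_lo, design_wl / (4 * n_lo))] * 6
for wl in (450e-9, 550e-9, 650e-9):
    print(f"{wl * 1e9:.0f} nm: R = {stack_reflectance(stack, 1.0, 1.52, wl):.2f}")
```

Each layer contributes a 2x2 matrix, and multiplying them out gives the reflectance of the whole stack; evaluating that product wavelength by wavelength is the kind of arithmetic that had to be ground out on a calculating card punch.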

In due course we got the job done. There were 34 Technicolor cameras in the world and each needed a spare prism, so getting the job done meant turning out 68 more-or-less identical dichroic beamsplitters. We had increased the speed of the Technicolor process by two whole F-stops. The new, faster Technicolor could shoot pictures without the old, blinding klieg lights. A circus picture called "The Big Top" was even photographed in available light.

In the early 1950's, the Technicolor Research Department was often in turmoil. Television loomed on the horizon. What should a movie-processing company do about it? Hollywood was agog over various schemes for projecting wide-screen pictures, both film and video. Technicolor contracted with SRI International to develop exotic TV devices. Exploiting my new status, I pushed for something far out: a bandwidth-compression project. We hired Bill Schreiber, who had been measuring picture statistics for his Ph.D. at Harvard, and his group developed a coding scheme with a 4-to-1 compression ratio. There was no computer simulation in those days. This was all analog hardware that could handle real-time, off-the-air signals.

Meanwhile, my thin-film group was champing at the bit. They had ideas for all kinds of new thin-film products to make and sell, but Technicolor wouldn't hear of it. Eventually they all walked out-- all except me-- and started their own company, under the name of Spectrolab. A few years later, the advent of high-quality, one-strip color negative film made the three-strip camera obsolete. There is one on display at the George Eastman Museum in Rochester.

But the other half of the Technicolor process was dye transfer printing, and color negative didn't make that obsolete. After being developed and edited, each of the Technicolor three-strip negatives was projection-printed through the back of a special film called matrix stock, which was processed to produce a positive relief image. After the demise of 3-strip, these matrices were printed from color negative instead, through the appropriate color-separation filters. Each matrix was then soaked in the appropriate subtractive dye: yellow for the blue-light image, cyan for the red, and magenta for the green. In precise registration, these dye images were then transferred one at a time to a strip of blank film, which became the finished product. This dye-transfer step was carried out continuously, 24 hours a day, on huge machines running at astonishing speeds (hundreds of feet a minute), thanks to a remarkable invention called pin-belt.

For anything over 200 release prints, dye transfer was still cheaper than printing color negative on color positive stock. Technicolor was very dependent on Eastman Kodak Company to design and supply all the special films that were needed to make the whole thing work. Kodak was happy to do this because Technicolor was a very big customer. Eventually the Justice Department intervened, to make these films available to everyone.

While at Technicolor I was trying to use sine-wave patterns as photographic test targets. The first paper I ever published, in 1957, was a short note about how to make good sine-wave targets. By the time I left Technicolor, in 1961, I had done several full-scale projects using these targets to characterize various photographic processes. One JOSA paper on sine-wave measurements described a nonlinear model that included the imagewise effects of processing in various developers.
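The basic measurement behind such targets is the ratio of recorded to input modulation at each spatial frequency. The fragment below is a minimal illustration of that ratio, assuming numpy and simulated traces; it leaves out the nonlinear, imagewise processing effects that the JOSA paper actually modeled, and the numbers are invented for the example.

```python
import numpy as np

def modulation(trace):
    """Michelson modulation (Imax - Imin) / (Imax + Imin) of a scanned trace."""
    return (trace.max() - trace.min()) / (trace.max() + trace.min())

# Simulated example: a sine-wave target before and after a blurring process.
x = np.linspace(0, 1, 1000)                            # position in mm
freq = 20.0                                            # cycles per mm
target = 1.0 + 0.8 * np.sin(2 * np.pi * freq * x)      # input exposure pattern
recorded = 1.0 + 0.3 * np.sin(2 * np.pi * freq * x)    # what the process reproduces

transfer = modulation(recorded) / modulation(target)
print(f"Modulation transfer at {freq:.0f} c/mm: {transfer:.2f}")   # about 0.38
```

Repeating the measurement over a series of target frequencies traces out the sine-wave response of the photographic process.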

My favorite project involved laying down a latent image with light of one wavelength, and erasing parts of it (before processing) with another wavelength. Among other things, this could sharpen up a blurred picture by purely photo-optical means, without the benefit of masking, scanning, electronics or coherent light. That paper was called "Image Processing Experiments." Today image processing is an everyday term, but I believe that was the first use of the phrase in its present meaning. (If anyone can cite an earlier coinage, I'll withdraw my claim!)

Flicker at UCLA

Meanwhile I was also enrolled as a graduate student in engineering at UCLA, commuting from Hollywood to Westwood. In 1953 I had taken a summer course at MIT from Shannon, Wiener and other leading lights of information theory, and I resolved to somehow apply systems analysis to the human visual process. (So did David Robinson, who was in my class.) I knew some of the problems with spatial sine-waves, from my work at Technicolor, so I chose temporal ones for my dissertation, hoping they would be easier. I had thought I would be the first to study sine-wave flicker (except for Herbert Ives, who pioneered almost everything). But then I heard about a Dutch engineer named Hendrik DeLange, who was doing some very similar experiments. We corresponded at length, and finally met in Amsterdam in 1963.

The Itek years

In 1961, when I moved to Itek in Lexington, Massachusetts, it was the newest, shiniest, hi-tech company on Boston's Route 128-- its greatest claim to fame was the manufacture of spy-in-the-sky satellites. Itek was an acronym for "information technology," which included imaging devices of every kind-- chiefly optical, photographic, and electronic. The former Physical Research Laboratory of Boston University was the nucleus around which Itek was formed.

I joined a group that included Bob Shannon, Dick Barakat, Raoul Van Ligten, Dick Swing and others-- officially, the Optics Department, under Geoff Harvey. Geoff reported to Dow Smith, Director of Research, who in turn reported to Duncan McDonald. Bob Hills ran a neighboring division. Brian O'Brien, Jr. was not far away.

In those days visual scientists normally built a special optical system for each experiment, which was later dismantled for the next experiment. At Itek, I set about to design a general-purpose visual stimulator that could be used for many different experiments. One early design involved a He-Ne laser and a coherent optical system that formed controllable diffraction patterns directly on the retina. With Itek's optical shops at my disposal, that wasn't too difficult to build, but laser diffraction provided only a limited range of spatial patterns.

When Bob Hills sent me on a lengthy mission to Itek's Palo Alto division in 1963, I moved my family back to the West Coast. There was a problem with the so-called "multi-band camera," an aerial camera that took 9 pictures at once, in 9 different wavelength bands, from the near-ultraviolet to the near-infrared, using 3 different rolls of film. Each of these 9 images had to be examined by Itek's photo-interpreters, and they were not happy about it. The main thing they could see was film accumulating, 9 times as fast as ordinary color photography. Vaults were overflowing, just from preliminary tests, and yet no one knew whether 9 bands actually revealed more than 3.

Aubrey Bailey, a former colleague from Technicolor, had joined Itek West, and I put him to work on this problem. We wound up with a factor analysis or eigenvector technique that compressed most of the information from an arbitrary number of bands into 3, which could then be viewed as a single, 3-color image. In practice, this involved photographic masking or electronic matrixing in order to derive each of the 3 final images as the sum of positive or negative, enhanced- or reduced-contrast versions of the original bands.

We suppressed the tone scale that was common to all the input images, because it contained no spectral information. Then the first derived spectral vector typically contained 80 or 90% of the variance in the scene, the second, 10 or 15, and the third, a few percent more. Printing them in the order green, red, blue, we obtained a brilliant color picture (with very false colors, of course) that typically contained 98 or 99% of the spectral variance of all the original bands. All objects with the same spectral signature were reproduced as the same color and brightness, regardless of their original brightness levels.
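Here is a minimal sketch of that eigenvector technique, assuming numpy and a software implementation rather than the photographic masking and electronic matrixing used at the time: each pixel is treated as an N-band vector, the tone scale common to all bands is removed, and the three largest-variance components are assigned to the display channels in the order described. The function name and the synthetic data are illustrative.

```python
import numpy as np

def multiband_to_three(bands):
    """Project an (N, H, W) multiband image onto its first three principal
    components and return an (H, W, 3) false-colour array scaled to 0..1."""
    n, h, w = bands.shape
    pixels = bands.reshape(n, -1).T                        # one N-band vector per pixel
    pixels = pixels - pixels.mean(axis=1, keepdims=True)   # suppress the common tone scale
    cov = np.cov(pixels, rowvar=False)                     # np.cov removes per-band means
    eigvals, eigvecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    top3 = eigvecs[:, ::-1][:, :3]                         # three largest-variance vectors
    scores = pixels @ top3                                 # (H*W, 3) component images
    scores -= scores.min(axis=0)
    scores /= scores.max(axis=0) + 1e-12
    # First component to green, second to red, third to blue (columns in R, G, B order).
    rgb = scores[:, [1, 0, 2]]
    return rgb.reshape(h, w, 3)

# Usage with 9 synthetic random bands standing in for the multi-band aerial imagery.
fake_bands = np.random.rand(9, 64, 64)
print(multiband_to_three(fake_bands).shape)                # (64, 64, 3)
```

Pixels with the same spectral signature then land at the same point in the three-component space, so they print as the same false colour regardless of their original brightness.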

As a laboratory test, we made up two classes of visually gray tiles, with different spectral signatures but carefully designed to be metamers for ordinary color film, as well as for the eye. The upper picture was taken with Ektachrome. We decided that 5 bands would be enough to demonstrate the point, so the lower picture was taken of the very same objects, with a five-band process. As you see, the result clearly sorted out the two spectral classes. A lot of spectrophotometry, densitometry, and computation were required to get this final result, but of course it could all have been automated. The process never got used, because the whole multiband project had somehow fallen into disfavor.

While completing my original mission, I had continued to design visual stimulators. In 1965, Itek negotiated a contract for me to build one using a cathode-ray tube (CRT) display, under the sponsorship of the Night Vision Laboratory at Ft. Belvoir, Virginia (which was developing the Army's sniperscopes and snooperscopes). But as Manager of Photographic Systems at Itek Palo Alto, I had many other things to do besides vision research. A year later, I was offered an opportunity to work exclusively in vision, at SRI International in nearby Menlo Park.

SRI International

After five years at Itek, my departure was very gradual. I started setting up a lab at SRI in March of 1966, and finally cleaned out my desk at Itek in June. That was 24 years ago, and I have been at SRI ever since. Well, almost ever since. Bob Boynton persuaded me to spend the academic year 1971-72 at the University of Rochester, working in the Center for Visual Science (CVS). He also persuaded Brian Thompson, who was then the Director of the Institute of Optics, to offer me a Visiting Professorship. Brian O'Brien was long gone, but Rudolph Kingslake was still on the faculty.

Eventually the flickering-pattern apparatus I had designed at CVS was also used in California. But when I first returned from Rochester in 1972, I was the only one doing vision research at SRI. Since then, I have been fortunate to work with many skillful collaborators, including Robert E. Savoie, Henry Magnuski, Robert Boynton, William Baron, Dirk van Norren, Hugh Wilson, Francine Frome, Thomas Piantanida, Christina Burbeck, Hewitt Crane, and Eugenio Martinez-Uriegas. There has been a thriving vision group at SRI for many years. Of my several sponsors at SRI, by far the most important one has been the National Eye Institute, of the National Institutes of Health. That agency has supported my work almost continuously since the early 70's.

Another important influence of this period was the invention of the Purkinje eyetracker, by Tom Cornsweet and Hew Crane. That device has had many different applications in basic and applied visual science, but to me it brought a whole new era of stabilized retinal images. After Cornsweet returned to academia, Crane continued to develop this principle, aiming toward a small, easy-to-use, clinical instrument. Ultimately, the production of eyetrackers was spun off to an appropriate commercial manufacturer.

The most important piece of apparatus is still a CRT visual stimulator, with or without an eyetracker attached. When I brought the Itek stimulator to SRI, there was nothing like it anywhere in the world. With later additions and refinements, this machine provided the data for 20-some experimental papers over a period of 14 years. With an eyetracker attached, it became the first CRT display to be used for stabilized-image work. But it was built in 1965, so it necessarily relied on analog signal generators that were full of vacuum tubes. By 1979 it was beginning to show its age. Time for a new machine, designed to study the spatial and temporal aspects of color vision.

The 1979 design was all digital. The computer not only runs the experiment and collects the data, it also generates the spatiotemporal-chromatic stimuli and stores them as an array of pixels, to be transmitted on demand to a full color display. Ten years ago we could not assemble a system like this out of commercial components; it had to be custom-built.
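As a rough modern analogue of that design, the sketch below precomputes the frames of a drifting sine-wave grating as an array of pixels, assuming numpy; the frame rate, image size, and grating parameters are illustrative, not the parameters of the SRI stimulator, and the chromatic dimension is omitted.

```python
import numpy as np

def grating_frames(size=256, cycles=8.0, drift_hz=2.0, frame_rate=60,
                   seconds=1.0, contrast=0.5, mean_level=0.5):
    """Precompute the frames of a horizontally drifting sine-wave grating,
    one pixel image per display frame."""
    n_frames = int(round(frame_rate * seconds))
    x = np.linspace(0, 1, size, endpoint=False)
    frames = np.empty((n_frames, size, size))
    for i in range(n_frames):
        phase = 2 * np.pi * drift_hz * i / frame_rate
        row = mean_level * (1 + contrast * np.sin(2 * np.pi * cycles * x - phase))
        frames[i] = np.tile(row, (size, 1))     # same luminance profile on every scan line
    return frames

stimulus = grating_frames()
print(stimulus.shape)    # (60, 256, 256): one second of frames at 60 Hz
```

Each precomputed frame would then be handed to the display on demand, one per refresh, which is essentially the pixel-array scheme described above.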

Over the years, this digital system has also required additions and refinements-- new color monitors, a rebuilt frame store, an upgraded computer, and countless software changes. So far it has yielded the data for another dozen publications, and it's still going strong. But again it is about to be left behind by rapid changes of technology. A 60-Hz frame rate is now the industry standard for computer workstations-- 10 years ago, we got the first one off the assembly line. Today, microcomputers can do things that our PDP 11/73 doesn't come close to. So it's time for a third generation machine. We are now building another vision laboratory, based on the Mac II. It will have all-new software, a high-resolution color display, and the latest model eyetracker. I hope it won't become obsolete for at least another five years.

