A Self-Interview
by George Michael
I started at the Laboratory on the 16th of April, 1953. Someone met me at the
gate and told me I would be sitting in the "Cooler" until my Q clearance came
through. It's been downhill since then.
I can honestly state that interviewing myself is boring. The purpose of this
recording is to talk about some of the earliest memories I have of things that
had to do with graphics. To do that, I will talk first about the UNIVAC. Now,
the UNIVAC arrived about a week after I got there. I got my clearance in one
month - never having been anywhere interesting, nor done anything even
moderately contentious - but it took six months or so to get the UNIVAC
cleared (to run). And while we were waiting, we practiced writing UNIVAC
programs and did a collection of manual calculations and plots to produce,
exactly, the numbers that we expected the UNIVAC to produce when it ran.
Some of the bigger design calculations, then as now, took between 20 and
40 hours to run. Of course, they were very simple compared to what's being
done now. Simple or otherwise, they produced reams and reams of output. It
took a long time to get at the output because it was printed, one character at
a time on typewriters. These typewriters were somewhat modified Remington
Rand typewriters with very wide carriages. We produced, literally, reams of
output on these printers. We would then take the output, plot the numbers
appropriately, by hand, on graph paper and, truthfully, it took too long. One run
would take over a week to plot, so that you could see just the crudest of the
trends in the calculation, or what was much worse, find out that there was
some sort of error and all the computer time had been, in a sense, wasted.
However, this intimacy with one's output seemed to give a special kind of
intuition about how the program was running.
We decided that we needed some kind of automatic plotter. Now, the chief
engineer on the UNIVAC, Lou Nofrey, and Bob Crew, Chet Kenrich, and Dick
Karpen, all assistant engineers, found a commercially available flatbed
plotter. A crude vacuum held down the paper, and you could move an X-Y
plotter on top of it, and a little hammer would smack a die through a wet ink
ribbon, and leave a symbol where it hit. One of the problems with this plotter,
which was about the only thing then available, is that it had no interface to
the UNIVAC; the only thing it understood was IBM punched cards.
So, complexity on complexity, what was done was to feed the data written on
an ordinary UNIVAC output tape (a metal oxide coating on a metal tape) into a
converter that could produce punched cards. For output, the UNIVAC
recorded on tapes at a density of 20 pulses per inch, that is, 20 bits per
inch in each of seven parallel tracks, or 20 characters per inch. The
trouble was that
there was no such converter. So, Lou Nofrey and his crew designed and built
such a machine. It would accept input from the tape and produce punched
cards (old-fashioned, ordinary IBM cards). We then took the IBM cards and
put them through an IBM "Summary Punch" that was hooked up to the
plotter (I think I remember that Benson Lehrner made the plotter). After a
suitable setup, which included a lot of twiddling and diddling with multiplier
potentiometers (pots) and scaling pots, and so forth, and much effort to
ensure that the paper was aligned exactly with the X-Y axes of the plotter,
you could actually get it to read the cards and produce a series of points. And
then, by carefully drawing the axes, you could read values from the plot. Not
surprisingly, it turned out that this really wasn't much faster than doing it by hand.
An added "feature" was that the plotter refused to stay adjusted, so
comparison of successive graphs was essentially impossible.
The entire procedure was too complicated and time consuming to be useful
in our daily work but that, I believe, is a first instance of computer graphics at
the Lab. It was between 1953 and '54 that this work was done. The plotter
didn't survive too long because it required much too much complex hand
waving to get the plot. In most cases, it was actually faster to plot
something by hand, even though it took a week to do, say, the fifty plots that
characterized a problem, that is, to reduce all of the results from an entire
problem to the graph paper.
The other thing that certainly contributed to the early death of that plotter
was the fact that in 1956 we took delivery of some IBM 704s, on one of
which there was a cathode ray tube that was imaged by a camera. It was
called the Model 740. And there was also a direct-view tube that went with
it, the Model 780, a modified television set. One could look at the results as they were being
played out at the same time that the objects were being plotted by the 740
and recorded on film. The difference in speed between the Benson Lehrner
plotter and the IBM 740/780 was so dramatic that it quickly became the
favorite way to produce graphical output from the design calculations. Of
course, there's no free lunch. As delivered, the 740 was loaded with
inadequacies. The unit, while faster than anything else we had, was
impressively slow; the distance between frames was not constant; the lens
and the film used did not match the phosphor; and a single frame advance took
on the order of half a second. Initially, we were treated to the slow production
of poor quality pictures. However, fixing these problems turned out to be
easy, and it was stimulating and gave us the nerve to go to bigger and better
things.
Before discussing some software efforts, I would like to just mention some
other graphical developments from these early years. Starting with the IBM
740/780, we will be discussing the Digital Equipment Corporation PDP-1, the
Data Display Incorporated dd80, the Information International Incorporated
FR-80, the Scientific Data Systems SIGMA-7, and the Television Monitor
Display System. Some of the persons being interviewed will discuss, more
extensively, these and other aspects of the development of graphics
capabilities at the Laboratory. Even including these discussions, not all the
graphics work that went on at the Lab is going to be covered. The ones that
are mentioned are intended to highlight some particular aspect of the
graphical work that went on. It is not my intention to slight the other work. I
am simply not as familiar with it as I should be. Where possible, I hope later
to interview some of those who were more involved, but were missed for one
reason or another in my initial set of interviews.
In the abstract, the early display devices supplied hardware solutions for
elementary actions like:
- Set the beam to the point (X, Y)
- Move the beam to the point (X, Y)
- Change the intensity of the beam
- Move the film
With such primitives, it is necessary to develop programs that can display
characters, draw lines and, generally, produce pictures that satisfy some set
of requirements. The early display devices were delivered without such
software; a situation that was both awkward and liberating. One might say that,
instead of delivering, manufacturers abandoned their hardware at our door,
and it became both our job and our pleasure to write the software (it didn't
have that name then) that we actually got to use.
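To make the division of labor concrete, here is a minimal sketch, in modern Python, of the kind of software layer one had to build on top of those four hardware primitives. Every name here is invented for illustration; nothing corresponds to an actual device interface.

```python
class Display:
    """Toy model of an early film-recording display: exposed points
    accumulate on the current frame until the film is advanced."""

    def __init__(self, size=1024):
        self.size = size
        self.position = (0, 0)
        self.intensity = 1
        self.points = []    # exposures on the current frame
        self.frames = []    # completed frames

    def set_beam(self, x, y):
        """Expose a point at (x, y) at the current intensity."""
        if 0 <= x < self.size and 0 <= y < self.size:
            self.points.append((x, y, self.intensity))
        self.position = (x, y)

    def move_beam(self, x, y):
        """Reposition without exposing."""
        self.position = (x, y)

    def set_intensity(self, level):
        self.intensity = level

    def move_film(self):
        """Advance the film: close out the current frame."""
        self.frames.append(self.points)
        self.points = []

# Everything else -- characters, lines, graph paper -- had to be built
# out of these.  For example, a dotted baseline across the frame:
def baseline(display, y, step=4):
    for x in range(0, display.size, step):
        display.set_beam(x, y)

d = Display()
baseline(d, 512)
d.move_film()
```

The point of the sketch is only that the hardware gave points and beam motion, and everything a user would call "a picture" was software built above that.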
One of the most interesting pieces of software developed very early was a
subroutine called "Plotla" written by Norman Hardy. Its only function was to
plot, between any two points in the lattice, a "best straight" line
represented by a series of points spaced at every point, every
other point, every fourth point, and so on. It was a very tight and fast routine.
The speed of the 740 was such that it could only plot, in 151 microseconds,
either a point at some location (X,Y) in a raster of 1024 by 1024 points, or
starting at a given point, it could draw either a horizontal line, a vertical line,
or a 45-degree diagonal line going from lower left to upper right from that
given point. With that capability, one had to see everything you'd like to
plot in terms of those three or four simple little capabilities. Norman's routine,
even though there was some nontrivial amount of calculation being done to
get the points, was a tight routine.
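Hardy's actual code is not preserved in this account; the following is only a sketch of the idea. It uses the integer stepping later popularized as Bresenham's algorithm (which postdates Plotla) to produce a "best straight" line of lattice points, with the selectable spacing described above.

```python
def plotla(x0, y0, x1, y1, spacing=1):
    """Sketch of a Plotla-like routine (not Hardy's original): return
    lattice points approximating a straight line from (x0, y0) to
    (x1, y1), keeping only every `spacing`-th point along the way."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy
    x, y = x0, y0
    i = 0
    while True:
        if i % spacing == 0:          # every point, every other point, ...
            points.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err                  # midpoint test: step in x, y, or both
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
        i += 1
    return points
```

Note how a 45-degree diagonal falls out as the case where the routine steps in x and y on every iteration, which is exactly one of the three line shapes the 740 hardware could draw on its own.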
Being as fast as possible, Plotla was used extensively. So, everything,
practically speaking, was built out of that and one other subroutine. We used
some data tables to design characters that could be plotted in a 5 by 7 matrix
of points. With this capability, we were able to build routines that simulated
all kinds of graph paper and plotted all kinds of points, and also, of course,
with some trickery, produced strange kinds of surface texturing features. So
an area could be textured or otherwise marked according to some prescription.
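The character tables can be illustrated with a sketch like the one below. The glyph data is invented (a rough letter "T"), not the Lab's actual table; the idea is simply that a 5 by 7 matrix of dots turns into a short list of points for the display.

```python
# Hypothetical 5x7 glyph, listed top row first; '#' marks a plotted dot.
T_GLYPH = [
    "#####",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
]

def char_points(glyph, x0, y0):
    """Turn a 5x7 glyph into display points, with (x0, y0) the
    lower-left corner of the character cell."""
    points = []
    for row, line in enumerate(glyph):
        for col, cell in enumerate(line):
            if cell == "#":
                # rows are listed top-down; flip so y increases upward
                points.append((x0 + col, y0 + (len(glyph) - 1 - row)))
    return points
```

With tables like this, the same point-plotting machinery that drew graphs also lettered the axes, and, as mentioned above, trick variations on the tables gave textures and area markings.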
Another interesting thing (this is now about late 1956 or early 1957) is the
so-called discovery of motion pictures. It happened this way: Every week, the
group of designers would get together and talk about their designs and what
problems were current. At these meetings, people would discuss and show
how their calculations were progressing. Among those was Chuck Leith. So,
one day he was showing some of the results that he had obtained by using
triangular zoning [1]. Besides allowing
some very interesting approximations
and averaging techniques, some theorists believed tiling with triangular
zones was the most natural way to difference the equations in a region.
Anyway, his results, showing the movement of the mesh, were recorded on a
strip of 35mm film and shown in a slide projector. When he was finished he just
pulled the filmstrip out of the projector and the image on the screen moved! It
electrified everyone in the room. This was incredible! The images were
moving! And, at once, everybody saw the value that motion would contribute
to the presentation of such results.
The use of motion pictures in science was common enough, but it seems
that no one had yet thought of them as being useful when doing mathematics
and physics with a computer model. But there it was: a powerful new way to
perceive one's results. So, several of us began looking into the uses of
computer generated motion sequences. In general, however, computer time
was too valuable to use it making movies. So, the movie-making mechanisms
had to be careful not to use too much computer time and, certainly, to never
waste time. Among other things, this inspired efforts to develop things away
from the big computers, and a search for a small computer that could be
used for movie making. One other approach was followed: the movie making
capability on the big computers was improved.
One of the first things we decided was that the camera was not acceptable
for movie making and had to be replaced. At this point enters our Technical
Photography Division. Then, Bill Jordan was in charge of it, but the guys who
did the work included Dave Dixon and perhaps one or two others that I don't
remember now. But what they did was to interface a real movie camera to
the computer. The camera had pin registration and a claw pull-down, so that
the amount of film that was moved at each frame advance was very, very
accurately controlled to within a few ten-thousandths of an inch. And with that
camera, we started producing motion pictures. The next problem that
showed up was that the film being used was not acceptable. It had, in one
case, a bluish cast, and the images were very blurry. I started dealing with
some representatives from Kodak, complaining about the need for a much
better film. They started bringing out samples of new coatings that would
respond better to the kind of phosphor that was on the 5-inch tube in the
Model 740. It was an exciting, interesting thing to do, to learn how to match
the capabilities of the lenses, and the film, and the film processing, and the
phosphor in the tube, and the amount of energy that was being produced by
the unblanked signal, and so forth, and get that all to work together
harmoniously, so that we got sharper images. But we did it. I should add that
some of these developments found their way into a neat Kodak handbook on
CRT photography, and some very excellent films were developed, giving
more resolution and more photographic speed.
It was a tremendous amount of fun, but not being satisfied with just that, the
next thing that people started talking about was, wouldn't it be neat if we
could produce color pictures? By color-coding the data in each frame, one
could see very quickly lots of new information that was not so obvious from
looking at numbers or even monochrome pictures. This was, I would say, in
the middle to the latter half of 1957.
The problem of producing color was, in general, much more difficult. One can
think of the customer as the human eye, a very high precision device. It had
to see the computer colors as comparable with everyday color. The
variables in this study included the CRT phosphors, the color films, lenses
and film processors. We considered various things, but it was a suggestion
of Dave Dixon in our Technical Photography group that the color processes
that were available on film could be exploited best by organizing the
information we wanted to plot into separate frames of monochrome film,
each frame being destined for a unique, logical color. We called this the
Color Separation Method. Thus, you could imagine the information on one
frame would be meant to be projected onto the color film through a green
filter, the next frame through blue, and the next through red,
and so on. And, with the help of an optical-effects printer in the darkroom, one
would merge these pictures, one on top of the other, through appropriate
color filters, onto some ordinary, standard color film, and produce a
color image. We tried that, and although it took a long time, it yielded superb
color pictures. The problems of directly producing color pictures and movies
took considerably longer to overcome, so in the interim this Color Separation
Method was used, albeit sparingly. Nonetheless over the years, thousands of
colored movies were produced.
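The bookkeeping behind the Color Separation Method can be sketched as follows. The frame format and function names are invented, and the optical printer's triple exposure is idealized as simple additive mixing (so a point exposed through both the green and red filters merges to yellow).

```python
# Each monochrome frame is destined for one logical color, in a fixed
# filter order; merging them amounts to assembling an RGB triple per
# point.  Frames here are dicts mapping (x, y) -> exposure in [0, 1].

FILTER_ORDER = ("green", "blue", "red")

def merge_separations(frames):
    """Merge successive monochrome frames (in filter order) into one
    color image: (x, y) -> (r, g, b)."""
    image = {}
    for frame, color in zip(frames, FILTER_ORDER):
        for point, exposure in frame.items():
            r, g, b = image.get(point, (0.0, 0.0, 0.0))
            if color == "red":
                r = min(1.0, r + exposure)
            elif color == "green":
                g = min(1.0, g + exposure)
            else:
                b = min(1.0, b + exposure)
            image[point] = (r, g, b)
    return image
```

The appeal of the scheme was exactly this separation of concerns: the computer only ever produced monochrome frames, and all the color lived in the filter order and the darkroom step.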
Our very first attempts at this, other than just test runs and so forth, were
done with the help of Leith's program. Chuck modified the code so that some
of the contents of one frame would be designated as green data, and the
next would be in blue, and so forth. Of course, one could merge, for instance,
green and red and produce yellow, and things like that. This gave a usable
color capability without having to wait for the making of a special color CRT
and special films and lenses and so on. It turned out that something over six
years went by before one could produce color pictures directly from a
computer controlled CRT.
I took the first set of black and white (monochrome) runs that were
produced. Since we had no equipment of our own at the Lab to do this sort
of thing, I took them down to a place in Hollywood called Film Effects. The
person who had invented the optical-effects printer during the Second World
War ran it. His name was Linwood Dunne. And he thought we were crazy
when I told him what we wanted to do and how to do it, but he said, "It's your
money, you can do what you want." So, in this manner, he produced our first
Color Separated movies, through green, blue, and red filters sequentially.
And, lo and behold, we had real color output! It wasn't the best, most slick
color in the world, but it was very usable color output. And with it, you could
see, for instance, the hottest spots in a field in red, or, when you chose, to
show a shock position in yellow. It came out beautifully.
We brought this film back to the Laboratory and showed it to all the
designers, and while it was generally very well received, the impression,
again, was like the reaction to the Benson-Lehrner plotter several years
earlier: Ho Hum. It was too involved to become a part of the regular
production cycling that had to be done with these design codes.
Dave Dixon built an optical-effects printer that would do the job at the Lab.
With that little thing, many, many, movies in color were made, and when the
work outgrew the homemade version, we acquired a commercial version
where even more elaborate effects were possible. We tried putting the
effects printer under control of a computer, but it was a step too far into the
future; digital control was still too foreign to the film industry. And, as usual, by the time they caught up, the entire approach had been passed by.
The color films that Kodak and others now started producing for direct
exposure by CRTs got to be better and better, along with the processing that
was being done, and by our learning about better lenses, and better films,
and better CRTs and so forth, we could blend all of these things together and
produce the best color film exposures that were possible.
In 1960, say, we finalized the design of what was called a high-precision
cathode ray tube. Its raster was 4,096 by 4,096 points, and it had many
levels of intensity, and it could draw characters and draw lines, and so forth.
This machine, this precision thing, was to be an integral part of our first
acquisition of a PDP-1 computer from the Digital Equipment Corporation,
which had two basic goals. It was to be kind of a romper room for us to try
weird ideas, and it was to do all of the plotting that came off the larger
machines. You'd just bring over a tape and plot the stuff. The PDP-1 we got
was truly a romper room. With it, one could explore many areas that were for
us, really new. It was made to talk, play music, and do high-precision film
recording. It could accept magnetic or paper tape or punched-card input or
output and had one other unusual feature: It was able to digitize
photographic records. This was done initially using the 740 CRT on the IBM
704, but the PDP-1 was faster and far more accurate. This facility was called
the Eyeball, and was used for almost ten years to digitize many test films.
These films were held in the Eyeball's film mount device. A point (X,Y) was
displayed on the CRT and imaged on the film sample. One measured the
amount of light getting through the film and knew thereby, the photographic
density at that point. All of the readings were written onto a tape and taken
over to a larger computer where various filtering and analysis techniques
were used.
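The measurement loop just described amounts to the standard photographic-density calculation, density being the base-10 log of the ratio of incident to transmitted light. This sketch assumes an idealized sensor and an invented data layout.

```python
import math

def density(incident, transmitted):
    """Photographic density at one point, from light measurements:
    D = log10(incident / transmitted)."""
    if transmitted <= 0:
        raise ValueError("opaque sample or dead sensor")
    return math.log10(incident / transmitted)

def scan(film, points, incident=100.0):
    """Digitize a film sample in the Eyeball's manner: display a spot at
    each (x, y), measure the light getting through, record the density.
    `film` maps (x, y) -> fraction of light transmitted there."""
    return [(x, y, density(incident, incident * film[(x, y)]))
            for (x, y) in points]
```

The readings, written out point by point like this, are what went to tape for filtering and analysis on the larger machines.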
Somewhat independently, at Information International, Ed Fredkin, Ben
Gurley, and later Bob Waller produced a series of much more elaborate
Eyeballs, which they called Programmable Film Readers (PFR). The PFR-2
was the most precise of these, having, effectively, a 2^18 by 2^18
addressable raster, and this was used by an AEC contractor, EG&G, in
Nevada to digitize the films that were being produced there. The PFR-3 and
the PFR-1, at least, were used in rocket research to digitize rocket traces
and other shock analyses that came out of White Sands and North American
Rockwell and places like that. And PFR-0 and PFR-3 were used also in
various and sundry of the DOE laboratories to digitize things like bubble
chamber films that were produced by the big accelerators. Even with that
precision, it was decided by the persons who were doing the work, that that
was not sufficient. So, while they did a great deal of analysis, higher
precision measurements were needed. These were being done by other
kinds of specialized semiautomatic film analyzers based on the on-axis
measurement of photographic density. But I won't go further into those here.
Many years later, I met a person, Phil Peterson, who was then at Lincoln
Laboratory working on the TX-2. He independently had the same idea and
proceeded also to build an "eyeball," which he called the Flying Spot
Digitizer and used, among other things, to scan a slide of the Mona Lisa. He
then achieved some added fame by producing a very large picture of the
Mona Lisa on a Cal Comp 30-inch plotter. He did this while he was an
employee of CDC, and it had a great deal of publicity all over the United
States. Everybody wanted a copy of the Cal Comp Mona Lisa picture. It
became an instant collector's item.
Well, again, in the time between its design and delivery, new things came
along, not the least of which was, in our case, a display called the dd80,
which was built for us by Data Display Incorporated, in St. Paul. Data Display
was absorbed into CDC subsequently but, at the time, it was a separate little
company, and they were using an electrostatic deflection system instead of
the standard electromagnetic deflection. This allowed the dd80 to be much
faster than any other display system then in existence. And these machines
allowed us to return the display function to the production machines. On the
IBM computers, the interface was a data channel that delivered 36 bits every
6 microseconds to the display controller. We went through a typical
improvement sequence, getting better films and lenses.
With the help of Leonard Nelson, vice president of Ehrenreich Photo-optical,
we were able to get a CRT lens that dramatically improved exposures. The
lens was so good that it was produced in great number and used throughout
the AEC test facilities.
The dd80 CRT had a P-24 phosphor on it. Just as a measure of how things
had improved, when we started with the 740 on the 704, the cycle time to
plot a point was 151 microseconds. On the dd80 it was about 4.0 microsec.
Even with that short time, the new films were able to record the exposure. In
addition, much better looking characters, picked out of an ASCII matrix, could
be plotted in 9 microseconds. And lines drawn from anywhere to anywhere
on the face of the tube could be done between 6 and 30 microseconds each.
So, the machine was a very powerful, fast, and stable thing. And it led to a
major change in how we produced output on the computers for the design
calculations.
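A back-of-envelope calculation shows what those point-plotting times meant in frame terms. The frame of 10,000 points is an invented example; the microsecond figures are the ones quoted above.

```python
US_740, US_DD80 = 151, 4.0   # microseconds per plotted point

def frame_time_ms(n_points, us_per_point):
    """Time to plot one frame of n_points, in milliseconds."""
    return n_points * us_per_point / 1000.0

print(frame_time_ms(10_000, US_740))   # about 1.5 seconds per frame
print(frame_time_ms(10_000, US_DD80))  # 40 ms, i.e. movie-rate territory
```

On those numbers, a frame that tied up the 740 for a second and a half came off the dd80 in a few tens of milliseconds, which is why the display function could move back onto the production machines.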
We needed a fast camera. We found it in a company called Flight
Research, run by Bob Woltz. We got him to slightly modify a version of the
Flight Research camera that had been developed to photograph spark
chambers used at LBL and other accelerator sites. Our version of it had
thousand-foot magazines, and was capable of up to 30 frame advances a
second (that's almost two feet per second). So the frame advance, which
used to be a half a second or so on the 740, became a more acceptable
time. In consequence, we could run some calculations that were more or less
simple and photograph the output at 30 frames a second, and some of the
movies that are still in the archive today were made precisely that way.
I want to back up a little bit. Starting back in the early '60s, when we had the
PDP-1, and even on the 740 earlier, in addition to Norman Hardy, other
programmers like Bob Cralle, Garret Boer, and Gale Marshall were producing
useful software such as, given a large collection of values, plot a series of
isopleths in color. I remember also, David Mapes, who had an intuitive grasp
of the binary character of the display routines. We put together great
quantities of little routines that could be pulled out of a repertoire and stuck
into the physicist's FORTRAN code to do all kinds of plotting and graphics
massaging and so forth. Pat Crowley and I produced a thing called
CRT-Batch, which was a collection of many of the plotting subroutines existing at
the time. These routines ran mostly on the production computers that used
the dd80s, until they had outgrown their usefulness. There are, however, still
versions of CRT-Batch running, and the dd80s were never superseded in
speed. It was one of the fastest things on wheels then, and probably still is,
but there arose a desire for more precision and flexibility.
We got this higher precision in the '70s by our acquisition of the FR-80s
from Information International Incorporated. The FR-80s (I think there
were six of them at the Lab at one point) provided a resolution of 80 line
pairs per millimeter. They had the ability of recording on a variety of film
formats and directly on paper. They took over essentially 80 percent of the
output load of the entire computational facility. The remaining 20 percent
were produced on a high-speed printer, the Radiation Printer, a 7 page per
second device. Its description will be covered at another time. The FR-80s
picked up a load of about one million microfiche per year with an average of
over 120 pages per fiche produced at a reduction of 42X. The FR-80s were
not anywhere near as fast as the dd80, but they produced an incredibly finer
image. As the people aged at the Lab, the speed got to be less important
than the quality of the output. There may be some sort of ontological
message here, but I'll leave that for some sociologists to ponder.
To return to the question of software, I think what should be inferred from all
this is that, given the opportunity to fiddle around with some very strange
equipment, people tend to do interesting things especially if they're not being
hassled by deadlines or other things. We had some young and very bright
people who came to work at the Laboratory during the summers. One of
them, Ken Bertran, was a cooperative engineering student at Berkeley. And
he and I worked a lot together. We started this adventure on the 709 or
7090, I don't remember. But the idea was, given a display device and given
the fact that the computer is a powerful logic engine, produce, with Kenneth
doing most of the work, a program that could do Symbolic Algebra or
simulate things like an oscilloscope probe. Such capability is present today in
things like Macsyma and Mathematica. It was really important, in his version,
that the functions you were working with could be displayed on the screen of
a direct view CRT, and in stereo if needed.
We moved fairly quickly from the 704/709/7090 sequence to the PDP-1
as soon as it got in, and started producing things there, because there was
much less pressure on the programmer to get out of the way and let some
production work get done. With Ken's program, you could get a stereoscopic
view of the function and, now, if you trained your eyes the right way, you
could see the thing in vivid three dimensions and understand what the
function looked like. You could do calculations with the program and
understand the consequences of changing a variable or changing a parameter.
We did two other things from which not much came. One was an
attempt to digitize voice such that you could synthesize new words from the
phonemes that were being spoken. This used the Eyeball, and the idea was
these sounds would be strung together to produce speech. First, it seemed
that we could read things and produce speech for use by persons who had
vision problems, and also let the computer instruct the operators about what was
needed, like telling them to mount a tape or replenish a paper supply.
Again, a complete solution of the problem was too tedious for people to use;
there were simpler but less capable things available and more generally,
this sort of dabbling didn't really fit in with the Laboratory's programmatic
work.
We tried reading music with the Eyeball. We could read the music and
select certain voices to have the PDP-1 play through a set of attached
speakers. But again, it was too tedious; the machine wasn't fast enough for
the job we wanted to do. All it was really good for was a "demonstration of
principle." Actually, an MIT student, Peter Samson, developed an encoding
scheme for writing and playing music on the PDP-1.
I should mention also that, when the Eyeball was developed, some of the
peopleGale Marshall, Roger Fulton, Ray De Saussure, David Mapes, and
Larry Galeswere very instrumental in making this piece of hardware a
usable tool for the purposes that we had intended. I'm not sure where any of
these persons are now, but the work they did in the early '60s should not be
forgotten; it was truly a pioneering effort.
I think it would be fun to end this diatribe with a little anecdote that I think is
charming. After we had the dd80 working fully, it was one of the sources of
illustrations never before seen. There was a program that Bob Cralle
produced called "Jahke." Inside this Jahke you could house all kinds of
programs.
At one point, I was asked to give a talk at an Association for Computing
Machinery (ACM) dinner meeting on St. Patrick's Day.
We put together samples of films and a first clip that said "Happy St.
Patrick's Day." Since we controlled the size of the characters, Bob made
them vanishingly small to begin with, and he displayed them backwards.
And then, as the film played, this phrase approached the viewer, and at an
appropriate time was supposed to turn around and become readily
recognizable as "Happy St. Patrick's Day." Well, so fine and good.
It was, of course, printed in all livid green, just beautiful. And it was the first of many little clips of film that were on this sampler that I had made up to
illustrate our work. We were privileged then to have such beautiful
equipment. And it was, well, it was fun to show others what you could do with
it. So, we went to the banquet, and we gave the talk, but something
happened.
The projectionist looked at the film before he put it into the projector, and
noticed that it was backwards, because he could see that it was "Happy St.
Patrick's Day," but it was backwards. So, he thought that the film had been
wrapped wrong on the reel. So he rewrapped it. He reversed it. I was
supposed to stand up and say, "Well, I want you to see that this is a special
day, because St. Patrick got rid of some of the snakes in Ireland, so we want
to wish you all a Happy St. Patrick's Day." And we show this thing marching
onto the screen, and I say, "Oh, it's backwards. Turn it around." But it was
already turned around, and now it went backwards. So, everybody got a large
laugh out of that thing. But it just goes to show you: you can't assume too
much about how others will view your work. It's working around corners that
you hadn't otherwise thought about.
Finally, there is a story to tell about graphics for the masses. In the mid
1960s I was shown some work at CDC being done by Malcolm MaCaulley
and Joseph Hood. Briefly, they were building interfaces in a CDC 3600 to go
from video to digital data and from digital data to video. The key point is that
the vision organ was an ordinary television display. Back at the Lab, we were
actively looking for a way to provide graphics capability directly in the user's
office that was affordable. Television monitors clearly were the lowest cost
displays we could find. The question of how and where to manage the
refresh did not have so obvious an answer. Then, some low-cost disks became
available. The one we chose was made by Armin Miller at Data Disk, Inc.
There was a head per track and the heads rode directly on the disk surface,
a cobalt-nickel alloy. The capacity was such that at least 16 video channels
could be placed on each disk. Then the vector display lists were scan
converted using a program running on the production computer, and
characters were produced through a video character generator. The scan
converter software was highly optimized; it produced raster displays faster
than a television camera. Thus was born the first model of the Television
Monitor Display System (TMDS). With the refresh of the screen being served
by writing the rasterized images on the disk, and the TV signals distributed to
individual offices over high-grade coaxial cable, the users finally had
excellent graphics in their offices. Later models of the TMDS grew from 16 to
almost 256 video channels, and then through some cross bar switches to
almost 2048 terminals. Also, later on, the refresh disks were replaced by
solid-state memories.
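The scan converter's job can be sketched as follows: walk a vector display list and set pixels in a raster that a television channel could then refresh off the disk. The display-list format and everything else here is invented for illustration; the real TMDS software ran on the production computers and was far more optimized than this.

```python
def scan_convert(display_list, width=512, height=512):
    """Rasterize a vector display list into a set of lit pixels.
    display_list: sequence of ("move", x, y) or ("draw", x, y) items."""
    raster = set()
    x, y = 0, 0
    for op, nx, ny in display_list:
        if op == "draw":
            steps = max(abs(nx - x), abs(ny - y), 1)
            for i in range(steps + 1):   # simple DDA line rasterization
                px = round(x + (nx - x) * i / steps)
                py = round(y + (ny - y) * i / steps)
                if 0 <= px < width and 0 <= py < height:
                    raster.add((px, py))
        x, y = nx, ny
    return raster

frame = scan_convert([("move", 10, 10), ("draw", 20, 10)])
```

Once rasterized, a frame like this needs no further computation to display, only refresh, which is exactly what made a disk track per video channel (and later solid-state memory) such a good fit for graphics in every office.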
As with all the other graphics devices mentioned in this interview, a more
detailed treatment is given in some of the other interviews.
Finally, I would like to reiterate that this discussion of some early graphics
adventuring at the Lab is not meant to be complete.
[1] At that time, and even today, most difference equations were written specifically for quadrilateral zoning.