An Interview with Norman Hardy
NH = Norman Hardy
GAM = George Michael
GAM: I'm interviewing Norman Hardy, and today is the 23rd of May, 1994. Norm, why don't you begin by telling us how you got to the Lab, and when, and where you came from, and so forth?
NH: Okay. I graduated from UC Berkeley in 1955. I was interested in computers and had a background in physics. And that was a good combination, it seemed. I knew that Livermore had computers, and when Dick von Holdt, who came to Berkeley to recruit staff, told me about the kinds of computers they were looking toward, I was very excited. So, the combination of physics and computers interested me very much, and I was interested in the Lab, I guess.
I was hired, and I arrived at the Lab on July 4, 1955, which is an easy date to remember. I'm not very good at dates, but that was an easy one. I sat around in the cooler for a while, and had occasional access to the IBM 650 before I got my clearance. I went over and tried to debug a couple of matrix inversion routines that I had been thinking through in my head, and discovered that it wasn't a good idea. But that was good experience on the 650.
Tad Kishi was the instructor for probably a batch of fifteen or twenty people who were coming to work at the Lab, mainly on the computer side, and getting their clearances. He was teaching such things as flowcharts and coding conventions for the IBM 701. The IBM 704 was on the horizon, but it was a little bit far out at the time. We talked a little bit about the UNIVAC 1 and the 650, which were the workhorses, the big computers at the time. The CPC card program calculator was already a little bit passé then.
GAM: Yes. Do you remember any of the people who were in the class with you and Tad?
NH: I would have to think.
GAM: Well, it's only of minor importance, but it was interesting that we thought that flowcharts were so important then.
NH: Yes. Well, back in school, D. H. Lehmer had given a course and had emphasized flowcharts. It seemed to be the important idea back at the time. Subroutines were certainly important, but they seemed so obvious as not to require a lot of attention in the course.
I think I've already mentioned the machines I first worked with. The 701 scarcely had an assembler. I forget too much about the details of how we got code into it. At least it had some solution to the relocation problem, so that as you wrote code you didn't have to worry about the absolute addresses it went to. But the programs to help you do this were still rather rudimentary.
GAM: I remember there was a thing called DUAL or something like that.
NH: Yes, there was a dual board, a floating-point interpretive package. There were still no index registers or floating point on the machine.
Okay, let me talk about a code, which was the first big production job that I was involved with on the 701. It was the first big physics code, and it lived on the 701 to complete one big simulation and a few calibrations of the follow-on code on the IBM 704.
It was an advanced version of earlier codes that had preceded it. I hadn't had much experience with the earlier codes because I was new. But I roughly knew the physics, and I understood the differential equations that described the physics and how they were turned into difference equations. That was really the physics and mathematics part of the project. Leo Collins and Bill Schultz were working on the code itself. It was done in KOMPILER.
GAM: The K-OMPILER. It was done by Kent Ellsworth and Robert Kuhn.
NH: Yes, the KOMPILER was spelled with a K, and it had been done by Kent Ellsworth, Ken Tiede, Leona Schloss, and Bob Kuhn. Later on, Leona Schloss married Sid Fernbach, the head of the Computation Department.
GAM: Yes. John Hudson also worked on the KOMPILER.
NH: John Hudson, yes. But I won't describe the KOMPILER very much. Leo Collins would remember the details of the KOMPILER very well. It was less than a FORTRAN, and it was very much oriented to support two-dimensional mesh calculations where the entire mesh would not fit in memory. Remember, the first memory on the 701 was Williams tubes.
GAM: Williams tubes, yes.
NH: And the KOMPILER was nominally successful. Remember, this was compiled on a machine with 4K words, and it was several passes. But it worked, and it supported our big physics code. The one job that this code on the 701 did was to check special calculations in special geometries to see if things were going to work as our theory said they would. There was a race to get the calculation done soon enough perhaps to influence what was actually tested in the Pacific. I forget the dates.
GAM: This was 1956?
NH: Yes. And it got close enough to the right answer as I recall, but I'm rather hazy on that. I didn't follow the test results that well.
GAM: Well, there were all kinds of results, so it's not necessary that you follow them. There are several references that will be included in this study that the Defense Nuclear Agency (DNA) has collected, and it's got all that stuff in it. It was interesting. My remembrance of the 701 was its unreliability.
NH: Yes. That's interesting compared to today's computers. There was an IBM engineer allocated to be there twenty-four hours a day. On most shifts he would have something to do, and generally it was replacing a Williams tube. The machine had no parity, and so the requirement for maintenance was generally discovered by a program either crashing or getting absurd results—you know, getting exponents that were absurd. So some substantial fraction of productive time was lost, either by re-running or by the IBM engineers actually looking for the bad Williams tube.
GAM: Well, I remember running one sweep fifteen times and never getting the same answer. Do you remember Ernie, the guy with the bad leg, the IBM guy?
GAM: He came by with a plastic-handled screwdriver and was banging on the tube chassis. And he found the microphonic tube that was causing the trouble. What a way to find bugs!
NH: The UNIVAC at the time, with which I didn't have much direct experience, was highly checked.
GAM: I'll say.
NH: The 701 was much faster than the UNIVAC, but every once in a while people made the observation, "Wouldn't it be nice to have a fast, reliable computer?" And eventually they came.
NH: The physicists who worked on this code were Roland Herbst, Mike May, and to a much lesser extent, I was sort of in the mathematical interface between the physics and the code on that one. As soon as it had done its thing on a 701, we immediately started on the 704 version, as I recall. The 704 had not been delivered.
GAM: Well, it came in about April of 1956.
NH: Yes, 1956. I don't recall�perhaps the 704 had arrived, but we'd had good enough experience with the compiler on the 701 to plan to do this big code in FORTRAN for the 704. FORTRAN was a much superior language, but the size of the code, as it turned out, and the reliability of the computer and the speed of the compiler were such that it couldn't finish compiling. But I'm getting ahead of myself here. Chuck Leith joined us as we began to plan for the 704, and contributed a lot of interesting mathematical ideas about how to solve this.
GAM: He was working with SIMON at that point as I recall.
NH: Well, we worked together for a number of months on the difference equations, and we had a long discussion as to whether to use triangular or quadrilateral zones. Chuck Leith went off and did an experimental version called SIMON which used triangular zones. And SIMON continued to be not exactly a benchmark, but sort of a touchstone or an alternative way of doing physics for quite a number of years. I believe it was developed in special ways to fill certain physics niches. I don't recall.
GAM: You're right. I remember that, too.
NH: Anyway, this other big code became more the production code. Bill Schultz was in charge of all of the detailed equations, and Leo Collins wrote the code. I did a little bit of peripheral I/O routines, but nothing significant there. Again, Roland Herbst was the real physicist on the thing. He was the only one who knew all the physics and how it related to other things. Mike May wasn't quite so active in the 704 version as he had been on the 701.
After we discovered that the new FORTRAN would not handle our code, we went off and essentially wrote it in assembler, but the fact that it had been in FORTRAN was still quite an asset, because we knew what code to write in assembler. So we got much of the benefit from a higher-level language despite the fact that our compiler wouldn't work. Eventually, there was a FORTRAN version of this big code, although for a long time assembler versions existed. For every machine, the assembler versions were the workhorses, and the FORTRAN version was the definition of what the program was really supposed to do, as I recall. I may be a little bit vague here.
GAM: Well, that's about right.
NH: And I consider this a very valuable thing, because you could do exploratory physics in FORTRAN very quickly, find out what the results would be, and if it was profitable, then you could retrofit the assembler versions to do the new physics efficiently.
Some of the fun things—the IBM 704 had a 780 display device and a 740 film recorder. So, I had a great deal of fun writing programs to record on film the geometry inherent in the Lagrangian version of this calculation.
GAM: Oh, I should rush in and say that there was this absolutely seminal idea that you contributed in a sense in PLOTLA. Do you remember that?
NH: Oh yes! That is an idea that has been invented innumerable times. I'm not sure this was the first.
GAM: I understand. To us amateurs, at least, that was the first time I ever thought of that, and it was just exciting as hell!
NH: Yes, it seemed to me like a hard problem. And suddenly when you sort of divided it in the normal divide-and-conquer way, it suddenly occurred to everyone that it was simple.
GAM: Well, I don't know. I will say this: You wrote PLOTLA, and it was the fastest thing that was on the 704, etc., and it was an absolutely opaque piece of coding.
NH: Oh, yes, the little code to plot a line segment.
GAM: With a variable number of dots.
NH: Yes, it figured out how many dots to put on a line with obscure code.
GAM: There were no divides or anything like that in it. It was just a really interesting piece of code. Bob Cralle and I were both dumbfounded with what you told us was called the Minkowski metric—a lattice—because we'd never heard of it before. It was really great.
NH: A little bit of applied Banach algebra—only ten instructions' worth.
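For readers who never saw PLOTLA, the idea GAM is marveling at can be sketched in a few lines: on a dot lattice, the number of dots a segment needs is just the Chebyshev (L-infinity) distance between its endpoints, and an error-accumulating loop places them with no divides at all. This is a modern Bresenham-style reconstruction of the idea, not Hardy's actual ten instructions:

```python
def plot_segment(x0, y0, x1, y1):
    """Return the lattice dots of a segment using only adds and compares.

    The dot count is the Chebyshev (L-infinity) distance between the
    endpoints -- the "lattice metric" idea above.  The error term is
    kept doubled so the loop needs no division (the classic integer
    Bresenham formulation).
    """
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    n = max(dx, dy)              # number of steps: Chebyshev distance
    err = dx - dy
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        e2 = 2 * err
        if e2 > -dy:             # step along x
            err -= dy
            x += sx
        if e2 < dx:              # step along y
            err += dx
            y += sy
        points.append((x, y))
    return points
```

A segment from (0,0) to (3,1) gets max(3,1) + 1 = 4 dots, ending exactly on the far endpoint.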
GAM: Great. That's all I can say. Well, let me leap in here, because we're at about 1958. Do you remember that Nick Christofilos was talking about this high-altitude shot, and you made a stereo view of the electrons trapped in the dipole magnetic field? I have that film somewhere.
NH: Oh yes! Yes.
GAM: I still have those pieces of film around. And it was done in simple stereo but it was tremendously effective.
NH: Yes, that was unreasonably effective, and I never figured out why, but I think it was the little imperfections in plotting that appeared as little nodules on the line which allowed the eye to lock in. I hadn't planned it that way, but it turned out very well.
GAM: Oh, it's a remarkable piece of stuff.
NH: It was a good stereo effect of the path of an electron in the magnetic dipole of the Earth. Evidently, people who had worried about cosmic rays had been worrying about such things for years and years, but they hadn't had computers to plot these things, so they were interested, too.
GAM: Well, these were relativistic electrons that were trapped in the Earth's magnetic field. Well, it was just an incredible piece of stuff. That's what made this part of our adventure really exciting.
NH: Yes, it taught me a lot about physics. I hadn't realized that electrons would migrate around the Earth from west to east or whichever direction they go, but then I understood why.
GAM: And then they rattled back from pole to pole—my God!
NH: Yes, from pole to pole. Anyway, we were discovering some stuff that a lot of people had already known, but then we had pictures that they had not guessed at.
The graphics that we did for this program included the isotherms, which was an interesting, logical exercise. I think I'm getting a little bit ahead, but we had this Houston Fearless machine, which was big and expensive-looking, but since it was originally built for doing movie processing, it was not that expensive. And we could make sequences of black and white frames, which would be color-mixed on the Houston Fearless machine, so we could make color movies way back then. We made a few of those, and while it seemed to be an expensive toy, when we actually did the calculation it's easy to see that, contrasted with printing numbers on paper, which was the classic way of doing things, it was cheaper. So, it was pleasing to have something that was both fun and economical.
Chuck Leith made a beautiful use of this in doing his SIMON and his Weather code. And his Weather code and SIMON had a lot of common physics and mathematics in their background, and coding techniques.
GAM: Well, I'm crediting him with having inspired the movies. At one of the LMG (Livermore Megaton Group) talks he pulled this strip of film through the projector, and the image moved. It was just astounding! So we went off and did all that stuff.
NH: Yes, back when it took 24 microseconds to do a single, fixed-point "add," then having real-time graphics was not in the game. It would take maybe a minute or so to plot one frame of a movie.
GAM: Oh yes, easily.
NH: So it took computing power to produce those movies, but, again, less computing power than it would have taken the computer to convert these numbers to decimal, and do what was necessary to get them printed.
GAM: And they were quite opaque when they were printed. We tried to figure out what the hell they were.
NH: Yes. You needed the numbers sometimes, but not until you got the lay of the land by looking at the pictures.
GAM: Well, we shared an office—do you remember? And you were sitting in the office and said, "Why can't we use the CRT to control a beam and probe where the marks are on the film?"
NH: Oh, yes! I remember that.
GAM: That was the Eyeball, right?
NH: Yes. I didn't do much of the work on that, but I may have gotten the first idea, using the CRT as input instead of output. And then people took off on that idea and did all sorts of applications of that. I guess one of the ideas was analog equipment that ran during an explosion—nuclear or nonnuclear.
GAM: Well, the only thing we were doing is photographing all these A-scope and J-scope traces, as they were called. And our problem was to digitize those traces on the film, and calibrate them, etc.
NH: Yes, and turn the traces into numbers so the computers could understand them. From the test, we needed to capture analog signals that arrived in a few microseconds. The signals were displayed on CRTs and recorded on film. Digital circuitry was not fast enough to do the job then. The Eyeball read the film at its leisure.
GAM: Yes, and that was done for many, many years thereafter.
NH: Yes. It seems amazing, but I guess that was a good way to get numbers into the computer.
GAM: Well, a similar digitizing process was also the basis for Information International Incorporated initially—Ed Fredkin's company. He built the PFRs (Programmable Film Readers), as he called them. They were good machines.
NH: Yes. An interesting point that some readers may not know is that back in those days the graphic output of a machine was made by computing X and Y coordinates, putting them into a register, and giving a plot command as it were. This would plot a single point on the screen that would persist only with the help of the phosphor. We called this operation plotting one point.
GAM: It was done on both the IBM 704 and 709.
NH: Yes. The CRTs that we had back then plotted the point in 140 microseconds. That was as fast as you could plot them. This was not a line segment, that was one point. So you could figure out how much time it took to make a picture or a movie.
One other thing that happened about that time was the job of moving a description in one Lagrangian coordinate system—a description of some physics—to another Lagrangian coordinate system. This was usually done because the first Lagrangian system had gotten itself sort of tied in knots. The physics was still good, but the coordinate system was bad, and so people would devise a new coordinate system. At the time we began to do this, there were a few physicists who had the combination of knowledge and intestinal fortitude to do this by hand. And Jim Wilson was one of these people. And that was a damned shame, because it was dreadfully dull work. People like Jim Wilson had other, more profitable, things they could have been doing if this hadn't been so damned important.
So, I noticed what he was doing and went away just to figure out if there was a way that a computer could do it. And this was sort of a physics-like application that didn't look anything like solving differential equations�rezoning. We went through a couple or three stages of rezoning ideas before we got to ones that really outperformed Jim Wilson and his friends. But we finally did arrive at one that depended on the task of finding the area of the intersections of two triangles, or more accurately, the volume of the rotation of the intersection of two triangles. 
That's still an interesting job, and I think the technology may have been lost. It's on my stack of things to do�to make sure that people understand how to do that, because there are people off in the CAD/CAM business who sound like they still haven't been able to solve that problem in boundary cases.
GAM: I'm not surprised.
NH: And why it's difficult is too long to describe here. But, it turns out, the boundary conditions are hellacious. Anyway, that was fun, and we finally got efficient ways to do that. And that turned into a long-term application that was translated onto a number of machines, and persisted at the Lab for a number of years to my knowledge.
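The heart of the rezoner Hardy describes is the area of the intersection of two triangles. A minimal planar sketch, using standard Sutherland-Hodgman clipping of convex polygons plus the shoelace formula, looks like this; the Lab's code additionally computed volumes of rotation and handled the hellacious boundary cases, none of which is attempted here:

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: clip convex polygon `subject` against convex
    polygon `clipper` (both counterclockwise vertex lists)."""
    def inside(p, a, b):
        # p is on or to the left of the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p, q, a, b):
        # intersection of line p-q with line a-b (standard determinant form)
        dc = (a[0]-b[0], a[1]-b[1])
        dp = (p[0]-q[0], p[1]-q[1])
        n1 = a[0]*b[1] - a[1]*b[0]
        n2 = p[0]*q[1] - p[1]*q[0]
        denom = dc[0]*dp[1] - dc[1]*dp[0]
        return ((n1*dp[0] - n2*dc[0]) / denom,
                (n1*dp[1] - n2*dc[1]) / denom)

    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, out = out, []
        if not inp:
            break
        s = inp[-1]
        for e in inp:
            if inside(e, a, b):
                if not inside(s, a, b):
                    out.append(intersect(s, e, a, b))
                out.append(e)
            elif inside(s, a, b):
                out.append(intersect(s, e, a, b))
            s = e
    return out

def area(poly):
    """Shoelace formula for polygon area."""
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                         - poly[(i+1) % len(poly)][0]*poly[i][1]
                         for i in range(len(poly))))
```

Intersecting the triangle (0,0),(2,0),(0,2) with the triangle (0,0),(2,0),(2,2) yields the triangle (0,0),(2,0),(1,1), whose area is 1.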
One other interesting thing that happened very early—it must have been maybe the middle of '56—I don't remember—is that Ted Ross and I got interested in writing music programs.
GAM: I should say that Ted Ross was an IBM engineer.
NH: Yes. We wrote a routine. As I recall, we would make a card deck with one note per card describing the duration and the frequency on the card. And we laid these piled cards out sort of like a keyboard. Then we would pick up the cards in the order the music was to be played, to make a deck. And the 701 performed the music. This was in a time when to perform music required going into a loop, both for the duration of the note and the duration of the transitions between the cycles that the frequency of the note was built up out of.
We recorded that on a disk. Portable tape recorders were at that time even less common than disks. And I think I still have that record somewhere. We did one of Bach's partitas for solo violin, and that was a lot of fun.
GAM: I remember being in awe when I heard that. It was just great!
NH: I have a quick story to tell about Ted Ross. After the 704 had arrived in Livermore, the 701 went to UC Berkeley, and was retrofitted with a core memory.
GAM: Probably for Cunningham's astronomy students.
NH: Yes. And the core memory made the 701 a vastly more reliable computer. Ted Ross saw the NYAP assembly program from IBM for the 704, and realized that's what the 701 had needed. So he wrote one. I'd written a couple of subroutines, but he really wrote ninety-five percent of the assembler. And indeed that assembler was used at Berkeley.
UC Berkeley was looking forward to an upgrade of the 701 to the 704, and Ted Ross did something which I think was remarkable for its time: He wrote a cross-assembler, I believe. But what was significant was the emulator or simulator. The simulator ran on the 701 to simulate the 704, and it did demand paging. And this was, I believe, before we heard of demand paging being done at the Ferranti ATLAS in England. And so he could simulate, in the small memory of the 701, the substantially larger memory of the 704.
Another interesting project that I did—this was probably 1959 or '60, something like this—was the Koch curve, the Snowflake curve, one of the early fractals that was invented about the turn of the century. We got a CalComp plotter that would, I think, plot 100 bits an inch over 30-inch-wide paper. We would write the blow-by-blow instructions on a mag tape, and this pen would move around in two dimensions and plot what we wanted. FORTRAN wasn't recursive, so seven subroutines, each calling the next, were necessary. I think I still have a copy of that curve somewhere. But that was fun. It also led to the routines which we could use to drive the CalComp plotter from a FORTRAN program.
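The recursion that those seven chained FORTRAN subroutines unrolled by hand is easy to write in a language that allows a routine to call itself. A sketch (not the original code):

```python
import math

def koch(p0, p1, depth):
    """Return the vertex list of a Koch-curve refinement of segment p0-p1.

    Each level replaces a segment with four segments, erecting an
    equilateral bump on the middle third.  Seven fixed levels is what
    the seven chained subroutines gave; here `depth` is a parameter.
    """
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)                 # one-third point
    b = (x0 + 2 * dx, y0 + 2 * dy)         # two-thirds point
    # apex: middle third rotated +60 degrees about point a
    c = (a[0] + dx * 0.5 - dy * math.sqrt(3) / 2,
         a[1] + dy * 0.5 + dx * math.sqrt(3) / 2)
    pts = []
    for q0, q1 in ((p0, a), (a, c), (c, b), (b, p1)):
        pts.extend(koch(q0, q1, depth - 1)[:-1])   # drop duplicate joints
    pts.append(p1)
    return pts
```

At depth d the curve has 4^d + 1 vertices, so depth 7 (matching the seven subroutines) would produce 16,385 pen movements for the plotter tape.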
GAM: You know, I wrote them.
NH: Oh, you wrote that?
GAM: In fact, you gave me the algorithm which you called the "nice" number routine. Do you remember?
NH: Yes, yes.
GAM: That was an astounding discovery of mine also. It was great!
NH: While plotting this fractal, it was interesting to listen to the CalComp plotter because it was mechanical. It sounded—it was sort of growling; generally during plotting, the CalComp would be running continuously, making sounds with definite pitches. Plotting the fractals produced more of a Brownian motion, a Brownian sound, with no definite pitch.
Early on, when the 704 had been there for a while, I read the IPL-5 manual as I recall, which was the first time I had been introduced to the idea of linked lists. And that was sort of a blow, because it suddenly opened up all sorts of possibilities that hadn't occurred to me that a computer could do. You could think of a list without having prescribed beforehand how long that list was going to be. Before the linked-list idea came along, it was necessary to allocate an array, guessing how long the array would be, and hoping you were right. A linked list would allow a list to grow and shrink, just so long as the total memory of the computer wasn't exceeded.
Bernie Alder and Tom Wainwright had been doing hard Newtonian bouncing spheres in three dimensions, and I realized that it would be much quicker if a particle only had to consider the particles in the neighboring zones when computing impending collisions. And with lists, you could keep an array�a list from which you could know all the particles that were in a given zone, so that you could do this efficiently. So we wrote a version of STEP, which was the name of this program, that used such logic, and it was the first application of linked lists to a physics code that I knew of.
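The zone-list technique Hardy describes is what is now called a cell list: an array of list heads, one per zone, with each particle carrying the index of the next particle in its zone, so a collision search visits only a zone and its neighbors. A minimal sketch, with illustrative names rather than STEP's actual data structures:

```python
def build_cell_lists(positions, cell_size, n_cells):
    """Bin 2D particles into zones with flat-array linked lists.

    head[c] is the index of the first particle in cell c; next_[i] is
    the next particle in the same cell (-1 ends a chain).  Two flat
    integer arrays are all the storage needed -- the kind of linked
    list a 704-era code could afford.
    """
    head = [-1] * (n_cells * n_cells)
    next_ = [-1] * len(positions)
    for i, (x, y) in enumerate(positions):
        c = int(x / cell_size) + n_cells * int(y / cell_size)
        next_[i] = head[c]      # push particle i onto its cell's chain
        head[c] = i
    return head, next_

def particles_in_cell(head, next_, c):
    """Walk one cell's chain, yielding particle indices."""
    i = head[c]
    while i != -1:
        yield i
        i = next_[i]
```

Growing or shrinking a zone's population costs only pointer updates, with no guess beforehand about how many particles any zone will hold.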
About the same time, perhaps earlier, I wrote a program called MCA—for minimal code analysis. I remember the deck was five cards long. It would go in and obliterate the first 64 words of memory and, running in that space, transform memory so that it was possible to write onto a mag tape—which would later be fed to a printer—a printout of every location that referred to each given location. And I would not have known how to do that without linked lists. Essentially all the locations that pointed to a particular location were linked together on a list, and, due to the instruction format of the 704, that was a valuable debugging technique. You'd often run that, the MCA, after a compile, and that would be part of the program documentation.
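MCA's printout amounts to an inverted index of the instruction stream: for each location, the list of locations that reference it. MCA chained those entries through the 704's memory as linked lists; here is a sketch using a dictionary of lists in place of the five-card original:

```python
def cross_reference(program):
    """Build a cross-reference listing for a toy instruction stream.

    `program` maps each location to the location its instruction
    references; the result maps each referenced location to the sorted
    list of locations that point at it -- the listing MCA printed.
    """
    refs = {}
    for loc in sorted(program):
        target = program[loc]
        refs.setdefault(target, []).append(loc)
    return refs
```

For example, if locations 0 and 1 both reference location 5, the listing for location 5 reads [0, 1], which is exactly what you want when hunting for every instruction that can clobber a given word.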
One other interesting, brief episode—this is physics again: There was a modeling program that had originated at Los Alamos, I think. It ran on the 701. It was a lot of equations, and a small amount of logic, and we needed a 704 FORTRAN version. The 701 could write all intermediate calculations on tape. I forget how, but somehow we got those tapes converted so that the 704 could read them, they being incompatible otherwise. The technique was to have the new 704 program read this tape after producing each of these intermediate values and compare them one by one. That was a remarkably effective way to debug the FORTRAN program, because it led us immediately, with one short machine shot, to each new bug in the 704 program. And, indeed, in one intensive evening, we effectively got all the bugs out of the 704 FORTRAN.
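The tape-comparison trick generalizes to what is now called differential testing: run the trusted and the new implementation over the same inputs and stop at the first intermediate value that disagrees. A minimal sketch, with the tape handling omitted:

```python
def first_divergence(reference, candidate, tol=1e-12):
    """Compare a new implementation's intermediate values against a
    trusted trace, value by value.

    Returns (step, reference_value, candidate_value) at the first
    mismatch, or None if the traces agree -- pointing straight at the
    next bug, as the 701-tape comparison did.
    """
    for step, (ref, new) in enumerate(zip(reference, candidate)):
        if abs(ref - new) > tol:
            return step, ref, new
    return None
```

Each run points at exactly one bug; fix it, rerun, and repeat until the traces agree end to end, which is how one intensive evening could finish the job.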
GAM: That was with Bill Lindley.
NH: Yes, Bill Lindley did most of the work on that. That was very stimulating, the fact that, hey, this is a new way to debug.
Bill Lindley and I worked on something that did not come to pass that was interesting. It was to have been a 2D code that included neutron transport. It would perhaps have surpassed the capacity of the 704, but it surpassed our ability to write code. My part of the project was too big. I bit off more than I could chew. We had all of the logistics of how you pass the tapes to get all of this data to come through at the right times, but there was just too much physics to do. Bill completed much of his part of the task, but mine was too much to do.
GAM: Well, it was too much to do in the limited memory, I think is the fairest thing to say. We do it today.
NH: Well, that's probably true. Yes, it's been done subsequently, but we got discouraged, and part of the discouragement was the size of the core memory.
GAM: You couldn't shoehorn everything into the memory. It's a very difficult thing; it still is, but we have bigger memories now.
NH: Yes, well you probably do it without an intermediate I/O these days.
NH: That reminds me: there was a hydrodynamics precursor to this production code, which ran on the UNIVAC 1. Then the 704 came. I remember the morning after we got the first evening's worth of production on the 704 program. Johnny Foster came running into the room, I can remember, and said, "We got a month's worth of work done last night!" I can still remember how pleased he was at that.
GAM: That's great.
NH: On the UNIVAC 1, you couldn't even get a row in. And so every time you wanted to move onto the next point, it took a tape I/O. So it was a few points a second at best. I'm not even sure that it was that fast. There were about five hundred calculations probably per point per cycle.
GAM: And there were a thousand words of memory that you had to fit all this stuff into. The 704 was considerably better than that.
NH: Yes, the 704 was a screamer. It had floating point. It had index registers. It had enough memory to hold a row. It seemed like falling off a log at that point.
GAM: Yes, I thought I'd died and gone to heaven.
NH: Yes. I seem to remember that Val Kransky wrote the SONNET program.
GAM: Why don't you take us back to your adventuring with IBM on the STRETCH stuff?
NH: Ah, yes. Well, the LARC came. I never had very much to do on the LARC. I did have the great opportunity to go back and study the logic diagrams of the LARC back in '59 or so. I don't recall the reason that was given, but they thought that it would be useful for a programmer to know the logic design. I'm not sure it did the Lab any good, but it did my understanding of computers a great deal of good to see the detailed design of a LARC.
But after it came to Livermore, I never had a lot to do with the LARC. IBM had contracted to deliver the STRETCH to Los Alamos. IBM called it both the STRETCH and the 7030. And as was the pattern, Livermore contracted to get a STRETCH, perhaps the fourth one produced. I believe the HARVEST included the second or the third STRETCH. I'll talk a little bit about that later.
GAM: All right.
NH: Since the Lab had a contract to get a STRETCH, I went back�sort of on loan, although I was employed by IBM�I went back for a year and a half to work on the STRETCH. And the STRETCH was an interesting machine. It had a 300-nanosecond clock cycle, and it would do a floating point add in 3 cycles and a multiply in 6 cycles. It had a 2.1-microsecond core memory, which was the precursor to the 7090's memory. The STRETCH was delivered with seven million bits of memory, which was unheard of at the time. It had ECC (Error Checking and Correction). It was a big machine. (As delivered to the Lab, the STRETCH memory was 98,304 words, each 64 bits plus 8 bits for ECC, leading to a total of more than seven million bits. Awesome!)
I went back to IBM and worked on assemblers and compilers for the STRETCH; I also worked on a HARVEST which was delivered to NSA. It was much bigger than the STRETCH, and it had a trillion bits of mag tape, which was also unheard of at the time. 
GAM: That was tractor tape. Well, I remember you telling me when we met once that you'd managed to write a hydrodynamics code for the HARVEST.
NH: Well, that actually didn't get written, but it was planned. It looked to see if there was any way to use the peculiar character-manipulating ability of the HARVEST to do physics codes. We did an outline of how to do it. I don't know whether it would have been successful. There were severe questions about how much precision you really needed.
We had some novel ideas of how to deal with small precision in doing mesh codes, but we didn't really try those out. It would still be interesting to try it out. I guess there's a lot of other sources for information on the STRETCH and the HARVEST.
GAM: There is, maybe, but anything you can deliver is appreciated.
NH: I have a HARVEST manual. It was not classified�that was sort of a borderline decision.
GAM: Well, we certainly would be interested in that, yes.
NH: There are a few HARVEST manuals in the world.
GAM: Very few. I talked to a Charlie Franzine. Do you remember him?
GAM: He was at IBM, Poughkeepsie, and he was very kind to me. He got me some history books about IBM's early stuff, and I had tried to get a HARVEST manual from him and learned it was no more.
NH: Oh. Well, okay. I recall the decision to declassify it bore on doing a global substitution of one word for another, so that's how it was published. It's a programming manual for the HARVEST, so it goes into a lot of detail. 
GAM: Well, you had a lot to do with Harwood Kolsky along that era, too, didn't you?
NH: Yes. Harwood Kolsky worked for IBM. He had come from Los Alamos, and he knew a lot about the physics and the programming styles of Los Alamos and Livermore. So he essentially worked in the marketing arm, helping IBM figure out what kinds of machines would be useful to sell to Livermore and such places.
GAM: Well, I remember in that same period—that was 1959—we went back on this sweep through the east, and we visited John Cocke there. He took us into the machine room where they were trying to build the STRETCH then. And I remember that they had left out the golf-ball typewriter, uncovered, and so we saw it much sooner than we should have.
NH: Oh, yes, the golf-ball typewriter was a big success for IBM. But it was delivered about a year later than they thought it would be. The STRETCH designers had included it as part of the design of the operator's console. And IBM was paranoid—I think they still are—about preannouncing things. So, all the pictures of the STRETCH console would have someone strategically standing so that you couldn't see the typewriter.
GAM: Well, one of the things that impressed me is that we saw the thing before Ed Vorhees down at Los Alamos.
NH: Oh, yes, I can remember. Vorhees gave us a hard time about that. I think he gave IBM a hard time about it.
GAM: Well, I think there were some bad decisions about the golf-ball typewriter, but generally it was a big success for IBM.
NH: Yes, it was a big success. It perhaps wasn't very effective as an operator's console, but as a typewriter it set the standard. I mean that after it came out, not many other electromechanical typewriters were sold as I recall.
GAM: I'll say. So, you did this stuff on the STRETCH and the HARVEST. For the moment here, let's jump forward a little bit to when you went back for another adventure just across the bay to the ACS.
NH: Ah, yes. This is sort of out of sequence, but I came back to Livermore, which we'll talk more about. Later on, in '66, '67, something like that, IBM set up an organization in Menlo Park on Sand Hill Road to design a supercomputer. The name of this computer was ACS, for Advanced Computer System. Had it been built, their plan was to deliver it in '70 or '71, something like that. It would have been an impressive computer even by today's standards, something like five hundred instructions a microsecond at peak. No one suggested that it could sustain that. The clock cycle would have been 10 nanoseconds, and it was superscalar, issuing up to five instructions per cycle, with a 48-bit word.
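The figures Norm quotes are internally consistent, as a quick back-of-the-envelope check shows (the variable names here are just for illustration):

```python
# Back-of-the-envelope check of the ACS peak rate quoted above.
clock_ns = 10                      # stated clock cycle: 10 nanoseconds
cycles_per_us = 1000 / clock_ns    # 100 clock cycles per microsecond
issue_width = 5                    # superscalar: up to 5 instructions per cycle
peak_instructions_per_us = cycles_per_us * issue_width
print(peak_instructions_per_us)    # 500.0 -- "five hundred instructions a microsecond"
```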
GAM: Yes, I remember you telling me that the coaxial cable they were using would have made a spider proud if he could have spun the thing that small.
NH: Yes, the idea was to put these chips down without the normal plastic containers—an array of chips 200 mils between chip centers. The array would put five chips per inch in both dimensions, going over something like a square meter.
NH: And there would be wires connecting the bonding pads of these chips. I think each chip had a hundred bonding pads on each of its four sides, and wires would be pressure-welded to these bonding pads. Any wire that was longer than 1.2 inches would be coaxial. And the difference between coaxial and the noncoaxial wires is that you could see the coaxial wires. By normal definitions, the noncoaxial wires were invisible. If I got up close, I could see them.
NH: But it was a little bit like mold: Mold is little fibers that you can see if you look very closely.
The device that did the pressure welding was to be delivered with each machine, because it would be needed to do upgrades and fix bugs. That device would also carry the probe used to debug the machine and find transient errors, just as scopes were used in those days to find out why a machine wasn't working right. The idea that a machine might go through its economic lifetime never needing to be repaired was still not dreamt of.
GAM: Right. Well, I remember visiting your office over there, and seeing about an eight-foot-high stack of manuals that had been produced by CSC (Computer Sciences Corporation in El Segundo, California). They were doing the operating system or something like that?
NH: Oh, yes, CSC was planning to do the production compiler. The compiler research was very much an IBM thing, but CSC had built some good compilers for IBM, doing sophisticated FORTRAN compilers for IBM computers. Fran Allen and John Cocke were stationed in California, and they had a small crew which was doing the compiler development for this machine. IBM at that time was doing something that is often preached and seldom practiced these days: The compiler development went on concurrently with the hardware development, and they fed into each other. They considered the compiler design to be an integral part of the designed machine. You don't build a machine until you know that you can produce a compiler for it.
GAM: Yes, that's great.
NH: Jack Bertran and Max Paley were there.
GAM: And Gene Amdahl.
NH: Gene Amdahl was there. Oh, the FORTRAN guy who did the first—
GAM: John Backus.
NH: John Backus had an office there. He was not frequently active in the design. But this was really a high-horsepower IBM group.
GAM: I'll say.
NH: And, indeed, many of the people subsequently went back and worked on the machine that became the RS/6000—well, that's a long story. It was truly a "RISC" (reduced instruction set computer) machine. It was superscalar, and the instructions were rather simple.
GAM: Why don't you take a moment and define what you mean by superscalar here?
NH: Well, it was the first design that I had heard of with the idea of issuing more than one instruction per clock cycle.
NH: I can recall thinking that that was barely credible at the beginning. It turns out, in retrospect, to be the right way to build a machine.
NH: But I can remember being quite in awe of the very idea�in fact, I thought that executing one instruction per clock cycle was quite remarkable. The LARC would require at least eight, and the STRETCH would require at least three clocks to issue an instruction. Let me just see if there are other important things to say about the ACS project.
GAM: Well, it foundered, and I remember you telling me that there was this big argument about whether you wanted to build a machine that was x-percent reliable or change it to the architecture of the 360 line and get more reliability. I thought that was a really interesting argument. Eugene Fubini is the guy who killed that project, right?
NH: I don't recall directly now what the politics were. I recall hearing that the machine would have been something like a $100-million machine. Now, there are many senses of cost. There are prices and what it cost to build, and what it cost to develop, and I don't know quite what the $100-million figure was. I believe that IBM ultimately decided that they couldn't find a price for which the demand would warrant finishing the development. They stopped the development before the really expensive stuff began. Herb Shore was the chief architect.
GAM: Yes, and he's the one who mentioned that it was a $100-million development adventure, which was kind of rough for IBM at that point, because they were struggling with the 360, too.
NH: Yes. This was before the IBM 360 had, I guess, become a real cash cow.
GAM: But in any event, you had some remarkable adventures there.
NH: Yes, I learned a lot about computer design there. I learned about computer circuitry by studying the LARC. But I learned about the process of designing by being at ACS. I was in the software area; but on the ACS project, the operating system designers and the compiler designers got to work with the engineers to design the machine. So that was very stimulating.
GAM: So when that project died, you came back to the Lab?
NH: No, I went to Tymshare.
GAM: Okay, I think that would be a nice thing for us to switch to. Now, we go back to 1960, '61, '62, when you went to spend the summer at MIT on Project MAC (Multiple Access Computer)—the CTSS thing.
NH: Oh, yes, CTSS. Jim Slagle had been invited to go back and study—
GAM: Artificial intelligence?
NH: No, the Project MAC. CTSS, which stood for compatible time-sharing system, had just come up. It ran on a 7090 retrofitted to have two boxes of memory, which gave it over two million bits instead of one. Essentially the operating system ran in one box, and user programs ran in the other. It had a random access disk, which was still uncommon for the day. It had a 7770, which was an incredibly baroque attachment by which you could connect Teletypes to an IBM 7090. We used Model 28 Teletypes at the time, shifted Baudot code. Jim Slagle couldn't make it, so I was nominated to take his place. We had talked about time-sharing systems at Livermore.
GAM: Yes, you and I did, even before the STRETCH was delivered we talked about it, and we came up with the Octopus idea and all that stuff.
NH: Oh, we should talk about Octopus, yes. In a separate pass, yes.
GAM: We'll do that later.
NH: So the idea was familiar to us, but it was a very "research-y" idea. We hadn't seriously talked about doing anything at the time. We talked about what you might do, but hadn't really lobbied to start, as I recall.
GAM: Well, Sid wasn't really going to let us, is what it amounts to.
NH: That's true, that's true.
GAM: We wanted to.
NH: We wanted to, but we�yes, that's true. So, Project MAC had done something more or less along the lines that we'd been thinking of. And it was up and running. So Project MAC invited a bunch of people to come in and see what they'd done. So I got this chance to go back.
Incidentally, Gene Amdahl had also been invited back, and so Gene Amdahl and I—I think it was for six weeks—shared an office, which was great good luck. Part of the day we would get lectures, and part of the day we would go play with the CTSS system. Some of the lectures were on the current system, and some of the lectures were on the MULTICS, which wasn't running but was being designed at the time.
GAM: Oh, that surprises me. I thought the CTSS was there, and MAC was being designed. Yes, that surprises me, because MULTICS came after we had our time-sharing system running at the Lab.
NH: Oh, yes. But recall that MULTICS was under design for years and years and years before it existed.
GAM: Oh, I didn't know that. Okay. So Vyssotsky from Bell Labs�
NH: Oh, Vyssotsky was there?
GAM: Yes. And you, and Marvin Minsky, and a few others. I used to have a copy of the paper you wrote while you were back there, but it's out of print now, I'll tell you. And I don't know where mine is.
NH: Ah, I think I remember vaguely something like that. It's conceivable—it may have been on a subsequent trip that I learned about MULTICS—but maybe it doesn't make much difference. Anyway, we got lectures from Elliott Organick, which always seemed to be disorganized. Every time Organick would jump into something, he would say, "Ah, but before I can describe this, I must describe that." Later on, after the series of lectures was finished, I decided that was his pedagogical style. I think that was his way of letting you know what the organization of the system was.
GAM: The interrupt system of lectures.
NH: Yes. But you always felt that you were ten interrupts deep before he got down to the details. Anyway, it must have been successful, because I remember to this day a great many of the details about how MULTICS worked, and I think that's where I got the information. He also wrote a good book on the subject.
GAM: Yes, he did.
NH: —which is out of print, unfortunately.
GAM: Which I have.
NH: Oh, if you have a copy of that, I know of someone who would love to get a photocopy of that.
GAM: Well, I'll have to locate it. I don't really know where it is, but it's around here somewhere.
NH: Okay. I've lost my copy. So, I wrote a little math code, typed it into the computer, and got it debugged and run. It was a funny language, but you could learn it in a couple of days. It was sort of a stripped-down FORTRAN. It compiled and ran, and you could sit at your terminal and put in print statements to see how it worked, very much like you do these days. No real debugger, but you didn't really need a debugger. It was a subscript-safe language. When the program broke, it broke in its own level of abstraction, so it was easy to use.
GAM: Well, you had DDT (Dynamic Debugging Technique), effectively, on it.
NH: No, not on CTSS. Anyway, it was a very productive environment, just as it was supposed to be, despite the fact that you could only have one or two programs in core at a time. And, oh, I don't know, there were probably twenty people on at a time, and it was quite useful.
GAM: So, you came back here after that summer, and we began actual development of Octopus.
NH: Yes. We began the actual design of a time-sharing system for the 6600, substantially influenced by what we had seen at Project MAC, and influenced by—
GAM: The stuff at Berkeley?
NH: —the stuff that we had heard was being designed for MULTICS, although there were many ideas from MULTICS that wouldn't work for the 6600, so that was a weaker influence. The MULTICS ideas were interesting, but only a few of them would fit the 6600.
GAM: This is from the architecture point?
GAM: Well, to get us back to the 1960s—before the STRETCH got here, I remember walking out one night, we were going to the Hungry Truck—do you remember?
NH: Yes, probably about 3:00 a.m.
GAM: And you were saying, "I don't see how we can use the STRETCH efficiently unless we do some time sharing or multiprogramming." I've forgotten exactly which word you used, but it was from that comment that we started the idea of Octopus and time sharing.
NH: Ah, yes.
GAM: And I remember Sid fought it for quite a while. And then, all of a sudden he let us do it.
NH: Yes. Yes.
GAM: Well, he let you do it. He had this little group of Abbott and Tokubo—
NH: Shigeru Tokubo, Clifford Plopper, and Bob Abbott, and I think that was—
GAM: And Ed Nelson.
NH: Oh, Ed Nelson, and there was another—Fraser Bonnell?
GAM: Yes, and there was a woman, too. I've forgotten her name. Juliette something. We'll have to ask Ed Nelson or Shig, one or the other.
NH: Okay. Yes, I think so.
GAM: Anyway, you served as their architect.
NH: Yes, I was essentially the architect on that, although there were very substantial amounts of details in various areas for which I didn't do the architecture. I wrote code that ran in the PPUs, but I didn't do any of the mainframe code of the operating system. I think I wrote part of an editor.
GAM: I've forgotten the name of it now.
NH: Oh, there was Barbara Schell who wrote some of the—I think she wrote an editor, the first editor.
GAM: But you had one that was really a bare-bones thing.
NH: Oh, there was NAB.
GAM: NAB, that's it.
NH: Yes, an editor.
GAM: And then you had this what I thought of as brilliant use of a PPU to collect the bits from twelve Teletypes or something like that.
NH: Oh, yes. We should mention that. There was a fellow from NSA who mentioned to me, a year or so earlier, a UNIVAC computer that they had programmed to do an input instruction, and the 24 bits of the word that were input came from 24 different Teletype lines, reflecting the respective instantaneous states of those 24 Teletype lines. This was much cheaper—the alternative was to spend several thousand dollars' worth of equipment for each one of these Teletypes, trying to collect the bits out of the characters and put them in such a way that the computer could get them. There was a similar mechanism for output. And I thought this was an incredibly clever idea. So I programmed this for one of the PPUs. This would allow us to attach 48 Model 33 Teletypes to the 6600. Bob Wyman designed and built, or at least designed, the hardware part of that. So the hardware part would only have to worry about 1 bit—
GAM: Per line.
NH: —per Teletype at a time. One bit in and one bit out. We used current-mode Teletypes.
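The scheme Norm describes amounts to what would now be called a software UART: sample every line in parallel at several times the bit rate, and let a per-line software routine assemble start bit, data bits, and stop bit into characters. Here is a minimal sketch of that idea; the function names, framing parameters, and oversampling factor are illustrative, not taken from the actual PPU code.

```python
# Sketch of PPU-style terminal multiplexing: each sampled input word carries
# the instantaneous level of every Teletype line, and software deserializes
# each line independently. All names and parameters here are illustrative.

def deserialize_line(samples, bits_per_char=8, oversample=3):
    """Decode one line's stream of level samples (1 = mark/idle, 0 = space).

    The line is sampled `oversample` times per bit cell; a character is a
    start bit (space), `bits_per_char` data bits (LSB first), and a stop bit.
    """
    chars, i = [], 0
    while i < len(samples):
        if samples[i] == 1:            # idle (mark) level: keep scanning
            i += 1
            continue
        # Found a start bit; sample each following bit cell near its center.
        center = i + oversample // 2
        value = 0
        for bit in range(bits_per_char):
            pos = center + (1 + bit) * oversample
            if pos < len(samples) and samples[pos]:
                value |= 1 << bit
        chars.append(value)
        # Resume at the stop bit, which reads as idle and is skipped above.
        i = center + (1 + bits_per_char) * oversample
    return chars

def encode_char(value, bits_per_char=8, oversample=3):
    """Produce the sample stream for one character (useful for testing)."""
    bits = [0] + [(value >> b) & 1 for b in range(bits_per_char)] + [1]
    return [lvl for lvl in bits for _ in range(oversample)]
```

A quick exercise of the pair: `deserialize_line([1]*6 + encode_char(ord('A')) + [1]*6)` recovers `[65]`. The real trick, as in the NSA UNIVAC hack, is that one input instruction fetches a sample for every line at once, so one slow processor can afford to do this for dozens of Teletypes.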
GAM: Well it was just amazing to me that the bits come so quickly in my own experience, and yet they don't come quickly at all when you're dealing with a one-microsecond cycle.
NH: Yes, that's right. It was an interesting contrast with time scales that made this feasible. I understand that Seymour Cray was sort of appalled that you would attach such a slow device to such a fast computer. I don't know whether the fact that we attached 48 of them at once appeased him at all, but�
GAM: He should be less appalled in any event!
NH: Yes. But, in any case, it was the only way we knew how to attach terminals to our computer, and so we did it that way. That, incidentally, was a precursor to the ways that we handled terminals in Tymnet, which was to be reinvented and reincarnated several times over for another whole decade—decade or two, I don't know which.
GAM: So, as I remember it this adventure on the time-sharing system for the 6600 got started in either late '62 or early '63, and was running when the machine got here.
NH: No, it didn't run very well when the machine got here.
GAM: Well, it didn't run very well, but it ran.
NH: Yes, it was a good number of months—I'll say this for Sid Fernbach: Sid gave us a lot of good machine time to get that debugged after the machine was here.
NH: One of the interesting political things that we did was to host the big production code. We worked closely with the code designers to make sure that the time-sharing system would support it early, so that while we were taking all this prime time to get the time-sharing system running, it was very often possible to run the production code while we were scratching our heads. Later on, as the time-sharing system came to be functional, the fact that the production code could run while the time-sharing users were using the machine was what kept time-sharing up during the prime shift; after all, the reason we had bought the computer was to run big codes like this. Supporting the production jobs was the strategy that let us schedule time-sharing at all.
GAM: Yes. Well, I remember the summer after it sort of became stable, Ed Nelson came back from school, and we did a watch bird program in one of the PPUs. Do you remember that? We used it to sample the behavior of the system�to tell what the various parts were doing and what the duty cycles were. That was very instructive. It gave us a thirty-percent boost in performance, just like that, because we knew where it was going.
NH: Yes. I had forgotten that, but I remember now.
GAM: I don't know if I've got a copy of that report anymore. Well, you know, given that this was successful, the first thing that happened in consequence, in a way, is that this big production code got to be too big. And more time-sharers got to be there, so there was always this fight about who got the machine. Now, do you remember talking about the technique for bidding?
NH: Oh, yes! Yes. The CDC 6600 time-sharing system was unusual in that the user would specify—in essence, bid for—processor cycles. And on a command line he could, or perhaps had to, specify a time to run and a value to be placed on it. And the priority would be computed by the computer as the ratio of the value to the time, and instead of time slicing, the program with the highest such ratio would run. You can work out the details but, in essence, if I had something that was very valuable to me and was only going to take a few seconds to run, I could run it over a typical high-priority production job, because production jobs ran hours. And so they had to divide by a much, much larger number. And there were quite a few flaws in that model, but basically it served well as the—
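The bidding rule Norm describes fits in a few lines. This is an illustration of the value-to-time ratio idea, not the actual 6600 scheduler; the job names and numbers are made up.

```python
# Minimal sketch of the bidding rule: the job with the highest ratio of
# declared value to declared run time runs next. Illustrative only.

def next_job(jobs):
    """Pick the job with the highest value/time ratio.

    `jobs` is a list of (name, value, expected_seconds) tuples.
    """
    return max(jobs, key=lambda j: j[1] / j[2])[0]

jobs = [
    ("production", 1000.0, 4 * 3600),  # very valuable, but hours long
    ("quick-edit",    5.0,       10),  # modest value, but only seconds
]
# 5/10 = 0.5 beats 1000/14400, so the short job runs ahead of production,
# exactly the effect Norm describes.
```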
GAM: As a paradigm. But, you know, I don't agree that there were flaws so much as it was a misuse. The fact that the computer floor supervisor could issue time to whoever asked for it meant that it was just like in our government: inflation. And you couldn't depend on a conserved quantity like time. If you spend your time too fast, you know—
NH: Well, I don't quite recall the chronology, but I recall there were periods of time—there were people, for instance in A Division—oh, you couldn't spend money that you didn't have. And one of the things you could do with your bank account is donate it to someone else.
NH: And there were people in A Division and B Division whose job it would be to own the respective bank accounts, and dole them out.
GAM: And then divide it, yes.
NH: But I don't recall that they could create money. They could only transfer money.
GAM: You could call the time administrator and get a little more time to save your files.
NH: But you had to get it from the guy who owned some money.
GAM: Yes, but the point was, if you added up all the time that was given out, it was more than twenty-four hours a day.
NH: I think there were some limits on it. I don't recall. It would be interesting to go back.
GAM: I remember bitterly about that, because I thought that the bidding system was a good idea, provided time was conserved. And remember we talked about, well, if the guy doesn't use the time, let's drain it out of his bank account and put it into a general fund, etc.?
GAM: Well, that couldn't work if they kept giving time away.
NH: Yes, it's clear that it has to be conserved to work well.
GAM: Yes. The original time-sharing system that got started initially was called GOB—Generous Omnipotent Benefactor, or something like that. And it was written in a maverick macro language that was developed by Bob Abbott. And we've heard from Hans Bruijnes how he wanted, and finally got, the entire time-sharing system put into FORTRAN so he could "move it from machine to machine." And there was some merit to that, I suppose. There was less loss of efficiency than one might have thought. But remember one of the things we started noticing after the 6600 started to run a lot was that there was never enough memory.
GAM: Never enough places to store stuff. Do you remember then what you did? Do you remember we talked about Photostore and the precursor to that?
NH: Oh, Photostore! Photostore, yes. George and I started worrying about—let's see, we talked about networks of computers, we talked about time-sharing, and now this is somewhat before the 6600. We're going back in time some.
GAM: Yes, right.
NH: We began to dream about a fancy scheme where computers would be tied together, and people would access them through terminals. I think I'm not doing too much reconstruction here. And one of the critical components that was required was a large, online storage facility. It would be called a file server these days, but we didn't have such names then. It was necessary to have a computer that would access this file storage, and ship it over to the various worker machines.
GAM: Workers, right.
NH: It's essentially what is now called a LAN, a local area network. Back in those days, our plans—we didn't have anything planned very much like an Ethernet, but we did have a central computer that would have interfaces to a number of peripheral computers.
GAM: Well, the very first dream we had was of using this Burroughs disk with 7700 heads on it, remember?
NH: Yes. The idea behind the fixed-head device is a technique that has come to be called slot-sorting. In one revolution of the disk you could support several accesses, because you could switch heads several times, and this would give you many accesses per unit time for small pieces of data. The computer that was selected to do this was the PDP-6. Now the chronology is getting a bit confused in my head, but perhaps we should just talk now and sort later.
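Slot-sorting, as Norm sketches it, amounts to ordering pending requests by how soon their angular slot will pass under the fixed heads, so several can be serviced in a single revolution. A simplified illustration, with hypothetical slot counts and request names:

```python
# Sketch of slot-sorting on a fixed-head disk: with no seek arm, the only
# latency is rotational, so pending requests are ordered by how soon their
# angular slot rotates under the heads. Purely illustrative.

def service_order(pending, current_slot, slots_per_rev):
    """Order requests by rotational delay from the current head position.

    `pending` is a list of (request_id, slot) pairs; returns request ids in
    the order they can be serviced, possibly several per revolution.
    """
    def delay(req):
        return (req[1] - current_slot) % slots_per_rev
    return [r[0] for r in sorted(pending, key=delay)]

# Heads are at slot 10 of 64; three requests scattered around the track.
reqs = [("A", 5), ("B", 12), ("C", 40)]
# B (slot 12) comes up first, then C (40), then A (5) on the wrap-around,
# all within one revolution if the transfers are short enough.
```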
GAM: Yes. Well, we started with a PDP-1 and realized that it wasn't going to be strong enough. We went to the PDP-6.
NH: The PDP-6 had a 36-bit word. It had 18-bit addresses, which seemed large to us. It had a nice instruction set. We anticipated that it would be easy to interface to, as had been the PDP-1. Indeed it was.
GAM: It's true.
NH: So it was chosen to be this machine which would be the go-between, it would be sort of the central point through which all communications would go.
NH: It would have access to the central store, and it would be able to shovel data off to any of the other worker-computers. I know where we ended up plugging in the terminals, but I don't recall where we had planned to plug in the terminals.
GAM: They were supposed to be plugged into a PDP-1 or something like it. And then there was supposed to be on the disk—the messages would go on the disk and then be transferred to the machines involved, a line at a time.
GAM: And we had this thing called the basement tape—do you remember that?
NH: Oh yes! The basement tape was to have been, say, a thousand incredibly simple tape drives. The tape drive would have consisted of two reels, a simple motor on each reel, and a ballistic motor, in the sense that you'd turn it and it would turn through inertia. No take-ups—
GAM: Idle arms and stuff like that, tape loops.
NH: No tape loops. Just�the tape would come off a reel, go over the head, and go onto the next reel. The TX-2 had something like that at MIT. DEC's "DEC-tapes" were inexpensive descendants of this idea. (Both of these tape systems came from Tom Stockebrand's ideas. Tom was a staff member at MIT Lincoln Laboratory and subsequently joined DEC, where he produced the DEC-tape.)
NH: It would be up to software to accelerate and decelerate these reels so they wouldn't come off and would get to the right spot. You'd turn on the read heads when the data that you wanted was moving over the heads. And we had worked this out in some detail. We talked to Rabinow Engineering about building such a thing. And talking to Jacob Rabinow was one of the high points in my career.
GAM: Yes. Same here—it was an exciting thing.
NH: And he told us how he invented things.
GAM: Well, some of the stuff that he did was just utterly brilliant!
NH: I hope someone writes a book about him. I don't know enough about him myself, but there's certainly a book lurking.
GAM: He's a fertile person. Do you remember this thing where he had a flywheel on the end of the motor?
GAM: And then you could clamp it and store the energy up in torsion on the shaft? And then it would reverse in a millisecond or something like that, at full speed and so on.
NH: Yes. This little sewing machine motor could reverse in a millisecond, and in a quiet room you couldn't hear it reversing, because it would go through this reversal in about a millisecond. And Jacob said, "The biggest loss of energy was the skidding balls in the ball bearing."
GAM: He was impressive.
NH: Yes. He had many other interesting inventions.
GAM: I think it should be added, though, that we didn't get a basement tape system from them, unfortunately. Well, they were being conservative: they cut its performance by a factor of four and raised its price by the same factor.
GAM: So then we went to IBM, essentially. Well, we went out for a bid that ultimately got us the Photostore, delivered in 1967.
NH: Yes. We talked to several people in IBM�two people that I recall, one in research.
GAM: Gil King.
NH: Gil King, yes. They had been building a special-purpose memory device to hold a—
GAM: For a translation project, yes.
NH: —to hold an English-to-Russian and Russian-to-English dictionary on digital film. And that digital technology seemed promising, and ultimately—well, there were a couple of competing projects using similar technology. And that was one of them that fed into the Photostore, which, to make a long story short, was a trillion-bit, write-once memory, with replaceable media, so that you could spill off the medium to shelf storage. But the online medium was a trillion bits. IBM called it the 1360 Storage System.
NH: It was a plumbing nightmare, because there was a fully automated photographic processing system inside.
GAM: Yes, a processing station.
NH: And these little chips of film would be moved into a vacuum chamber where an electron beam would write on them directly—this was silver film that was written not with light but with electrons. And it could be moved out of the vacuum chamber into a more or less classical development process, and be put into these cells a little bit smaller than a package of cigarettes, and automatically retrieved from there to be moved into a flying-spot CRT scanner to read the data. And it worked!
GAM: It worked beautifully.
NH: It worked quite reliably.
GAM: I think that there was a guy at Poughkeepsie who devised the error-checking code.
NH: Oh, yes!
GAM: One of the fundamental successes of that machine was in that error-checking code.
NH: Yes. In fact, the machine had two or three levels of error checking, and one or two of the levels were invented for the purpose and went on to become important elsewhere.
NH: Was that T. C. Chen?
GAM: No, not T. C. Chen. No. Another guy. I have this book around here somewhere about error-correcting codes.
NH: I don't recall those details.
GAM: Well, you know, I was fascinated by that machine. I could say to the civilians who came through, "When we put that thing on line, it gave Livermore more memory than the rest of the world put together!"
NH: One of the difficult things about that machine was its many, many moving parts, especially to control the photographic processing lab. It had many valves and sensors. Someone designed the necessary digital logic to control these, and someone else decided that this was a lot of logic to build. Because the duty cycle for any one of these things was relatively low—at most tens of milliseconds—they wrote a program on an IBM 1800 computer to simulate all of this in real time, and it ran just fine. So they didn't have to build any of it! That was a big success. It was one of the first large-scale, computerized, process-control systems that I think anyone had built.
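The substitution Norm describes, software standing in for dedicated control logic, works because a scan loop can re-evaluate every control equation far faster than the tens-of-milliseconds response the plumbing needs. A hedged sketch of such a scan loop, with invented sensor and valve names:

```python
# Sketch of the IBM 1800-style software replacement for control logic: each
# scan, read the sensors, evaluate the boolean equations the hardware would
# have implemented, and drive the actuators. Names here are invented.

def scan(sensors, rules):
    """One pass of the control loop.

    `sensors` maps sensor name -> bool; `rules` maps actuator name to a
    predicate over the sensor dict. Returns the actuator settings.
    """
    return {actuator: rule(sensors) for actuator, rule in rules.items()}

# Hypothetical fragment of the developer plumbing: open the wash valve only
# when film is present and the tank is not already full.
rules = {
    "wash_valve":  lambda s: s["film_present"] and not s["tank_full"],
    "drain_valve": lambda s: s["tank_full"],
}
state = scan({"film_present": True, "tank_full": False}, rules)
# state == {"wash_valve": True, "drain_valve": False}
```

Run often enough, the loop is indistinguishable from hardwired logic at plumbing time scales, which is why none of that logic had to be built.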
GAM: This is an anecdote for you. Livermore contributed one thing as far as I can tell that made the Photostore the big success that it was, and that is, they put a 55-gallon drum of surfactant agent down in the basement, and pumped the stuff up into the wash water so that the film got really washed. The NSA (National Security Agency) had two of these things. They would put their film away, and it was sort of wet. And then it would weld to the box, and they would break the film when they tried to pull it out. That didn't happen to us.
NH: Did they learn?
GAM: They didn't. Actually, the guy who made the Photostore a success for us was Jim Dimmick.
NH: Jim Dimmick, as I recall, was just hired as one of the standard IBM maintenance people, but he was also an inventor, sort of like Ted Ross. He solved problems that you normally think of a machine designer as having solved. He also was known, as I recall—I heard this story years later—as the only person on the west coast who could make Data Cells work. A Data Cell was this IBM technology for storing 600 megabytes, which was too big for a disk in those days. The Data Cell had many magnetic strips.
Jim Dimmick had the knack for figuring out what was wrong with these devices, and people who were too far away for Jim Dimmick to visit would occasionally stop using the Data Cells because they couldn't be maintained. The Data Cell was used at Livermore to index the Photostore.
GAM: Yes. Well, it was bigger than an ordinary disk on a machine. We had this Librascope disk on the PDP-6.
NH: Yes. A pair of these disks would store a billion bits.
GAM: That wasn't enough.
NH: But it was fine for working storage. And it had fixed-head disks in it, so it gave you fast access. It gave you many accesses per unit time, as many as the PDP-6 could handle. But the index to the Photostore was kept on the Data Cell.
GAM: Yes, twice!
NH: Yes. One interesting thing I think that we should mention here—one of the failures of planning on the Octopus. We neglected to compute, from the beginning, the reliability of components.
GAM: Ah, yes.
NH: And the PDP-6 design worked, but the PDP-6 had so much hardware on it, and it was built in such a way that when the hardware was under repair, the system was down. And as a net result, the entire system was down too much. We had the figures, we should have known, but we didn't compute that.
GAM: Well, I think that's exactly the best way to say it.
NH: The other thing which I consider was wrong is that we vastly underestimated how much programming there was to do to get this whole thing up. Back in the beginning, we could stand up and answer any particular objection about how we were going to solve a given problem—we would describe a program that you could write that would solve that problem, and people would see that you could write that program. And, indeed, we did write those programs. We were just off by an order of magnitude on how long it would take to finish them all.
GAM: Yes, and to get them interrelated correctly. That was the brutal thing.
NH: Yes, to get them integrated. The good news is that it finally worked! The bad news is that two generations of computers had come and gone before it began to work. I mean, there were two generations of computers that had gone and been forgotten before these programs really began to work smoothly.
GAM: Well, yes, I've heard it called�like the little girl with the curl in the middle of her forehead: When she was good, she was very, very good, but when she was bad, she was horrid!
NH: Yes. No one had heard of big software, or at least we hadn't. I think some of the SAGE people were beginning to get a clue as to what the problems were.
NH: I understand that they went through some of the same cycles. And maybe they learned sooner than we did, but anyway we learned.
GAM: I think that the biggest importance of SAGE was it put IBM into the core memory business.
GAM: And have you read Emerson Pugh's book on that?
NH: No, I haven't.
GAM: Well, it's a great book. Get it out of the Stanford Library: "Memories That Shaped an Industry." He's got several of them out now, but this is one of his first, and it's quite good. And it makes a lot of things clear that were opaque when they happened. I didn't understand them then, but now I do.
NH: I can sort of get back into the frame of mind of those days, when programs were only a few thousand words long: you'd have an idea, and a few days later, if you weren't disrupted, it would be running. And the idea that some collection of programs might take years to do just hadn't occurred to us.
GAM: Yes. I think that the difficulty is partly with the phenomenon that Chuck talks about. You know, every average programmer has one, big program in him, and then he's burned out. So you do this big thing and then you need another one. But you're burned out, so you get somebody else and he goes through the same error generation.
GAM: The programmers who contributed to the Octopus eventually, as it turned out, are many people that I've never even met.
NH: Oh, I'm sure, yes.
GAM: I think, and I've said so on tapes and stuff like that, that the Photostore owes its "component success" (that is, as a real, usable thing) to Garret Boer. And John Fletcher to some extent.
GAM: They designed it from the MULTICS plan and so forth. And I think they also did some stuff that was a little difficult to understand, like they had a "time-sharing system" on a PDP-6 that didn't even swap memory.
GAM: So it was a mixed bag.
NH: An interesting piece of history: The PDP-6 came from DEC without paging. It had some elementary-style memory protection, but with no paging. We and some engineers at the Lab undertook to build paging and segmentation hardware for the PDP-6 memory. Was Bob Wyman connected? I don't know.
GAM: No, I think it was Bill Mansfield and Dave Pehrson. Do you remember there was a Pag-Seg (paging and segmentation) unit that was being built at our place? Bill Mansfield did the STAR paging hardware, I think.
NH: I thought he hadn't arrived at Livermore by that time.
GAM: I don't know.
NH: But he may have. Anyway, someone at Livermore designed a paging unit.
GAM: Well that was Pehrson, and possibly others.
NH: Oh, Pehrson, yes! That was the guy I think that I remember.
GAM: Pehrson, right, and Bill Mansfield.
NH: Yes. Well, I was still at IBM during part of this time. A paging unit was built and put on the PDP-6. And much of that design was adopted by�at least the basic ideas�were adopted by DEC for the PDP-10.
GAM: Yes. And we, in turn, got them sort of from the GE 635.
NH: Yes. Now we weren't inventing paging, but we did an early version of it there.
GAM: And ATLAS.
NH: ATLAS. Yes, Ferranti ATLAS preceded that.
GAM: Let's talk about when you were a big consultant on the Sigma 7 time-sharing system. We got a lot of ideas from Berkeley's GENIE Project, with your guidance, and Butler Lampson, and so forth?
NH: Okay. Well, let me see now. I don't think that Butler Lampson worked on the Sigma 7, which was Scientific Data Systems' (SDS) attempt at a large, general-purpose computer to supplant the 930-940 systems. The Sigma 7 software architects at SDS had learned little from the Berkeley 940 software architecture, and produced a system that was not suitable for Tymshare's business. Gary Anderson did the GEM system for the Sigma 7. That was much more nearly suitable, but there were also problems with the Sigma 7 hardware compared with the PDP-10. Perhaps Gary's system was related to work done at Livermore while I was not there.
GAM: We wanted a machine for graphics, and Sid forced us to get a Sigma 7.
NH: Yes. The Sigma 7 wasn't a bad machine.  Well, I mean, Livermore got too many kinds of computers. It's not that the PDP-10 was a lot better computer than the Sigma 7, but�
GAM: But we had all the software ready for it.
NH: Yes, the software was�
GAM: That was one of the problems.
NH: Yes, lack of software for the Sigma 7 sort of sunk it. That sunk it for both the Lab and Tymshare, because Tymshare for different reasons had a Sigma 7 thrust on them for a while.
GAM: Oh, I didn't know that.
NH: And it was more a lack of software. Well, the Sigma 7 had other drawbacks, but it had pros and cons vis-a-vis the PDP-10. But the main problem was the lack of software. And we'd learned by that time that you don't get software quickly for a computer.
GAM: Well, I'll tell you a famous rule of Norman Hardy's: It takes seven years to mature a software system.
NH: Well, yes, that rule came a lot later.
GAM: Nonetheless, it applies!
GAM: Well, I remember the Sigma 7 system, and I think it was called GORDO.  And I know you had a lot to do with it. And you were arguing with Butler and Wayne Lichtenberg.
NH: Yes, well they visited off and on.
GAM: And Mel Pirtle?
NH: They and Mel Pirtle were designing the SDS 940, yes. It was what they called the 930 from XDS, which was the same company that built the Sigma 7.  The 930 was a machine much inferior to the PDP-10, but it was cheaper. And Berkeley got one, and sort of did magic on it: they did their own paging stuff. And that machine became the prototype for the 940.
GAM: And Mel Pirtle did a lot of the system design?
NH: Yes. They did a magnificent job of stuffing a lot of function into a small number of gates, and came up with the software and hardware. SDS, or XDS, as it had become, took the Pirtle design and copied it very faithfully.
GAM: That was smart.
NH: Smart. And it delivered the machines, which were designated the 940s, to Tymshare. And that got Tymshare its start. And we took the software, modified it greatly, but still pretty much preserved the architecture from UC Berkeley.
GAM: Well, GORDO had that idea. It had some MULTICS ideas in it, and it had some LTSS (Livermore Time-Shared System) ideas in it.
NH: Yes. Well, I never really learned very much about GORDO. I know that you were doing something with the Sigma 7. And Gary Anderson�where did he come in?
GAM: Well, he was one of the smart guys working on the machine, but not developing the time-sharing system. There was Gary Anderson and Ken Bertran at that point, and Dick Conn was the Group Leader.
NH: Oh, yes.
GAM: And Gary Anderson modified the print head of an ASR 33 Teletype, and wrote the necessary programs, so that it would produce Braille for blind people. He's off designing computer-controlled ski bindings up in Tahoe now.
NH: Yes. I haven't talked to him for a while.
GAM: I'll say.
NH: But he did some nice things at the Lab. So, I don't really recall very much about GORDO. I know you were working on it.
GAM: Well, no, but you were involved with guiding them. You talked to Dick Conn a lot, I remember, telling him this was good and that wasn't good. In other words, you were functioning as a senior architect there.
Well, we can switch off of that and then jump into another thing. Do you remember JELLO?
NH: Oh, yes! One of my favorite programs. It was a two-dimensional elasto-dynamics code. It did only the simplest physics. And we made a movie from it.
GAM: Numerous movies, as a matter of fact.
NH: And the one that I remember: we didn't show the little man kicking the lower left-hand corner, but that's what we should have done if we'd been into a little bit more animation. There was a square, and an impulse was imparted to the lower left-hand corner. And it jiggled very much like JELLO would, and it began to rotate. Years later, when I got my Macintosh, I made a three-dimensional version of that.
GAM: Oh, really?
NH: Yes. I'd always wanted to do a three-dimensional version; my Macintosh had more compute power than the computers we were using back then.
GAM: What did you show, a wire model or a solid?
NH: A wire model. I don't have good solid display software.
GAM: Yes. It's hard to do, and it's not clear it does you any good. It's better to just whiten the lines that are behind, so that you know there's some depth there.
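Editor's note: For readers curious what a code like JELLO computed, here is a minimal sketch of a two-dimensional mass-spring elasto-dynamics simulation in the same spirit. This is a hypothetical reconstruction for illustration, not the original Livermore program; the grid size, stiffness, and time step are arbitrary choices. A square grid of unit masses is joined by springs, an impulse is imparted to the lower left-hand corner, and the square jiggles and drifts, just as the movie showed.

```python
# Hypothetical sketch of a 2-D mass-spring "jello" simulation (not the
# original JELLO code): an N x N grid of unit masses joined by springs
# to their grid and diagonal neighbors, stepped with explicit Euler
# integration. An impulse at the lower-left corner sets it jiggling.
import math

N = 5          # grid is N x N masses
K = 40.0       # spring stiffness (arbitrary)
DT = 0.005     # time step (arbitrary)
STEPS = 400

# positions and velocities, indexed [row][col] -> [x, y]
pos = [[[float(c), float(r)] for c in range(N)] for r in range(N)]
vel = [[[0.0, 0.0] for _ in range(N)] for _ in range(N)]

# springs connect horizontal, vertical, and diagonal neighbors,
# each with its initial separation as the rest length
springs = []
for r in range(N):
    for c in range(N):
        for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < N and 0 <= c2 < N:
                springs.append(((r, c), (r2, c2), math.hypot(dr, dc)))

# the impulse kicked into the lower left-hand corner mass
vel[0][0] = [3.0, 3.0]

for _ in range(STEPS):
    force = [[[0.0, 0.0] for _ in range(N)] for _ in range(N)]
    for (r1, c1), (r2, c2), rest in springs:
        ax, ay = pos[r1][c1]
        bx, by = pos[r2][c2]
        dx, dy = bx - ax, by - ay
        length = math.hypot(dx, dy)
        f = K * (length - rest) / length   # Hooke's law along the spring
        force[r1][c1][0] += f * dx
        force[r1][c1][1] += f * dy
        force[r2][c2][0] -= f * dx
        force[r2][c2][1] -= f * dy
    for r in range(N):
        for c in range(N):
            vel[r][c][0] += DT * force[r][c][0]
            vel[r][c][1] += DT * force[r][c][1]
            pos[r][c][0] += DT * vel[r][c][0]
            pos[r][c][1] += DT * vel[r][c][1]

# internal spring forces cancel in pairs, so total momentum is conserved;
# the free square drifts and rotates, as described in the interview
px = sum(vel[r][c][0] for r in range(N) for c in range(N))
py = sum(vel[r][c][1] for r in range(N) for c in range(N))
```

Because the only forces are internal spring forces, the total momentum of the square stays equal to the initial impulse, which is why the jiggling square also translates and rotates rather than staying put.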
GAM: Well, we have shown that JELLO code all over the world.
GAM: And the passionate version of it is in flaming red.
NH: I'm not sure I've seen that version. I remember the old one.
GAM: Well, John Blunden and I are right now indexing all the films that we have saved. There are about five hundred of them. That JELLO thing is in there, and you will see them eventually.
GAM: Everything is being archived, and I've got a crummy database that tells me what's where. It's just a fantastic amount of work that was done in the '50s and '60s, just incredible.
GAM: And it's hard to get around it now, there's just so much there.
NH: Yes, and so many of the programs are dead because the machines, or the compilers, or the card readers for heaven's sakes, are long since departed.
GAM: They're gone. That's right. Let's quit here, and we'll revisit this later.
 Years ago, the story of the KOMPILER was researched and documented by Professor Donald Knuth of Stanford University. See "Early Development of Programming Languages," by Luis Trabb Pardo and Donald Knuth, in Encyclopedia of Computer Science and Technology, vol. 7, pp. 419-493 (New York: Marcel Dekker, 1977); also reprinted in A History of Computing in the Twentieth Century, N. Metropolis, Jack Howlett, and Gian-Carlo Rota (Eds.), pp. 197-273 (New York: Academic Press, 1980).
 This amounts to a functional description of a computer-controlled, flying-spot digitizer.
 Editor's note: The development of a successful, generalized rezoner is one of the more difficult logical programming tasks ever attempted.
 Chronologically, the Lab first procured a LARC. About a year or so later, the Lab procured a STRETCH.
 The HARVEST was a very advanced, multi-processing system created for the National Security Agency. The machine was, without doubt, the largest non-von Neumann architecture ever built. The HARVEST included the equivalent of several STRETCHes and a very fast stream processing unit. See, e.g., Herwitz, Paul S., and Pomerene, James H., "The Harvest System," in Proceedings of the Western Joint Computer Conference, May 3-5, 1960, San Francisco, CA, pp. 23-32.
 When the STRETCH got to Livermore, as it turns out, Clarence Badger, Gale Marshall, Garret Boer, and I worked on the OS, while Bill Mansfield, Ann Hardy, Barbara Schell, Jeanne Martin, and I worked on the FORTRAN compiler for the STRETCH.
 Editor's note: A search of the archives has failed to produce any evidence that the HARVEST manual was ever classified, and it no longer is available.
 Actually, I don't remember being involved with the Sigma 7 at Livermore.
 In addition to the Lab personnel, the development of the Sigma 7 time-sharing system, GORDO, had the benefit of advice from the following outside consultants: Butler Lampson, Wayne Lichtenberg, and Mel Pirtle, among others.
 At approximately the same time, Xerox Corporation bought SDS and renamed it XDS.