An Interview with Sidney Fernbach

Sidney Fernbach



Introduction by George Michael



Sid was director of the Computation Department from its inception until 1982; an interval of some thirty years. He then stepped down and joined the staff of Gus Dorough, who was then Associate Laboratory Director for Chemistry and Computation. (Sid would steadfastly insist that he didn't step down; he was removed. In due course, I will relate the relevant data as objectively as possible. The reader must decide on his own where the truth lies.)

Some of Sid's story has been pieced together from strange sources, and I've not been able to verify some of the data. But if it's here, it means that, verified or not, the content is faithful to Sid's persona.

A lot of what follows was collected for a small biographical sketch I wrote for a memorial seminar sponsored by the Laboratory and held in Sid's honor shortly after his death.

How Sid Fernbach got to the Lab



Sid was born and raised in Philadelphia, and earned his B.S. degree in Education from Temple University. He entered graduate school at the University of Pennsylvania, but was drafted into the Navy. Sid became a naval officer and attended a Japanese language school in Norman, Oklahoma. The war ended before he finished so, on resuming civilian life, he transferred to Berkeley. He became one of J. Robert Oppenheimer's students, which was just fine until Oppenheimer decided to leave to become head of the Institute for Advanced Study in Princeton, New Jersey. Sid finished his graduate work under Robert Serber (who later left U.C. for Columbia University because of the loyalty oath), then began a year of post-graduate work at Stanford University. The year was 1952, and Edward Teller contacted Sid to invite him to join the Radiation Laboratory, as it was then known. Teller invited Sid to manage the procurement and use of a UNIVAC computer; at the time, many designers believed one UNIVAC was all the Lab would ever need. So, Sid joined in September of 1952. Almost immediately, he went to the UNIVAC factory in Philadelphia as the head of a group of programmers and engineers who were being introduced to the wonders of using and maintaining a very fractious machine. The UNIVAC was finally delivered to the laboratory in April, 1953. It had been kept in Philadelphia to allow CBS to predict the results of the November, 1952 presidential election. Early on, the machine predicted the victory of Dwight Eisenhower; so early, in fact, that many pundits refused to believe the prediction. But the UNIVAC was right, and sooner than any other forecaster.

It took the better part of a month to assemble the machine. During this "interregnum," the engineers under Lou Nofrey trained operators, programmers, and various camp followers so that, when the machine finally passed its acceptance tests, there was a trained operating staff and an overabundance of users. Sid's principal problem at that time was recruiting enough of a programming staff to keep the development of the simulations on track. Perhaps it will come as no surprise that most of the Laboratory's scientific staff did not trust the UNIVAC to compute correctly. In my case, the program we were developing was shadowed by a hand calculation in which every number that the machine produced during one time step was expected to match the results of the hand check to twelve decimal digits! Today, some would insist that such a demand was totally unnecessary. Then, it was a different story; I won't belabor this now, except to note that reliable numerical methods for digital computers were relatively unknown at the time.

What Happened Then



So the UNIVAC was delivered, assembled, and made to support a heavy production schedule. The Laboratory busied itself preparing to participate in the upcoming test series. This included the study of suitable numerical methods for use with a digital computer. It is true that lots of methods already existed for simple hand calculation, but their behavior when used on a digital computer was unknown. In addition to running the computation department, Sid and a colleague, F. Bjorklund, were studying the behavior of the so-called "Liquid Drop Model of the Nucleus."

The UNIVAC, very early on, proved to be too slow and too small to meet the needs of the Nuclear Design Divisions. Thus, a large portion of Sid's time was spent either hiring new programmers or searching for a bigger, faster machine. He quickly became the center of a staff that was very knowledgeable and agile. This group drew people from all over the Lab, and was quite skilled in evaluating the potential of the various machines that were proposed by visiting salesmen. Over the ensuing years the Laboratory obtained a veritable galaxy of computers that were, each in its time, the world's fastest. Here is a list of such computers spanning the time from 1953 to 1978; the golden years, as it turned out:

Table 1

Manufacturer/Model    First Delivery    Number Acquired
UNIVAC                4/53              1
IBM 701               6/54              2
IBM 704               4/55              4
IBM 709               9/58              4
IBM 7090              4/60              4
UNIVAC LARC           6/60              1
IBM 7094              9/60              5
IBM 7030              3/61              1
CDC 1604              2/62              1
CDC 3600              5/62              2
CDC 6600              4/64              4
CDC 7600              3/69              5
CDC STAR 100          7/76              2
CRAY 1                5/78              4


One can note that this list does not contain any of the smaller computers acquired during this interval. In fact, one might see here evidence of a strategic oversight on Sid's part: he failed to appreciate the overwhelming effects of the small machines. He thought, perhaps correctly, that purchasing so many small machines might impact the acquisition of the big computers. The increasing capabilities of these small machines blurred the distinctions between large and small, and fast and slow(er), computers. Sid merely delegated all support of these supercomputer-killer machines to a staff member, whereas his involvement would have legitimized their use. He could have been the leader in their proper utilization.

The list of computers in Table 1 is eloquent, if silent, testimony to Sid's skills. Ironically, as he advanced in experience and wisdom, resistance to certain Computation Department policies grew. Technical people, especially physicists, don't like being constantly promised one thing and given something quite different. Practically every computational project was late or fell short of its stated goals. Whether it was the three-years-late LARC or the software to run a remote printing facility, any slippage or failure gave the sidewalk superintendents a chance to gripe and blame Sid.

It's worth stressing that this was a time when we were confronting really difficult and totally unprecedented problems. Estimates of delivery dates were not much better than guesses. It's true that many of the people doing the work were the best in the world, and that there was sufficient funding to get the job done, and that the projects were blessed with a management that was smart enough to stay out of the way. Notwithstanding, things, especially new things, generally take more time than initially estimated.

(Before going further, I want it understood that the following interpretation of events is largely mine. No official agreement is implied.)

Candidly, there just wasn't anyone on the Director's staff who could have taken over and done things better. So complaining was their only alternative. The years rolled on and Sid kept doing his thing: getting newer computers and growing an ever-larger staff of programmers. The work never got easier, so delays were part of the scene and the users kept up a steady barrage of complaints.

Then two conditions combined to upset the situation. First, the representatives of the user groups became increasingly active: they demanded more and more of Sid's time at meetings to discuss project delays, and they insisted that things be done "their way," whether or not that made sense. One could hear, louder and louder, "Listen, it's our money that's paying for everything. We're going to decide how it's spent." That is probably defensible on the surface, but it also acted as a means for destroying the creativity of the Computation Department. We ended up with a dinner cooked by too many chefs.

The second force that upset the apple cart was the "Saga of the Two Stars." That story has been told elsewhere; suffice it to say here that the STAR, the world's first large vector computer, was late, and that while the users spent years learning how to write programs for it, they didn't do enough to improve the basic physics in their simulations.

The straw that broke this camel's back was that the laboratory, with Sid's help, got two Stars. Frankly and simply, the Stars were less than effective in running the kinds of large programs then in use at the Lab. The orthodoxy at the time was that the mistake of one Star might be forgivable; two were not. It is doubtful that the complete story will ever be told, but it is clear that the decision to procure the Stars caused undesirable reverberations throughout the Lab, and especially in Comp. So, given the ethics of modern managers, it is not surprising that they went hunting for a scapegoat. They found Sid. Worse than the picking on Sid, Comp was besieged by a veritable horde of pseudo computer experts, both from within the Laboratory and from DoE headquarters. Basic technical decisions were being made by lavishly unqualified people.

The Stars were commendably fast, provided one's program precisely fitted the computer's architecture. One awkward flaw in the Star was that scalar operations were very much slower than vector operations and, initially, practically every program contained some unfortunate but necessary scalar procedures. If the two styles were not perfectly matched, the Star's speed dropped from something faster than a CDC 7600 to approximately half the speed of a CDC 6600. Of approximately three hundred design programs, something like twelve could be warped into forms that fit the Star architecture. Since then, things have improved a lot, but it took time. No surprise, then, that the designers were irritated. It was a sad day when they started to meddle in the development of computer systems; the consequence, forcing the use of commercially developed software that was generally inadequate for the tasks, was disastrous. There is still a strong feeling that, if a bit more time had been allowed, and more good programmers had been added, Comp would have continued to produce software tools better suited to supporting the designers than the commercial products that came in.
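To see why a few residual scalar operations were so damaging, the usual Amdahl's-law arithmetic is enough. The sketch below uses illustrative speed ratios, not measured STAR figures; it simply assumes vector code running at twenty times a 6600's speed and scalar code at half a 6600's speed, in the spirit of the numbers above.

```python
# An Amdahl's-law sketch of the STAR's scalar bottleneck. The speed
# ratios here are illustrative assumptions, not measured STAR numbers.

def effective_speedup(vector_fraction, vector_speedup, scalar_speedup):
    """Overall speedup (relative to a baseline machine) of a program whose
    vectorizable fraction runs vector_speedup times faster and whose
    remainder runs at scalar_speedup times baseline (< 1.0 is a slowdown)."""
    scalar_fraction = 1.0 - vector_fraction
    return 1.0 / (vector_fraction / vector_speedup +
                  scalar_fraction / scalar_speedup)

# Assume vector code runs 20x a CDC 6600 and scalar code only 0.5x.
for vf in (0.99, 0.90, 0.70, 0.50):
    print(f"vectorized {vf:4.0%}: {effective_speedup(vf, 20.0, 0.5):5.2f}x a 6600")
```

On these assumptions, even a program that is 90% vectorized runs only about four times a 6600's speed, and at 50% vectorization it is no faster than the 6600 at all, which is roughly the collapse described above.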

In early 1958, the director of the Theoretical Physics Division, Mark Mills, was killed in a helicopter accident at the Pacific Proving Ground on Eniwetok Atoll. Sid was asked to serve as acting director while a search was made for a replacement. Thus, he had two management assignments, Computation and Theoretical Physics. No real search seems to have been made, and Sid held both assignments for almost twenty years. During that time, in addition to increasing the size and reach of Computation, Sid was instrumental in establishing a Scientific Information Exchange Panel among the nine DoE national laboratories. This helped to spread the best algorithms and other computing methods for use on large computers. Later, he added a panel that served the needs of the managers of these computer centers. He also started a Supercomputer Studies Special Interest Group within the Computer Society of the IEEE.

Although I was unable to interview him directly, Sid's widow, Leona, provided me with an interview of Sid conducted by Marilyn Ghausi in February, 1989. It is Sid's story told by Sid; it is very interesting, and it contains elaborations as well as other versions of some of the things only mentioned here. I have corrected typographical errors, corrected a few dates, and completed the names of many of the persons he mentions. As far as I know, this interview has not been published elsewhere, nor is it copyrighted. In many respects, it is vintage Sid: chafing under a management that neither understood nor appreciated what he was doing.

I wish Sid could have reviewed the changes I made. I'm reasonably sure that the dates and names I've inserted are correct, even though some do not agree with Sid's memory. In several instances I found substantive errors in the typescript. I couldn't resist the urge to change these, as I believe Sid would have done, given the opportunity. It would have been wonderful to have his agreement. So, even though some of these words are not Sid's, it is still his interview, as those familiar with him will agree. If I can find Marilyn, I will invite her to review the interview and add whatever she deems necessary to make it as accurate as possible. Further, I invite others to let me know of any mistakes I've introduced.




Interview with Sidney Fernbach, February 8, 1989

Sidney Fernbach
Sketch of Sidney Fernbach, by his brother Frank Fernbach


SF = Sidney Fernbach
MG = Marilyn Ghausi

MG: Dr. Fernbach, I wonder if we could start off with the circumstances which led to your joining Livermore?

SF: Yes. I had just gotten my Ph.D. in physics at the University of California, Berkeley. Actually, after my Ph.D. I went to work at Stanford University and it was there that Edward Teller contacted me. He was the one who convinced me to come to Livermore. And so I joined a group there.

MG: When was that?

SF: 1952.

MG: And what group did you join?

SF: Well, when I went to Livermore, he said, "You're going to run the computer department."

MG: Was there such a department?

SF: No. There was nothing. "You're going to run the computer!" You know, what had happened was that there was a man named Bob Jastrow who had already started working for the laboratory even though it didn't exist before that first day that we walked in. And he had ordered a computer called the UNIVAC. Somebody had to be in charge of it, and Edward thought that I should do it. So, I undertook to do that. I took the responsibility for the UNIVAC 1. We hired people, programmers and operational people to run the machine. We signed a contract with UNIVAC people to maintain the machine. There was an engineering department that was offering service to computation, if you could call it computation then.

MG: What was your title?

SF: I don't know, I never had any title. [Laughter]

MG: Had a theoretical physics department or group been set up at the time you were hired?

SF: Everyone was a theoretical physicist because Edward Teller was and all the people he hired were theoretical physicists, but there was no theoretical physics department. It was just a conglomeration of people.

MG: Were you all in one building?

SF: Yes. In a sense, I was also responsible for physics. And the people in physics and computations reported to me. It wasn't until Mike May became director that he set up a theoretical physics or physics department. I don't know why, but he felt he was a theoretical physicist. We went to school together. He came out of Berkeley too. He said he was hired by Herb York, who was director of the laboratory. Between Herb York and Bob Jastrow and Edward Teller, they essentially were responsible for the entire staff, who were mostly physicists and engineers. I was named acting head of Theoretical Physics in June 1958, to replace Mark Mills, who was killed in a helicopter crash at the AEC's Pacific Proving Ground.

MG: Then did you become head of theoretical physics?

SF: I don't know. I guess I was always acting head. [Laughter]. I was responsible for it until I left. Well, it was before I left; there was a lot of reorganization. They put computation under the associate director for chemistry in the 1970s. That was Gus Dorough. Because of that, I left. Such political games made me uncomfortable. For me, it was an impossible situation. He was quite political. So was his assistant, Walter Nervik. He was associate director for chemistry and when they put chemistry and computations together, Walt Nervik was sort of a deputy associate director for chemistry. And I was the same for computation. They put Bob Lee in my old job, in charge of the computation department. I was certainly uncomfortable with all these changes.

MG: What was his background?

SF: He was either a mathematician or an engineer but, at least, at the start of his administration, I didn't agree with his views of what computation was all about. Well, going back to the earlier days...

MG: Who were some of the early people you hired?

SF: Mostly mathematicians. There was no computer science in those days. So most of the people were mathematicians, or physicists; for example, Leota Barr, Kent Ellsworth, who ran the first compiler group, John Hudson, Oscar Palos, Jack Rose, Gil (I'm sorry, I'm blocking on his last name). Those were all mathematicians I hired, some as early as the summer of 1952. We first opened our doors in September of 1952.

MG: [inaudible]

SF: Well, I just loved the job. And they gave me sort of complete responsibility for everything. We learned quickly that UNIVAC 1 was just not a very powerful machine and, starting in 1953, there were new machines coming to market. I was always looking out for a new machine. I worked with a man from Berkeley, Jim Norton, who was in the engineering department at Berkeley. He helped me in writing contracts and in making contacts with computer people. I don't know how it happened; Berkeley seemed to take a lot of responsibility for Livermore at that time.

MG: Well they were actually,...the organizational arrangement was that Livermore reported through the head of the Berkeley laboratory at that time.

SF: Yes. Probably so. Herb York reported to Lawrence [Ernest Lawrence]. I guess there was a lot of rapport between the two laboratories. We didn't have enough staff to do all of our own work anyway. The Berkeley people did come to Livermore. Yes. As a matter of fact, I think I commuted from Berkeley too. I'm a little confused about that now because I had started to move to Stanford.

In any case, Jim Norton was the one person I relied on most. We worked generally as a team. There was another man, Louis Nofrey, who joined shortly afterwards and who worked for Jim Norton as an engineer. He was responsible for the maintenance of the UNIVAC 1.

MG: So Nofrey was based at Livermore?

SF: Yes. Jim Norton was at Berkeley. Jim Moore was an engineer who worked for Louis Nofrey. Nofrey built up quite a staff. There was Chuck Kenrich, Jim Moore, Dick Karpen, Bob Crew, and some others I no longer can recall. They were a very competent staff who maintained the UNIVAC. Cecilia Larson worked for them. She started out working for Nofrey, I think.

MG: Who was Bill McNaughton?


Figure 1
Left to right: Sid Fernbach, James Norton, Bill McNaughton, at LBNL where the details of the UNIVAC procurement were handled. This photograph is courtesy of Jerry Russell, a member of Norton's staff and one of the first LLNL general computer engineering staff members.


SF: He was sort of an assistant to Norton. I don't know what his background was - an engineer probably. The person who wrote most of our contracts, together with Norton, was Bill Masson. McNaughton had no real power. Jim Norton did. He was a good negotiator. We would go into a company like IBM...

MG: Yes. Tell me about the process of getting new computers.

SF: We decided we wanted a new machine and we looked at the possibilities. There were only two possibilities as successors to the UNIVAC 1. One was built by IBM and called the 701; the other, the 1103, was built by ERA, which, like the UNIVAC operation, had been taken over by Remington Rand. There had been an 1101 too, but we hadn't looked at it. It was between the 1103 and the 701 that we wanted to make a decision; either was a much more powerful machine than the UNIVAC. The 701 won. Furthermore, the UNIVAC 1 was really a machine aimed at the business world. It was a decimal machine. No floating point. It was not a scientific type of machine. So, we were anxious to get this other type of machine. Well, we would go to visit with IBM. There was an excellent salesman in the IBM Oakland office, Bud Coker. He and his people would take us out to lunch and have conversations and dicker about the machines' capabilities and costs. Eventually, we acquired the 701 rather than the ERA machine.

The early machines were not custom designed. IBM built nineteen 701s; it was their standard product. They actually built two machine lines: the 701, the scientific version, and the 702, which was aimed at the business/commercial market and was more like the UNIVAC than like the scientific machines we have today. I don't know what serial numbers we received.

MG: What is the importance of the serial number?

SF: None whatever. Except that people take great pride in saying "We have the first one". We took pride too. But, at that time, it didn't occur to us to take the responsibility. [Laughter]. Well, it's what starts reputations. You get to know the people who are using these machines. So, if there are nineteen 701s, you know practically all the people in the country who run them. Later on, I thought it was important for DOE people to know each other, so I started a group called the Scientific Information Exchange (SCIE). We had regular meetings, people at various DOE laboratories [AEC then ERDA] who had responsibility for computers, who ran them. And we would meet and exchange information. It was a good way of communicating with your colleagues.

At that time, the programs that were written were not so great. We relied heavily on IBM to provide us with software...[inaudible]. When they said they provided software, they [meant] operating system only. That enabled you to use the machine. But if you were trying to solve problems, they had not provided the wherewithal to solve the problem. Our physicists and mathematicians were the people who wrote down the equations and put them into the right form to enter into the computer for solution. We started by assigning a mathematician to every physicist who had a problem. So, it was always a cooperative effort between the mathematician who was running the problem on the computer and the physicist who looked at the results and/or originated the equation for the problem. So a very close relationship existed. I guess I got ambitious. I don't remember how, or why, but I decided we needed another machine. And IBM was producing a new one called the 704. We were very close with IBM at that time. Coker was an excellent salesman, and another man in Oakland, named Bob Fairbanks, who's still there, worked closely with Coker. A third man, who worked out of New York, was Cuthbert Hurd. He was a close friend of ours as well as more of a generalist. We needed people who could understand what we were doing with our computers and help with the selection of the newer ones, and Hurd was very good at that. He was very close to the scientific community, whereas Bud Coker tended not to look past the actual sale.

MG: Going back, when the machine was delivered did the IBM people come to work with the Livermore staff at all?

SF: No. They just installed the machine. We sent our people to classes. IBM taught them how to program the machines and how to maintain them. Initially we insisted on maintaining our own machines because we had a classified facility, so using uncleared maintenance people was rather difficult. We had to learn to live completely on our own. As a matter of fact, for some reason, we felt the software provided to us was not very good. So we set up people like Kent Ellsworth, for example, to run this FORTRAN-like compiler project, and my wife worked on that project along with Bob Hughes and Bob Kuhn.

MG: Wasn't FORTRAN developed by IBM?

SF: Yes.

MG: What did Livermore add to it?

SF: We were trying to develop a new language, one better suited to mathematics, and it was earlier than FORTRAN. It was called Kompiler, but it never really succeeded. It had chosen a very fragile notation for handling superscripts and subscripts. I suppose it was reasonable from the point of view of mathematics, but it was almost unusable in programming. I don't know why we didn't see this soon enough. We thought we were smarter than IBM, and Kent was a very intelligent person. So were the others. We felt that we could do better. We undertook things like that. I never thought about not having the manpower or the money in those days, because we got everything we wanted. I could hire anybody I wanted. Nobody seemed to keep tabs on what I was doing. So, we started all kinds of things. And the physicists always supported us when we said, "Hey, we need a new machine". They were delighted to get more power. And the 704 at the time was twice as fast as the 701. IBM continually upgraded their systems, and they followed the 704 with the 709, which was very much like the 704 but a little more powerful. Then we got smart and decided that these companies couldn't build these machines for us on their own; we needed to do something to stimulate them to go faster and do more. So we got together a solicitation to have a company build us a more powerful machine, from scratch. We would get number one. The award was made to Remington Rand (later, Sperry Rand), which had taken over the UNIVAC development, and there were some good people there.



Mauchly was a great mathematician; Eckert was a fiddler and engineer who built these things, and he was very good. Anyway, we got into a relationship with the people at the Sperry company. We specified this machine and they said, "We'll build it for you." At the time, as I recall, we had only 2.5 million dollars. And we talked to IBM about it and they said, "Well, we can't build it for 2.5 million dollars. We could build it for 3.5 million." But we only had 2.5 million.

Anyway, Sperry built the machine called the LARC, which we named the Livermore Advanced Research Computer. And IBM, not to be outdone, built that 3.5 million dollar machine, called it the STRETCH, and sold it to Los Alamos. So, here we had a competition going between the Sperry people and the IBM people. There was also a competition between Livermore and Los Alamos. We wanted to outdo them all the time and they wanted to outdo us. It was a friendly competition. We knew their people very well. Of course, we were jealous. They had more money than us at the time. [Laughter]. In any case, the LARC was, well, not the greatest success at the time, but the STRETCH wasn't either. I think IBM built about seven STRETCHes. We acquired one too, based on our test runs showing it was about twice as fast as the LARC. So we were in a competitive stage, and we and Los Alamos would try to outdo each other.

MG: In terms of faster computers?

SF: Yes. Speed is of the essence in computing. There are problems that can't be solved today on the fastest machines we have. There will always be a need for faster computers. It's hard to convince people of that. Look at the weather predictions: they use the fastest computers and it's just not good enough. Solving their equations takes minutes to many hours per problem. So, in a sense, we will never be able to completely satisfy our needs for computing.

MG: How did the Sperry Rand people work with the laboratory in developing the LARC?

SF: Well, the company had complete responsibility for building the machine. But we would send people there to see what they were doing and to offer any advice. For example, are there any instructions to be added to the machine? Are there any changes you would make based on what we have learned from our experiences at Livermore? But really, we had very little influence on them because, basically, it was a hardware project. And our people were not that adept with hardware. I'm talking about the computations people not the engineering people. Our people were more adept at the mathematics and computational portions of it; the programming. We would say, "Yes, we need this instruction or program to help us". And if they could put it in, they did. That sort of thing, a little give and take but mostly give on their part. Of course, when you are buying a machine of which there are only one or two to be built, they aren't going to provide much software. They did have a software group. I can't remember the name of the guy who ran that group. So they would help with the software too.

MG: But wasn't software developed at Livermore?

SF: Well, the applications were developed at Livermore. They were responsible for giving us a first version of the operating system, which we then could improve upon.

MG: Would that make the machine go faster?

SF: No. Possibly slower. Whenever you start fiddling with the operating system, unless you could find something that really needs attention, chances are that you are not going to speed it up. The operating system is not that responsible for the speed of the machine.

MG: Can anything speed it up, the programs that are developed?

SF: No. It's the hardware that controls the actual speed. There's a clock in the machine that times every cycle, issuing instructions. And that's basically the speed. If you're clever, you can organize your mathematical program in such a way that you skip steps, or do something in a different way from what you originally intended but with the same end result, and the program runs faster. So, if you know the hardware really well, you can use tricks. That's the only way to speed it up. But, basically, once the hardware clock is built into a machine, you are stuck with it. Kent Ellsworth's project was dabbling in software but, as I said, nothing much came out of it.
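A classic example of the reorganization Sid describes, reaching the same end result by a cheaper route, is Horner's rule for evaluating polynomials. This is my illustration, not an example Sid cites:

```python
# Horner's rule: the same polynomial value with far fewer operations.
# An illustration of reorganizing a computation for speed, not an
# example taken from the interview itself.

def poly_naive(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... the obvious way:
    one exponentiation and one multiply per term."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def poly_horner(coeffs, x):
    """Same polynomial via Horner's rule: one multiply and one add per term."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

coeffs = [5.0, -3.0, 2.0, 1.0]          # 5 - 3x + 2x^2 + x^3
assert poly_naive(coeffs, 2.0) == poly_horner(coeffs, 2.0)  # both give 15.0
```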

When the STRETCH came, that finished an era in computing. It was at that time that we became acquainted with Control Data Corporation [CDC]. Control Data started, I believe, in 1957. There was a man named Seymour Cray who worked for them designing machines. He designed a machine called the CDC 1604, which was supposed to be like the IBM 709. I forgot to mention that the 709 was followed by the 7090, by IBM. And the 1604 was in the same class as the 7090. It was now a transistorized machine.

MG: What were the previous machines?

SF: The earlier machines used vacuum tubes: the UNIVAC 1 and the IBM 701, 704, and 709. They later transistorized the 709; it became the 7090. So now you started to get transistorized machines. I must say that one thing that keeps improving machine performance on certain jobs is the amount of memory space you have. When you are doing a problem, you keep referring to information which is stored locally in a memory. The faster you can bring it in and out, the faster you can get the job done. The early machines' memories were tiny; the UNIVAC 1 had a thousand words. That's barely enough to support anything. So you back it up with magnetic tapes, and you have to read magnetic tapes sequentially. It takes a lot of time. Later they developed disk drives to provide two-dimensional access to the information. Disks run at much faster speeds and get your information much faster. But, even so, disks were slowed down by rotational latency; you had to wait until the data was under the read heads. And so both disks and tapes had a hard time competing against the main memory, which was built right into the machine. Increasing memory size became very important and has been ever since.
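The gap Sid is describing can be made concrete with a little arithmetic. The figures below are generic assumptions for machines of that era, not measurements of any particular Livermore device:

```python
# Back-of-the-envelope latency comparison. The rotation speed and memory
# cycle time are generic period assumptions, not Livermore measurements.

rpm = 3600                                    # assumed disk rotation speed
avg_rotational_latency_s = 0.5 * 60.0 / rpm   # wait half a revolution, on average

core_cycle_s = 10e-6                          # assumed 10-microsecond core cycle

print(f"average rotational latency: {avg_rotational_latency_s * 1e3:.2f} ms")
print(f"core memory cycle:          {core_cycle_s * 1e6:.0f} us")
print(f"one disk wait ~ {avg_rotational_latency_s / core_cycle_s:,.0f} memory cycles")
```

On those assumptions, a single rotational wait costs the better part of a thousand memory cycles, which is why main memory size mattered so much.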

With the 7090, you had not only a transistorized machine but a much larger memory, something like 32,000 words, which was considered large at the time. And the 1604 was the first machine that Control Data put out. It was about half the performance of the 7090, but it was less than half the price. It was a forerunner to a series of machines that Seymour Cray was designing. After the 1604, Control Data designed another machine called the 3600, which was really more like the 7090. I should say that, at this time, IBM replaced the 7090 with the 7094. So it kept going up every few years. Well, they really didn't experience much competition until Seymour Cray designed a machine called the 6600. That really raised everything up. It was the most powerful machine of its time. A very interesting machine.

The 6600 is the first supercomputer ever built. We finally acquired four of them. We had a 1604 and two 3600s. And at that time we also decided to start doing our own software. We decided that these companies really didn't know how to do software, that we knew much more. When you are young, you will undertake anything! And so we designed a system initially called GOB, which became LTSS, the Livermore Time-Sharing System. The idea was to make it easy for people to get on and off the machine. It was a resource-sharing scheme; that means that several people could be in the machine simultaneously. The time-sharing software made sure the users did not kill each other's programs, and when one person's program came to a point where it had to wait for any reason, another person's program was started. So, the idea was to keep the machine fully busy: what you want to do is to fill the gaps. If one person's program had to wait, say for more data or instructions, somebody else's would be ready to run and use computer time that would otherwise be wasted. And these people built the system to take advantage of that.
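The gap-filling idea is easy to sketch. The toy round-robin below is a minimal illustration of the concept, with jobs that simply yield their time slice; it is in no way a reconstruction of LTSS itself:

```python
# A toy round-robin scheduler illustrating the gap-filling idea behind
# time sharing. A minimal sketch of the concept, not a model of LTSS.
from collections import deque

def run_time_shared(jobs, quantum=1):
    """jobs: list of (name, work_units) pairs. The CPU runs one quantum of
    whichever job is at the head of the ready queue; an unfinished job goes
    to the back, so the machine never idles while any job is runnable."""
    ready = deque(jobs)
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        done = min(quantum, remaining)
        clock += done                        # the CPU is busy for this slice
        remaining -= done
        print(f"t={clock:2d}: ran {name} for {done} unit(s)")
        if remaining > 0:
            ready.append((name, remaining))  # requeue; someone else runs next

run_time_shared([("big-design-code", 4), ("tiny-edit-job", 1), ("physics-sim", 3)])
```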

MG: Whose idea was it? Was it a group?

SF: It was a group, but the main instigator was Norman Hardy. The original project to write a time sharing system was given to a group under Bob Abbott. He had about six persons to do the work, but I can't remember any names now. John Ranelletti would know better than I.

MG: OK. Was it a unique idea?

SF: Not really; the idea of time sharing was being studied at several places around the country. MIT was one place, and if our approach was considered unique, it was because we were running an arbitrary mixture of very large and very small programs, some running for long times and some for very short times. The system programs had to manage the sharing of this limited computer resource, and this led to some rather difficult programs. As far as we could tell, nobody in the country was being very successful at developing such general software. It's just that I had never been happy with the software development that was going on at the time, and thought it would be worthwhile trying it at the Lab. I still think it was a good idea. There was a part of management that didn't believe we should be developing such software. We were supposed to stay close to our original charter, which was nuclear design. What we did was tolerated mainly because there was no place where we could obtain such software in a timely manner. So, we had a strong need to supply these things to support the design work. It still exists.

MG: How does OCTOPUS work?

SF: The idea of OCTOPUS was to share our resources among all the users, so that when someone needed computer service, it could be made available to that person on a moment-by-moment basis. The strategy was to tie all the computers together and to provide uniform interfaces for all the services you might want. Input, printing, and data storage, as well as running user programs, were all within the intention of the OCTOPUS scheme. If a user has a terminal at which he can type in his information, he can use any of those machines, so long as the wires connect him to them. So, OCTOPUS is a system to interconnect machines and people. LTSS made it easier to use the OCTOPUS; OCTOPUS made it easier for LTSS to exist.

MG: OCTOPUS was first?

SF: Yes. I would say so. Again, OCTOPUS was a scheme for tying all our big machines together, thereby creating a large computer resource, while LTSS was the time-sharing software that managed the resource. The important thing is that LTSS allowed users to access the machines to obtain those free slots, and OCTOPUS made it easier to communicate the information to each machine. So each one could be provided with a workload and no time would be wasted. At that time, people throughout the country were experimenting with what were called time-sharing systems. The basis of it was to use up all of the operational cycles by providing information to be worked on, one job after another, not caring whose problems were being solved.

MG: So was LTSS on the leading edge of experimentation in time-sharing systems?

SF: Yes.

MG: But not the first?

SF: No. It was not the first, but it was among the first. It was probably the biggest, because it ran on 6600s. That's a lot of power. I don't think there was anybody in the country except me with four 6600s. Another interesting thing about it was that it was written in FORTRAN, and we could transfer LTSS from one machine to another very easily. We originally wrote it on a 1604 (that's why we obtained a 1604 from CDC, to produce LTSS); then we moved it to the 3600; then to the 6600 and to the successor of the 6600, the 7600. Again, we had four 7600s. It was very powerful. The 7600 was a Seymour Cray brainchild. It was a great machine. He knew how to build machines, in those days, better than anyone else. IBM fell flat on its face trying to keep up.

IBM was also trying to compete with Control Data at the time and produced competitive products. But we were so tied in with Control Data that we didn't see any way of changing. We would have had to provide our LTSS to run on these other machines as well. It was a lot easier to continue with Cray because his machines were rather similar to each other.

Seymour Cray had a sidekick by the name of Jim Thornton who wanted to go off on his own. He was tired of being in Seymour's shadow, and so he wanted to design a new machine. Now, there was a guy at IBM named Iverson who had invented a new programming language called APL (A Programming Language). Thornton studied that and thought that he could build a machine which would use APL as a basis. At that time, we were actively soliciting proposals for a much faster machine. He made a proposal to us that blended a new architecture with APL. So we scrounged up 5 million dollars and had them build one for us. That was the STAR 100. And we put the same operating system on the STAR. It was the first of a new kind of design using some new integrated circuits. For several reasons, it was deemed a failure as a project but, on the other hand, it was the first vector machine that ever existed.

MG: Why was it a failure?

SF: Well, the inventor made a serious mistake, and initially we underestimated how serious it was. It was our first vector machine. Sometimes it is possible, when solving mathematical problems, to make a statement for an operation and have that operation automatically apply to a whole series of operands. Say you are trying to multiply a whole sequence of numbers. On a normal machine you would issue an instruction to multiply A times B, then an instruction to multiply C times D, and yet another instruction to multiply E times F. Each of those is called a scalar instruction, and it applies to just one pair of operands. On a machine like the STAR 100, it was possible to issue one special (vector) instruction so that A times B, C times D, and E times F are all multiplied automatically. That's called a vector instruction, and it speeds up the operations immensely. And this machine did that, and did it beautifully. Except they didn't pay enough attention to making the scalar operations go fast, so there was always a big difference between the speeds of vector and scalar operations. In other words, the more fully you could vectorize your program, the faster it would run. It turned out, however, that it was very, very hard to completely eliminate the scalar operations, and these would really slow the operation of the machine. And so we didn't get the speed-ups that we expected. So people complained. I guess these complaints rolled off our backs; we didn't pay much attention to them. We thought the STAR was an interesting machine. As a matter of fact, a favorable deal arose, so we bought two of them.
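In modern terms, the scalar/vector contrast Sid describes looks like the sketch below. This is a NumPy illustration of the idea, not a rendering of the STAR's actual instruction set:

```python
# Scalar versus vector operation, in modern NumPy terms. This illustrates
# the contrast described above; it does not reproduce STAR instructions.
import numpy as np

a = np.array([1.0, 3.0, 5.0])
b = np.array([2.0, 4.0, 6.0])

# Scalar style: one multiply issued per pair of operands.
scalar_products = []
for i in range(len(a)):
    scalar_products.append(a[i] * b[i])

# Vector style: one "instruction" applies to every pair of operands at once.
vector_products = a * b

assert scalar_products == list(vector_products)   # same results either way
```

The vector form wins only when most of the work can be expressed this way; the residual scalar loop is exactly what crippled the STAR on real programs.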

MG: These were basically experimental machines?

SF: Yes. If it hadn't been for the STAR, I personally feel that Seymour Cray would never have gone into business for himself and would never have built the Cray machine, his product. He learned an awful lot from that. Anyway, I think the time was right for the advent of the vector machine. Well, Seymour Cray, at the same time, was trying to build a new machine called the 8600. It was a four-processor version, the first of its kind: four machines working together. And he just couldn't get enough money out of the Control Data people to finish it, so he quit, in 1972, and formed his own company. In 1976, he came up with his first machine, which he called the Cray 1. It was similar to the STAR 100 except that it executed scalar instructions much faster, so it was much better balanced.

What happened also was that when the STAR 100 was being built, the competition for it came from Texas Instruments. TI had a designer by the name of Harvey Cragon who, with some colleagues there, designed a machine very similar to the STAR, called the Advanced Scientific Computer (ASC). They built about seven of them. But even though their machine was competitive, our evaluation was that there was not enough advantage to warrant changing what we were doing.

Another competitor, a friend of mine, Dan Slotnick, built a machine under government contract called the SOLOMON. It was a strange machine called a Single-Instruction, Multiple-Data computer. It had a thousand processors. Say you issue a single instruction, say A times B. Then every processor will multiply A times B out of its own memory, or it will do nothing. So, it could be a thousand times faster than a single processor would be. As I said, it was called a SIMD machine, with Single-Instruction, Multiple-Data paths. It was the first SIMD machine and I liked it. So, I got 5 million dollars together to try to get the machine. The AEC and ERDA by now were getting smart and started understanding machines better. And they declared, "You can't have a machine like that too, you just got the STAR". And so we worked up a deal. I went to Washington, D.C. to talk to the National Science Foundation and to DARPA, the Advanced Research Projects Agency in the Department of Defense. Ivan Sutherland was then head of a special office for information processing within DARPA. Sutherland, Dan Slotnick, and I conceived the idea that they would support the machine jointly out of Washington. Well, they finally built it, but they decided to put it at the University of Illinois. It was called the ILLIAC IV.
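The SIMD idea, one broadcast instruction that every enabled processing element applies to its own local data, can be sketched as below. This is purely illustrative and assumes nothing about the actual SOLOMON design:

```python
# Sketch of the SIMD idea: one broadcast instruction, each processing
# element applying it to its own local memory, or doing nothing when
# masked off. Purely illustrative; not the SOLOMON design.

def simd_multiply(local_a, local_b, enabled):
    """Each index is a processing element with its own pair of operands.
    Enabled elements all execute the same multiply; the rest do nothing."""
    return [a * b if on else None
            for a, b, on in zip(local_a, local_b, enabled)]

print(simd_multiply([1, 2, 3, 4], [10, 20, 30, 40], [True, True, False, True]))
# -> [10, 40, None, 160]
```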

MG: Why at Illinois?

SF: Well, they were going to put it at Illinois because Dan Slotnick had become a professor at the University of Illinois. But the Vietnam war caused some universities to be up in arms about defense projects, so they put it at the Ames Research Center here at Moffett Field. It was the only ILLIAC IV ever built. So, we dabbled in that thing there. Harvey Cragon would try to sell us some ASCs, but we already had the STARs. In any event, Seymour walked off and started his own computer company and we, like others, followed him.

MG: What did you intend to do about software?

SF: LTSS was our standard. We did FORTRAN. We did everything ourselves.

MG: Could the software that was developed by the laboratory be used by the manufacturer?

SF: Yes.

MG: And the rights to it, was there any question about that?

SF: No. The manufacturers didn't want any part of it. Today, the software developed at Livermore is being used by several DOE laboratories. In addition, the University of Illinois and UC San Diego are using it. LTSS is now called CTSS, the Cray Time-Sharing System. It is the same thing, except it was modified for the Cray. The laboratory, in the meantime, was developing a new system called NLTSS, the Network Livermore Time-Sharing System, which they've been pushing ever since. I guess it's been operational for a year or two now. Again, Ranelletti would know more about that.

MG: At one point you were involved in a big antitrust case with IBM, how did this happen, how did Livermore get involved?

SF: All that I remember is that I was involved. The government attorneys asked me if I would help them and I agreed. And so I did work with them to try to establish a case against IBM. At that time, I thought IBM was unfair and was gobbling up all the computing and database contracts. I was of a different mind insofar as AT&T was concerned. They were being attacked at the same time. AT&T was finally pulled apart, which I think was a disaster for the U.S. But I don't think it would have been bad if IBM had been broken up, because it's too big. It still dominates our computer industry. If you could get them to do things right it would be great, but they do things their own way.

MG: What happened to the case?

SF: It was finally dismissed. [In 1982?] Erich Bloch was the IBM person on the other side. Erich is director of the National Science Foundation today. We became close friends during the case. I didn't pay any attention to it after that. It was dropped. It was decided that there was no reason for breaking up IBM.

MG: Going back for a moment, what do you recall as the reasons for making computations a department instead of a division?

SF: It's just a whim of somebody in the Director's office; making a department out of computations was Mike May's decision. I didn't care. He confronted me with it and that was it.

MG: So you were at the laboratory until 1979?

SF: Ten years ago. My last effort was to try to get a Cray computer in there. By the time Seymour Cray built the Cray 1, in 1976, he was ready to sell it. Chuck Breckenridge was in charge of CDC sales at Livermore for a while. He's back in Maryland now. Chuck, Seymour, and I got together and decided what we could do about getting a Cray 1. The government, in all its wisdom, said, "Fly before you buy. You can't buy a machine unless it's been tried by somebody." The Cray 1 was just finished and no one had ever used it, so we conceived the idea of Cray lending us the machine for six months; we would try it out and, if it was a successful machine, we'd buy it. So we made the proposal to the government. And the government said, "What the hell are you trying to do? You just got two STAR 100s. You can't have that machine too." So they let Los Alamos have it. They gave it to Los Alamos. I was very saddened. But that was the first Cray 1 delivered. Since that time, Cray has delivered 270 machines. The laboratory has about ten of them. Los Alamos has about eight.

One of the more interesting things I really feel I accomplished involved the DOE's Office of Energy Research, run by a man named Al Trivelpiece. He is now director of the Oak Ridge National Laboratory. At that time, he was a presidential appointee, head of the Office of Energy Research. He and some of his people felt that they ought to get a computer to support magnetic fusion energy research. The magnetic fusion program was getting to be pretty big. Livermore had part of it; Los Alamos and lots of other DOE laboratories participated. Princeton University had a big laboratory to do research on fusion energy and try to develop it as a major new source of energy. Al felt that there ought to be a computer somewhere in the facility. So he let the laboratories propose schemes for obtaining, installing, and running one, and we made the proposal which won. And so we set up the Magnetic Fusion Energy Computer Center at Livermore. I was very proud of that. We put John Killeen in charge as director and Hans Bruijnes as his deputy. It has been a model for the entire world on how to run a computer center. I'm very proud of that.

MG: There are two computer centers at Livermore?

SF: Yes. The Livermore Computer Center was the original one; it grew out of the 1952 UNIVAC 1. So that grew up and then, at the request of the DOE, we set up the second center for magnetic fusion energy. I think we did an excellent job, and I think I was highly instrumental in getting it for Livermore. So those are the highlights. During the time I was running it, it was considered one of the biggest and most important facilities in the world.

MG: In terms of state-of-the-art computer?

SF: Yes, and also in that we provided a powerful, mature operating environment and robust applications. In all respects, Livermore was considered very highly as a place to go if you wanted to find out anything about computing, or to get started in computing. It still has a reputation similar to that, but there's a lot of competition now; more competition than there was before. I think Los Alamos started out as the number one facility, and Livermore overtook it and became one of the most powerful computer facilities in a lot of respects. People would come to Livermore from everywhere to study our methods of operation, to solicit papers, and to ask for help in organizing conferences. So Livermore had a very fine reputation, especially with computer manufacturers.

Another thing that we did has been forgotten: at one time we decided that we needed to get our output faster. Printing a thousand lines a minute was just not fast enough. So we had a printer built for us that ran at thirty thousand lines a minute. I was responsible for getting that. Another time, we said we needed more storage capability. So how do we store things? We put them on disks. But the disks were not big enough to hold all the information we had. We needed a storage capability for trillions of bits. Out of a competitive bid to provide a mass storage facility, we selected IBM. They built a mass storage unit for us. It held a trillion bits on line. That was enormous in 1967. I guess it's out of commission now. At one time, it was the biggest storage device in the world.

MG: Did this have a name?

SF: The Mass Storage Device is what we called it; IBM referred to it as the Photodigital Store. It had little boxes in which we stored film with information on it. Jack Kuehler, from IBM, built this for us in San Jose. Now he's the third top man in IBM's organization. So, he has risen to the very top. The laboratory used that device for a long time before they gave it up. It was after I left; I don't know the whole story on that. So, there were things besides just the computers. We pushed in all areas: software, peripheral equipment, computers. We didn't try to build them ourselves because we didn't know how to do that. On the other hand, Lowell Wood tried to build an advanced supercomputer called the S-1. He got the money from the Navy to build the computer.

Lowell was Edward Teller's fair-haired boy. So they started building a machine under his direction. But the people who were designing it finally left and the project was dropped. The Computation Department had no part of it. It's strange, things go on in the Laboratory that you're not aware of. Even now, people are doing things they shouldn't be doing because they're in different departments. I feel that the computation department should be pulled together to do things in a coordinated way. But it's not being done. I have ideas for research.

[Discussion of other people to interview and completing this interview.]

MG: Did you ever teach in DAS?

SF: No, never. I may be in a faculty photograph because I was involved in DAS [the Department of Applied Science] with Wilson Talley and others.

MG: What would you like to talk about at some future time in more detail?

SF: Oh, just my ideas of where it's going. I'm very disappointed about computing. I just feel that the laboratory could do a lot more. It has capable people and it's not doing its fair share to keep up with the computing world. So much needs to be done; more needs to be done today than 20 years ago. Software is in terrible shape. Hardware is being overtaken by the Japanese; Japan is doing a much better job than we are. We're doing very little about peripheral equipment today, like storage devices and printers. We're depending on PCs [personal computers] to do the work for us. People have gotten the idea of building fast processors; all you have to do is build a hundred of them, a thousand of them, put them together, and try to solve a single problem. But nobody's ever figured out how to do that; it's very difficult. They sell these machines and nobody knows how to use them.

MG: What is the thrust of computing today?

SF: Speed. Power. More speed. They're now putting computers on a chip. And people think, "Aha, here we have a single chip that's a computer; why don't we put a million of them together and do a million times the work?" Not so! How do you organize the work to do a million times as much? There are some problems that are easy to do that way, but very few. We have just never learned how to do it. We have not yet developed the compilers and support structures needed to permit such parallel computation. It is turning out to be a very tough problem.

And there's not enough research going on to indicate how to do it. I feel that the Livermore Laboratory has enough people who really understand the business to do that kind of research, parallel processing research. As a matter of fact, I am chairman of a committee for the IEEE, the Supercomputer subcommittee, and we have written a number of papers which advocate doing something in hardware and software. And we made a proposal that a special laboratory be set up by the NSF to do research in advanced computer development, mainly parallel processing. And we made this proposal to Erich Bloch, my old friend from the IBM case. And he listened attentively and he said, "Well, why don't we get a panel together?" So they brought together a workshop consisting mostly of people who work in universities, professors. And we made the proposal to them. And they said, "Why should we change? We, as individuals, get money to do our own individual research. Why should we get together to do organized research? It doesn't make sense to each of us." So the proposal died. Each one goes off and does his own thing. Nobody pays attention. Nobody continues the work. My feeling is that the Livermore Laboratory could do that sort of thing and do it well, because they have the applications and the problems to solve and a good staff to use the machines to solve the problems. Chuck Leith would know more about it than the other people there. But Chuck doesn't seek responsibility. You need to have somebody who will run with the ball. There's nobody left to run with the ball.

MG: What was the hardest part of your job?

SF: Getting money was hard. In the initial years, the government would come to me and ask how to do things, because the government was unaware of computing for a long time. And then the government got smart and started to do things on its own, like COCOM. Years back, the government, in all its wisdom, especially the DOD, decided that machines should not be sold to third-world countries, especially powerful machines. So they put together a coordinating committee, COCOM, which would meet in Washington. It was run mostly by the State Department, the Commerce Department, and the Defense Department. Sometimes they invited outsiders like me to come in and advise them: "When is a computer good enough that we should protect it? When is it such that we can sell it?" My sights were much higher than theirs. You could never convince them to sell a machine. Today, these foreign countries all have them anyway. Richard Perle thought that nobody should get anything.

Government started clamping down, putting in people who would look over your shoulder. In DOD, Bob Greaves was given the responsibility of saying "no". We managed to get what we wanted, but Greaves was set up there to try to stop us. They were always trying to stop you from spending any kind of money, even if it was needed and you could make a great case. Their position was "no, not available". Just as today, our budget is so high that there is no way of getting money for decent things; you have to spend it on the military.

MG: Was the time of the AEC the best time?

SF: Yes. The AEC was great: a great organization, and we never had any problems with them. They believed what we said and they had a charter to get things done. Then we started with ERDA, and that was the beginning of the downslide. DOE is now just...impossible. Too big and too many activities. And they feel that computing is just a trivial part of the job, that anybody can make the right decisions, whereas I think that computing is one of the most essential things in the world for solving those problems, and choosing correctly is a tricky matter. Computers can help solve almost any of the problems that DOE is facing up to. But the DOE doesn't know how to use this tool. The laboratories are better prepared to do that, but they don't really try to convince the DOE, as long as they can continue getting things for themselves.

MG: Well, thank you again and I will be in touch with you.

SF: Well, thank you.


The front of a gift given to Sid: an emergency alarm box containing an abacus, to be used in the event of computer failure

The rear of the emergency computation box, showing the labeling



[1] Prior to 1950, two companies had been formed: Engineering Research Associates (ERA) in St. Paul (1946) and the Eckert-Mauchly Computer Corporation (EMCC) in Philadelphia (~1947). Based on their ENIAC war work, EMCC was developing a computer called the UNIVAC.

In 1950 Remington Rand bought EMCC and its UNIVAC. In 1953 Remington Rand bought ERA and its 1100 series machines. In 1955 Remington Rand merged with Sperry to form the Sperry Rand Corporation.

To finish this sad tale of commercial incest, in 1986 the Sperry Corporation was taken over by the Burroughs Corporation.