ILLIAC IV


The ILLIAC IV was one of the first attempts to build a massively parallel computer. One of a series of research machines built at the University of Illinois (the ILLIACs), the ILLIAC IV design featured a high degree of parallelism, with up to 256 processors, used to allow the machine to work on large data sets in what would later be known as vector processing. After several delays and redesigns, the computer was delivered to NASA's Ames Research Center at Moffett Field in Mountain View, California in 1971. After thorough testing and four years of NASA use, ILLIAC IV was connected to the ARPANET for distributed use in November 1975, becoming the first network-available supercomputer and beating the Cray-1 to that distinction by nearly a year.
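
To make the architectural idea concrete, here is a minimal Python sketch (not actual ILLIAC IV code) of the single-instruction, multiple-data style of operation described above: a single instruction stream drives many processing elements, each applying the same operation to its own data element. The 64-element arrays mirror the number of processing elements in the machine as built; the operation and names are illustrative assumptions.

# Simulated SIMD add: one instruction, many processing elements working in lockstep.
def simd_add(a, b):
    # Each "processing element" handles exactly one pair of operands.
    assert len(a) == len(b), "one operand pair per processing element"
    return [x + y for x, y in zip(a, b)]

if __name__ == "__main__":
    a = list(range(64))      # 64 elements, one per PE (as in the machine as built)
    b = [100] * 64
    c = simd_add(a, b)
    print(c[:8])             # [100, 101, 102, 103, 104, 105, 106, 107]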

By the early 1960s, computer designs were approaching the point of diminishing returns. At the time, computer design focused on adding as many instructions as possible to the machine's CPU, a concept known as "orthogonality", which made programs smaller and more efficient in their use of memory. It also made the computers themselves fantastically complex, and in an era when CPUs were built from individual transistors, and later from small- or medium-scale integrated circuits, the cost of additional orthogonality was often very high. Adding instructions could even slow the machine down: maximum speed was set by the signal timing in the hardware, which was in turn a function of the overall size of the machine. The state-of-the-art design techniques of the time built logic circuits from individual transistors, so any increase in logic processing meant a larger machine. CPU speeds appeared to be reaching a plateau.

Several solutions to these problems were explored in the 1960s. One, then known as overlap but today known as an instruction pipeline, allows a single CPU to work on small parts of several instructions at a time. Normally the CPU fetches an instruction from memory, "decodes" it, executes it and then writes the results back to memory. While the machine is working on any one stage, say decoding, the other portions of the CPU sit idle. Pipelining allows the CPU to start the fetch and decode stages (for instance) of the "next" instruction while still executing the last one and writing it out. Pipelining was a major feature of Seymour Cray's groundbreaking design, the CDC 7600, which outperformed almost every other machine by roughly ten times when it was introduced.
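
As a rough illustration of that overlap, the following Python sketch (a toy model, not tied to any particular machine) tabulates a four-stage pipeline cycle by cycle: once the pipeline is full, one instruction completes every cycle instead of one every four. The stage names and instruction count are illustrative assumptions.

STAGES = ["fetch", "decode", "execute", "writeback"]

def sequential_cycles(n):
    # Without overlap, each instruction occupies all four stages in turn.
    return n * len(STAGES)

def pipelined_cycles(n):
    # With overlap, the pipeline fills once, then finishes one instruction per cycle.
    return len(STAGES) + (n - 1)

def print_schedule(n):
    # Show which instruction occupies each stage on every cycle.
    for cycle in range(pipelined_cycles(n)):
        row = [f"I{cycle - s}" if 0 <= cycle - s < n else "--"
               for s in range(len(STAGES))]
        print(f"cycle {cycle:2d}:  " + "  ".join(row))

if __name__ == "__main__":
    n = 6
    print_schedule(n)
    print(f"sequential: {sequential_cycles(n)} cycles; pipelined: {pipelined_cycles(n)} cycles")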

