Evolution of Supercomputers

Bigger, faster, stronger and better – man seems to thrive on superlatives, especially in the realm of technology. With mobiles and computers, the trend is towards speedier and smaller. Then there are supercomputers, the brains or “Einsteins” of the computing world: faster, more powerful and far larger than their everyday counterparts, with a wide range of uses in complex, data-intensive applications. Where did it all begin? To answer that question, read on for a brief history of supercomputers.

The Beginning of the Supercomputer Age – The 1960s

Livermore Advanced Research Computer
In 1960, the UNIVAC LARC (Livermore Advanced Research Computer) was unveiled. It cannot be considered the first supercomputer, since its configuration was not as powerful as expected, but it is regarded as the first attempt at building such a machine. It was built by the UNIVAC division of Sperry Rand (formerly Remington Rand) and, at the time, it was the fastest computer around. Following is a list of features of the UNIVAC LARC:

1. Two Central Processing Units (CPUs) and one I/O (input/output) processor.
2. A core memory of 8 banks, storing 20,000 words.
3. A memory access time of 8 microseconds and a cycle time of 4 microseconds.

1961 saw the creation of the IBM 7030, or Stretch. In the race to build and sell the first supercomputer, IBM had designs and plans but lost the first contract to the LARC. Fearing that the LARC would emerge as the ultimate winner, IBM promised a very powerful machine and set high expectations that it ultimately could not live up to. The 7030 was compared to an earlier IBM model, the IBM 7090, a mainframe released in 1959. Its computational speed was projected to be 100 times that of the 7090, but once built it was only about 30 times faster.

Its selling price was greatly reduced, few units were sold, and the machine was a major embarrassment for IBM. Yet it contributed greatly to key computer concepts such as multiprogramming, memory interleaving and protection, the 8-bit byte and instruction pipelining. IBM implemented these concepts in later models, spawning successful machines in its business and scientific lines. Such concepts live on in modern microprocessors, such as the Intel Pentium and the Motorola/IBM PowerPC.

CDC 6600
What marks the beginning of a species' or an object's evolution? One success that surpasses the others and forms the prototype from which future generations are built. You could say the evolution of supercomputers began with the CDC 6600, designed in 1964 by Seymour Cray, the man regarded as the father of the supercomputer, for the Control Data Corporation. A few of the features of the CDC 6600 are listed below:

1. One CPU for arithmetic and logical operations, with simpler peripheral (I/O) processors handling all other tasks.
2. Introduced the Reduced Instruction Set Computer (RISC) concept: the main CPU had a small instruction set, the processors could work in parallel, and the clock speed was very fast for its time (10 MHz).
3. Introduced logical address translation.

The 6600 had a performance figure of 1 MFLOPS (10^6 floating-point operations per second), making it about 3 times faster than the Stretch, and it reigned supreme as the world's fastest computer until 1969.
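For scale as you read the timeline below, each prefix step is a factor of a thousand (assuming the usual decimal SI prefixes): 1 GFLOPS = 10^3 MFLOPS = 10^9 floating-point operations per second, 1 TFLOPS = 10^12 and 1 PFLOPS = 10^15.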

Timeline of Supercomputer Evolution

1969 The CDC 7600 was released. With a clock speed of 36.4 MHz and a pipelined scalar architecture, it surpassed the 6600 roughly tenfold, delivering a performance figure of 10 MFLOPS.

1972 Seymour Cray left CDC to form his own computing firm, Cray Research.

1974 CDC released the STAR-100, a supercomputer with a vector processor and a peak performance of 100 MFLOPS.

1976 The Cray-1 was unveiled: a vector-processor machine with a clock speed of 80 MHz and a performance figure of 160 MFLOPS. It was a 64-bit system with its own operating system, assembler and FORTRAN compiler.

1982 The Cray X-MP was unveiled. Designed by Steve Chen, it used a shared-memory parallel vector architecture. Its clock speed was 105 MHz (a 9.5-nanosecond cycle time), and it was Cray's first multiprocessor supercomputer.

1985 The Cray-2 was born. It moved beyond MFLOPS into GFLOPS territory (1 GFLOPS = 1,000 MFLOPS), with a performance figure of 1.9 GFLOPS. It had 4 to 8 processors in a completely new design and structure, with pipelining, at the cost of a high memory latency.

1990 The Fujitsu Numerical Wind Tunnel was created. It used a vector-parallel architecture and sustained a performance of 100 GFLOPS, with a clock cycle time of 9.5 nanoseconds. It had 166 vector processors, each capable of 1.7 GFLOPS.

1996 The Hitachi SR2201 used a distributed-memory parallel architecture to attain a performance of 600 GFLOPS from 2,048 processors.

1997 Intel and Sandia National Laboratories jointly created ASCI Red. This mesh-based machine was designed for massively parallel processing and had 9,298 Pentium Pro processors. Its performance touched 1.34 TFLOPS, making it the first supercomputer to break the teraflop barrier, and it remained the fastest of its kind until the year 2000. It was also a very scalable supercomputer, built from the same commodity processors found in ordinary home computers.

2002 The Earth Simulator, built by NEC, was designed to simulate the world's climate on land, at sea and in the atmosphere. It had 640 nodes of 8 vector processors each (5,120 in all) and a sustained performance of around 36 TFLOPS, which kept it at the top of the rankings until 2004.

2005 The first machine in the IBM Blue Gene supercomputer series was the Blue Gene/L, which reached a peak performance of 280 TFLOPS. There are four main Blue Gene projects and 27 supercomputers using the architecture, which scales to approximately 60,000 processors.

2008 The IBM Roadrunner is a hybrid supercomputer, with two different processor architectures (AMD Opteron CPUs and IBM PowerXCell 8i processors) working in tandem. It uses Red Hat Enterprise Linux and Fedora as its operating systems, and its peak performance is 1.456 PFLOPS.

2010 The Tianhe-1A was a record breaker in many ways. It was the first Chinese supercomputer to top the TOP500 list. With a performance of 2.566 PFLOPS, it was the fastest supercomputer in the world until 2011.

2011 The reigning champion among supercomputers is the K computer, a Japanese machine that touches performance rates of 8.162 PFLOPS. It uses 68,544 eight-core SPARC64 processors, and its construction is still being completed.

It is clear that, over the march of time, the configuration and strengths of each supercomputer model have served to inform better and more advanced successors. Another point to note is that the supercomputer of yesteryear is less powerful than the desktop of today!

Linux: History and Introduction

Linux history
Linux is one of the most widely used operating systems and is free software that supports open-source development. Originally designed for Intel 80386 microprocessors, Linux now runs on a wide variety of computer architectures.

A Brief History

Unix was the third in a line of operating systems that began with CTSS and continued with MULTICS. A team of programmers led by Prof. Fernando J. Corbato at the MIT Computation Center wrote CTSS, the first operating system to support time-sharing. AT&T worked on the MULTICS operating system but pulled out of the project when it failed to meet its deadlines. Ken Thompson, Dennis Ritchie and Brian Kernighan at Bell Labs then used ideas from the MULTICS project to develop the first version of Unix.

MINIX was a Unix-like system released by Andrew Tanenbaum. Its source code was made available to users, but there were restrictions on modifying and distributing the software. On August 25, 1991, Linus Torvalds, then a second-year computer science student at the University of Helsinki, announced that he was going to write an operating system. Intending to replace MINIX, Torvalds started writing the Linux kernel, and with that announcement a success story had begun. Linux initially depended on the MINIX user space, but once the kernel was released under the GNU GPL, the GNU developers worked towards integrating Linux with the GNU components.

An Introduction to the Linux Operating System

The Unix-like operating system that uses the Linux kernel is known as the Linux operating system. In 1991, Linus Torvalds came up with the Linux kernel; after he started writing it, around 250 programmers contributed to the kernel code. Richard Stallman, the American software developer who founded the GNU project, created the GNU General Public License (GPL), under which Linux is distributed. The utilities and libraries of Linux come from the GNU operating system.

By the term ‘free software’ we mean that Linux can be copied and redistributed, in altered or unaltered form, without many restrictions. Each recipient of the Linux software is entitled to obtain its human-readable source code along with a notice granting permission to modify it. In other words, distributing the Linux software implies distributing a free-software license to its recipients. Linux also supports open-source development, meaning that all of its underlying source code can be freely accessed, modified, used and distributed.

A Linux distribution is a project that manages a collection of Linux software and the installation of the OS. It includes the system software and application software in the form of packages, together with the initial installation and configuration details. There are around 300 different Linux distributions; the most prominent include Red Hat, Fedora and Mandrake. Fedora Core came about after the ninth version of Red Hat Linux and is a rapidly updated distribution. Most Linux distributions support a diverse range of programming languages, including Perl, Python, Ruby and other dynamic languages, and Linux also supports a number of Java virtual machines and development kits, as well as C++ compilers.

Linux is a freely available OS based on the Linux kernel. It is an inexpensive and effective alternative to UNIX programs and utilities, and its open-source implementation enables any programmer to modify its code. Linux supports a multi-tasking, multi-user environment as well as copy-on-write functionality. The monolithic Linux kernel handles process control, networking and the file system, and device drivers are integrated into the kernel. The Linux operating system is equipped with libraries, compilers, text editors, a Unix shell and a windowing system, and it supports both command-line and graphical user interfaces. It is popularly used on servers, as well as on desktop computers, supercomputers, video game consoles and embedded systems. I have always enjoyed working on the Linux platform, have you?
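As a small illustration of the multi-tasking, Unix-shell side of Linux described above, here is a minimal C sketch (not taken from any particular distribution; the file name and the "uname -r" command are just illustrative choices) of how a shell launches a program: fork() duplicates the calling process using copy-on-write pages, the child replaces itself with a new program via execlp(), and the parent waits for it to finish.

/* spawn_demo.c: a minimal sketch of how a Linux shell runs a command.
 * Build with: cc spawn_demo.c -o spawn_demo
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate this process (copy-on-write) */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                     /* child: replace itself with "uname -r" */
        execlp("uname", "uname", "-r", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        _exit(127);
    }

    int status = 0;                     /* parent: wait for the child to exit */
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return EXIT_SUCCESS;
}

Compiled and run on a typical Linux system, this prints the kernel release and then the child's exit status; the same fork-and-exec sequence underlies every command you type at the shell.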