What are the Computer Generations?

First Generation (1951 to 1958)

First-generation computers used vacuum tubes to process information.
Programming was done in machine language. Memories were built with thin tubes of liquid mercury (delay lines) and magnetic drums. Operators entered data and programs in special code by means of punched cards. Internal storage was accomplished with a rapidly rotating drum, onto which a read/write device placed magnetic marks.
Because these computers relied on the vacuum tube, they were extremely large and heavy and generated a great deal of heat.
The First Generation begins with the commercial installation of the UNIVAC, built by Eckert and Mauchly. The UNIVAC's processor filled the entire space of a 20-by-40-foot room.

Second Generation (1959-1964)
Transistor, Limited Compatibility
The transistor replaced the vacuum tube used in the first generation. Second-generation computers were faster, smaller, and needed less cooling. These computers also used networks of magnetic cores instead of spinning drums for primary storage. The cores were small rings of magnetic material, strung together, in which data and instructions could be stored.
Computer programs also improved. COBOL, developed at the end of the 1950s, was already commercially available. Programs written for one computer could be transferred to another with minimal effort, and writing a program no longer required a full understanding of the computer's hardware.

Third Generation (1964-1971)
Integrated Circuits, Major Equipment Compatibility, Multiprogramming, Minicomputer
Third-generation computers emerged with the development of integrated circuits (silicon chips), on which thousands of electronic components are placed in miniaturized, integrated form. Computers again became smaller and faster, gave off less heat, and were more energy efficient.

Before the advent of integrated circuits, computers were designed for mathematical or business applications, but not both. Integrated circuits allowed computer manufacturers to increase the flexibility of programs and to standardize their models.

The IBM 360, one of the first commercial computers to use integrated circuits, could perform both numerical analysis and file management or processing. Customers could scale their 360 systems to larger IBM models and still run their current programs. Computers worked at such speed that they provided the ability to run more than one program simultaneously (multiprogramming).

Fourth Generation (1971 to 1981)
Microprocessor, Memory Chips, Microminiaturization
Two improvements in computer technology mark the beginning of the fourth generation: the replacement of magnetic-core memories by silicon-chip memories, and the placement of many more components on a single chip, the product of the microminiaturization of electronic circuits. The reduced size of the microprocessor and of memory chips made the personal computer (PC) possible.
In 1971, Intel Corporation, then a small semiconductor manufacturer located in Silicon Valley, introduced the first microprocessor: a 4-bit chip containing some 2,300 transistors in a space of approximately 4 x 5 mm. This first microprocessor, shown in Figure 1.14, was named the 4004.

Silicon Valley was an agricultural region south of San Francisco Bay which, owing to the concentration there of companies manufacturing silicon-based semiconductors and microprocessors, became from 1960 onward a fully industrialized area. It is now known throughout the world as the most important region for computer-related industries: software creation and component manufacturing.

Today a huge number of manufacturers of microcomputers, or personal computers, have emerged. Using different structures or architectures, they are literally fighting over the computing market, which has grown so much that it is one of the largest worldwide, especially since 1990, when surprising advances were made with the Internet.

This generation of computers was characterized by great technological advances made in a very short time. In 1977 the first microcomputers appeared, the most famous being those manufactured by Apple Computer, Radio Shack, and Commodore Business Machines. IBM joined the microcomputer market with its Personal Computer (Figure 1.15), whose name, PC, has remained a synonym for the category. Most importantly, it included a standardized operating system, MS-DOS (Microsoft Disk Operating System).

It is becoming more and more difficult to identify the generations of computers, because great advances and new discoveries no longer surprise us as they did in the middle of the 20th century. Some consider that the fourth and fifth generations have already ended, placing the fourth between 1971 and 1984 and the fifth between 1984 and 1990, with a sixth generation in development from 1990 to the present.

Following the trail of technological developments in computing and informatics, we can point out some dates and characteristics of what could be the fifth generation of computers.
Based on the great technological developments in microelectronics and software, such as CAD/CAM, CAE, CASE, artificial intelligence, expert systems, neural networks, chaos theory, genetic algorithms, optical fibers, telecommunications, and so on, the 1980s laid the groundwork for what may be known as the fifth generation of computers.
Two great technological advances mark the beginning of this generation: the creation in 1982 of the first supercomputer with parallel-processing capacity, designed by Seymour Cray, who had been experimenting with supercomputers since 1968 and founded Cray Research Inc. in 1976; and the Japanese government's announcement of its "fifth generation" project, established in agreement with six of the largest Japanese computer companies and scheduled to conclude in 1992.

Parallel processing is carried out on computers that can work simultaneously with several microprocessors. Although in theory work spread across several microprocessors should be much faster, special programming is needed to assign the different tasks of a single process to the various microprocessors involved.
Memory must also be adapted so that it can serve the processors' requests at the same time. To solve this problem, shared-memory modules were designed, capable of allocating cache areas to each processor.

According to this project, which the most technologically advanced countries joined so as not to be left behind by Japan, the main feature would be the application of artificial intelligence (AI). Computers of this generation contain a large number of microprocessors working in parallel and can recognize voice and images. They can also communicate in natural language, and they will gradually acquire the ability to make decisions through learning processes based on expert systems and artificial intelligence.

Information storage is done in magneto-optical devices with capacities of tens of gigabytes; the DVD (Digital Versatile Disc, originally Digital Video Disc) is established as the standard for video and sound storage; and data storage capacity grows exponentially, making it possible to store more information in one of these units than was held in the entire Library of Alexandria. Current microprocessors use very-high- and ultra-high-integration technologies, called VLSI (Very Large Scale Integration) and ULSI (Ultra Large Scale Integration).
However, regardless of these "miracles" of modern technology, the point where the fifth generation ends and the sixth begins cannot be distinguished. Personally, we have not seen the full realization of what was set out in the Japanese project, owing to the failure, perhaps only momentary, of artificial intelligence.

The one forecast that has held without interruption over the course of this generation is connectivity between computers, which since 1994, with the advent of the Internet and the World Wide Web, has acquired vital importance in large, medium, and small businesses, as well as among private computer users.

The purpose of artificial intelligence is to equip computers with "human intelligence" and the ability to reason in order to find solutions. Another fundamental design factor is the computer's capacity to recognize patterns and processing sequences it has encountered before (heuristic programming), which allows it to remember previous results and include them in its processing. In essence, the computer learns from its own experience: it uses its original data to reach an answer through reasoning, and it retains those results for later processing and decision-making tasks.
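The "remember previous results and reuse them" idea described above corresponds, in its simplest programming form, to memoization. The sketch below is a minimal illustration only; the function names (`solve`, `expensive_reasoning`) and the squaring task are invented for the example, not part of any real AI system.

```python
# The cache stands in for the computer's stored "experience".
cache = {}

def expensive_reasoning(n):
    # Placeholder for a costly reasoning or computation step (hypothetical).
    return n * n

def solve(problem):
    if problem in cache:       # Has this pattern been seen before?
        return cache[problem]  # Reuse the earlier result directly.
    result = expensive_reasoning(problem)
    cache[problem] = result    # Retain the result for later decisions.
    return result

print(solve(12))  # first call: computed, then stored
print(solve(12))  # second call: answered from the cache
```

A first call computes and stores the answer; a repeated call is served from the cache, which is the essence of the heuristic "learning from previous results" the text describes.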

Since the sixth generation of computers has supposedly been under way since the early 1990s, we should at least outline the features computers of this generation should have, along with some technological advances of the last decade of the 20th century and what is expected in the 21st. Computers of this generation combine parallel/vector architectures, with hundreds of vector microprocessors working at the same time; machines capable of performing more than a million million floating-point arithmetic operations per second (teraflops) have been created; and wide area networks (WANs) will continue to grow exorbitantly, using communication media such as optical fiber and satellites with impressive bandwidths. The technologies of this generation have already been developed or are in development. Among them: distributed artificial intelligence, chaos theory, fuzzy systems, holography, optical transistors, and so on.
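To put "a million million floating-point operations per second" in concrete terms, here is a back-of-the-envelope calculation. The workload figure of 10**15 operations is invented purely for illustration.

```python
# One teraflop = a million million (10**12) floating-point operations
# per second. These lines are simple arithmetic, not a benchmark.
TERAFLOPS = 10**12
assert TERAFLOPS == 1_000_000 * 1_000_000

ops = 10**15                  # a hypothetical workload of 10^15 operations
seconds = ops / TERAFLOPS     # time on a 1-teraflop machine
print(seconds)                # 1000.0 seconds, i.e. under 17 minutes
```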
