Evolution of Computer Technology
How Concepts Like RISC, Pipelining, Cache Memory, and Virtual Memory Have Evolved to Improve System Performance
Microprocessors are among the most important technical achievements of our era: their steadily increasing computing power and complexity give them a major role in modern society. They are used mainly to control and monitor machine tools, cars, aircraft, and consumer electronics. Each new design represents a continuous challenge for the engineers and technologists striving to bring products to market. Monolithic microprocessors have overtaken most other classes of computers, absorbing earlier generations in phases: they matched minicomputers in the 1980s, mainframe computers in the 1990s, and supercomputers at the beginning of this century. These devices exploit every available technological innovation in their evolution. The story of microprocessors began in 1971 with the Intel 4004, designed to build calculators; the technology was soon applied elsewhere, and microprocessors have since become powerful computers that have taken over the other ranges of machines.
The evolution of microprocessors continues at a sustained rate. Their complexity increases steadily, at roughly 37% per year, with performance rising at a similar rate. By 1985, the VLSI processor had moved from a market pushed by advances in technology to one in which the requirements of end users determine the rate of evolution. This shift drove the development of ever more powerful computers, and the growing market size led companies to set aside large amounts of money for technological improvement.
In the course of their evolution, microprocessors have adopted technological innovations conceived for previous generations of computers, such as pipelined architecture, cache memory, super-scalar architecture, RISC, out-of-order execution, and branch prediction. These features appear in the most advanced processors on the market today. The microprocessor has therefore become the central component of computer technology, making processors built with MSI technology obsolete and far from the economic optimum.
Computers operate on a set of instructions known as a program. Processors are built from logic gates that execute these instructions; to keep the number of logic gates small, the complexity of the commands the CPU recognizes must be reduced. In early computer systems, the wiring itself determined the problem a computer could solve: to alter a program, one had to rewire the circuitry, a difficult task. The next generation consisted of programmable computer systems. A program was a set of rows of holes, each row representing one operation in the execution of the program, and a programmer plugged a wire into a particular socket to obtain the desired instruction.
The concept of RISC computers came from John Cocke in 1975. RISC stands for Reduced Instruction Set Computer. RISC computers are well balanced and optimize the use of each part of the hardware. They possess special features that increase the speed and effective memory use of a machine: a large number of registers to decrease memory accesses; few instruction formats, which make instruction decoding simple; and simpler sequencing, since all instructions execute in the same number of clock cycles. Only a few specific instructions access memory, and these perform move operations between memory and registers (Grossschadl, 2003). The hardware/software interface of a RISC processor is also lower than in CISC processors, a consequence of reducing the functions performed by the hardware and increasing the complexity of the functions executed by software. Programs for a RISC processor are larger than their CISC counterparts, because the simplicity of RISC instructions gives compilers room to optimize programs. This optimization, together with the short clock cycle, gives RISC computers higher overall performance than CISC. The hardware of a RISC microprocessor is also simpler, since its control section is entirely hardwired and occupies roughly the same area as its data path (Deng, 2008).
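The load/store discipline described above can be illustrated with a short sketch (in Python, with hypothetical opcode names): only LOAD and STORE touch memory, while every ALU operation works register-to-register on a large register file.

```python
# A minimal sketch of a RISC-style load/store machine. The opcodes and
# encoding here are illustrative, not any real instruction set: only
# LOAD and STORE access memory; ADD is register-to-register.

def run(program, memory, num_regs=8):
    regs = [0] * num_regs              # large register file cuts memory traffic
    for op, *args in program:
        if op == "LOAD":               # only LOAD/STORE touch memory
            rd, addr = args
            regs[rd] = memory[addr]
        elif op == "STORE":
            rs, addr = args
            memory[addr] = regs[rs]
        elif op == "ADD":              # fixed-format, register-to-register
            rd, ra, rb = args
            regs[rd] = regs[ra] + regs[rb]
        else:
            raise ValueError(f"unknown opcode {op}")
    return memory

# Compute memory[2] = memory[0] + memory[1]
mem = {0: 5, 1: 7, 2: 0}
prog = [("LOAD", 0, 0), ("LOAD", 1, 1), ("ADD", 2, 0, 1), ("STORE", 2, 2)]
run(prog, mem)
print(mem[2])   # prints 12
```

Note that a CISC machine could express the same computation as a single memory-to-memory add; the RISC version trades instruction count for simpler, uniform instructions that decode quickly.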
Pipelined execution is a technique that splits the execution of each instruction into several sub-steps. Separate hardware blocks execute these steps, each stage of the pipeline performing one operation: an instruction takes its data from an input register and places its result in an output register. Throughout the process, the registers hold a queue of instructions in execution. The hardware complexity of a pipeline is several times greater than that needed for sequential execution, and the major disadvantage of the pipeline lies in dependencies.
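The benefit of overlapping the sub-steps can be seen in a back-of-the-envelope timing sketch (the stage names are illustrative): with k stages and n independent instructions, a pipeline needs n + k - 1 cycles instead of the n * k cycles of purely sequential execution.

```python
# Toy comparison of sequential vs. pipelined instruction timing.
# Assumes every stage takes one clock cycle and no dependencies stall
# the pipe -- the ideal case the text describes.

STAGES = ["fetch", "decode", "execute", "write-back"]

def sequential_cycles(n, k=len(STAGES)):
    return n * k            # each instruction runs all stages alone

def pipelined_cycles(n, k=len(STAGES)):
    return n + k - 1        # after filling, one instruction completes per cycle

print(sequential_cycles(8))  # prints 32
print(pipelined_cycles(8))   # prints 11
```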
There are two main dependencies: data dependency and instruction dependency. A data dependency arises when a command in the pipeline must fetch data that has not yet been computed, because that data is produced by another command deeper in the pipe. An instruction dependency occurs when the computer does not know which instruction to fetch next, because a conditional instruction has not yet been evaluated. The basic solution to both is to stall the offending instruction in the pipe until the missing data has been computed.
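A data dependency and the stall it forces can be sketched as follows (a simplified model: each instruction is a destination register plus source registers, and LATENCY is an assumed number of cycles before a result can be read):

```python
# Hypothetical sketch: count the stall cycles a simple in-order pipeline
# inserts when an instruction reads a register whose value is still
# being computed (a data dependency).

LATENCY = 2                     # assumed cycles before a result is readable

def count_stalls(instrs):
    ready_at = {}               # register -> cycle its value becomes ready
    cycle = stalls = 0
    for dest, sources in instrs:
        start = cycle
        for s in sources:       # wait for any operand still in flight
            start = max(start, ready_at.get(s, 0))
        stalls += start - cycle
        cycle = start + 1       # issue one instruction per cycle
        ready_at[dest] = cycle + LATENCY - 1
    return stalls

# r2 = r0 + r1; r3 = r2 + r1 -> the second instruction must wait for r2
print(count_stalls([("r2", ["r0", "r1"]), ("r3", ["r2", "r1"])]))  # prints 1
```

With independent instructions the same model reports zero stalls, which is why compilers try to separate dependent instructions.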
Virtual registers are one solution for eliminating data dependencies. The location within a pipeline register that stores the value of an instruction's operand can be accessed directly. When an instruction is still awaiting data that has not yet been computed, the position that will hold this value acts as a virtual register, cancelling the need to fetch the data from memory. Once the instructions that compute the data have done their work, the results are loaded in parallel into the various virtual registers (Schlansker & Rau, 2000). This elimination of repeated data fetches from memory increases the performance of the computer.
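The renaming step behind this idea can be sketched briefly: each result is assigned a fresh virtual register, so a later write to the same architectural register never disturbs a value an earlier instruction is still waiting for. (The register names here are illustrative.)

```python
# A sketch of register renaming, the mechanism behind "virtual
# registers": every destination gets a fresh virtual register, and
# sources are rewritten to the current name of the value they need.

def rename(instrs):
    mapping = {}                                   # architectural -> virtual reg
    next_vr = 0
    renamed = []
    for dest, sources in instrs:
        srcs = [mapping.get(s, s) for s in sources]  # read current names
        mapping[dest] = f"v{next_vr}"                # allocate a fresh register
        next_vr += 1
        renamed.append((mapping[dest], srcs))
    return renamed

# r1 is written twice; after renaming the two writes no longer collide.
prog = [("r1", ["r0"]), ("r2", ["r1"]), ("r1", ["r3"])]
for dest, srcs in rename(prog):
    print(dest, srcs)
```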
One can also reduce instruction dependency by making a hypothesis about the behaviour of a conditional branch. The better the hypothesis, the greater the performance improvement. When the hypothesis proves false, the instructions fetched into the pipe are dropped. To allow back-tracking, instructions fetched on a hypothesis are marked as conditional; they do not modify the program-visible state of the computer.
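One common hypothesis scheme, not named in the text but widely used, is a two-bit saturating counter; a minimal sketch:

```python
# A two-bit saturating-counter branch predictor: states 0-1 predict
# "not taken", states 2-3 predict "taken". The counter moves one step
# toward each actual outcome, so a single anomaly (e.g. a loop exit)
# does not flip a strongly held prediction.

def predict(outcomes, state=2):
    correct = 0
    for taken in outcomes:
        prediction = state >= 2
        if prediction == taken:
            correct += 1
        # move toward the actual outcome, saturating at 0 and 3
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct

# A loop branch: taken nine times, then falls through once.
print(predict([True] * 9 + [False]))   # prints 9 (9 of 10 correct)
```

Only the final fall-through is mispredicted; those wrongly fetched instructions are the ones a real pipeline would drop.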
Super-scalar execution is another technique; it places several pipelines in parallel to increase the efficiency of the computer. These pipelines may be specialized or identical. Some computers use one pipeline for integer computation and another for floating point; in such a design, each pipe fetches only the instructions it is able to execute, and the specialized pipelines use different sets of registers with no dependence between them. If the pipelines are identical, resources can be shared, but many conflicts and dependencies can occur; it is therefore vital to extend the notion of virtual registers to all pipelines (Yeager, 1996).
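The payoff of parallel pipelines can be sketched with a simplified dual-issue model (an assumption for illustration: two adjacent instructions issue together only when the second does not read the first one's destination):

```python
# A sketch of dual-issue (super-scalar) scheduling. Each instruction is
# (dest_reg, source_regs); a pair issues in one cycle only when the
# second instruction does not depend on the first.

def dual_issue_cycles(instrs):
    cycles = i = 0
    while i < len(instrs):
        cycles += 1
        if i + 1 < len(instrs):
            dest, _ = instrs[i]
            _, sources = instrs[i + 1]
            if dest not in sources:     # independent: issue the pair together
                i += 2
                continue
        i += 1                          # dependent or last: issue alone
    return cycles

indep = [("r1", ["r0"]), ("r2", ["r0"]), ("r3", ["r0"]), ("r4", ["r0"])]
dep   = [("r1", ["r0"]), ("r2", ["r1"]), ("r3", ["r2"]), ("r4", ["r3"])]
print(dual_issue_cycles(indep))   # prints 2: two pairs issue together
print(dual_issue_cycles(dep))     # prints 4: every pair conflicts
```

The dependent chain gains nothing from the second pipeline, which is why renaming and out-of-order techniques matter so much for super-scalar designs.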
Super-pipelining is an efficiency technique in which designers increase the depth of the pipeline, reducing the amount of work performed at each stage, even though deeper pipes worsen the penalty of dependencies. This evolution shortens the clock cycle of the computer, so instructions enter the pipeline at a higher rate; the computer executes more instructions per second and its performance increases.
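A rough calculation shows the trade-off (all numbers here are illustrative assumptions): halving the work per stage doubles the stage count but also halves the clock period, so throughput nearly doubles on long instruction streams, at the cost of a longer fill time.

```python
# Back-of-the-envelope super-pipelining model: the clock period equals
# the delay of one stage, and a k-stage pipe needs n + k - 1 cycles for
# n instructions (ignoring stalls).

def throughput(n_instr, stages, stage_delay_ns):
    cycles = n_instr + stages - 1          # fill time + one completion per cycle
    total_ns = cycles * stage_delay_ns
    return n_instr / total_ns              # instructions per nanosecond

base = throughput(1000, stages=4, stage_delay_ns=2.0)   # shallow, slow clock
deep = throughput(1000, stages=8, stage_delay_ns=1.0)   # deep, fast clock
print(round(deep / base, 2))   # prints 1.99 -- nearly double
```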
In summary, the evolution of the microprocessor is an exciting technical story. Over time, microprocessors have substituted for the other classes of processors. Today, a processor is a set of sub-units working on a single program, and in the future we can expect chips holding several processors working in parallel on several program threads. Continued growth in the technology should lead to very impressive computing power that produces fine results.
Grossschadl, J. (2003). Architectural support for long integer modulo arithmetic on RISC-based smart cards. The International Journal of High Performance Computing Applications, 17(2), 135.
Deng, Y. (2008). RISC: A resilient interconnection network for scalable cluster storage systems. Journal of Systems Architecture, 54(1/2), 70.
Yeager, K. C. (1996). The MIPS R10000 superscalar microprocessor. IEEE Micro, 16(2), 28-40.
Schlansker, M. S., & Rau, B. R. (2000). EPIC: Explicitly parallel instruction computing. IEEE Computer, 33(2), 37-45.