Tuesday, April 2, 2019
Evolution of Computer Technology in Last 25 Years
The evolution of computing technology can be roughly classified into six generations. The physical size of computers decreased significantly from the first-generation vacuum tube computers to the third-generation computers based on integrated circuit technology. Fourth- and fifth-generation computer technology increased the efficiency of computer chips by developing very large scale integration (VLSI) and ultra large scale integration (ULSI) technology. (Haldya, 1999) During the fifth generation of computing, the concept of using multiple computer chips to solve the same problem flourished, which was based on the earlier design of parallel computing developed during the fourth generation. With improved hardware, increased network bandwidth, and the development of more efficient algorithms, massively parallel architectures allowed fifth-generation computers to increase the efficiency of computing significantly. (Drako, 1994) This research paper mainly discusses how computer technology evolved from the close of the fifth generation to current-day sixth-generation computers.

The improvement in microprocessor chip technology allowed millions of transistors to be placed on a single integrated chip, which opened the generation of computers based on ultra large scale integration, or ULSI. The 64-bit microprocessor was developed during this time and became the fifth-generation chip we mostly use today. Even the older fourth-generation chip architecture concepts such as Reduced Instruction Set Computers (RISC) and Complex Instruction Set Computers (CISC) benefited from the improvement of ULSI technology. During the fourth-generation period, microprocessors were commonly classified into RISC- or CISC-type architectures. The differences between RISC and CISC were very clearly distinguishable.
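The distinction can be illustrated with a toy instruction interpreter (a hypothetical sketch, not from the paper; the opcodes and register names are invented for illustration): a CISC-style machine finishes a multiplication in one complex instruction, while a RISC-style machine synthesizes the same result from several simple additions, trading instruction memory for hardware simplicity.

```python
# Toy register machine illustrating the RISC/CISC code-density tradeoff.
# All instruction and register names here are hypothetical.

def run(program, regs):
    """Execute a list of (opcode, dest, src) register instructions."""
    for op, dst, src in program:
        if op == "ADD":
            regs[dst] += regs[src]
        elif op == "MUL":
            regs[dst] *= regs[src]
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

# CISC style: multiply 3 * 4 with a single complex instruction.
cisc = [("MUL", "a", "b")]
print(run(cisc, {"a": 3, "b": 4})["a"], len(cisc))   # result 12, 1 instruction

# RISC style: the same product via repeated addition (four ADDs),
# so the program occupies more memory but each instruction is simple.
risc = [("ADD", "acc", "a")] * 4
print(run(risc, {"acc": 0, "a": 3, "b": 4})["acc"], len(risc))  # result 12, 4 instructions
```

The longer RISC program is the "more memory" cost mentioned above; the single MUL is the "more transistors" cost, since real hardware needs a multiplier circuit to execute it in one instruction.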
RISC has a very simple set of instructions, which requires a low number of transistors but needs more memory to perform a task. CISC has a larger instruction set available compared to RISC, which requires more transistors but less memory space. (Hennessy, 1991) Due to limited computing resources, each programmer chose the particular chip type suited to delivering the application. However, with the advancement of microprocessors, the 64-bit chip now has more transistors and memory address access available for computing. Today, the need to differentiate what used to be the two main categories of microprocessor is almost pointless because of the level of complexity in modern-day 64-bit chips for both CISC and RISC. Many new CISC chips behave like RISC with increased processor clock cycles, while new RISC chips have an increased number of instructions available, like CISC (Cole, 2015).

Two of the most important hardware techniques used to improve performance during the fourth and fifth generations of computer development have been pipelining and caches. Both techniques rely on using more devices to achieve higher performance. Pipelining may have been available only to some mainframe computers and supercomputers during fourth-generation computing; however, the technique became very common within computer architecture during fifth-generation computing, which became the baseline for the sixth-generation computer that uses decentralized computing processes to perform artificial intelligence and neural network computing. Pipelining improves the throughput of a machine without changing the basic cycle time and increases performance by exploiting instruction-level parallelism. (Hennessy, 1991) Instruction-level parallelism is available when instructions in a sequence are independent and thus can be executed in parallel by overlapping.
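The throughput gain from overlapping independent instructions can be sketched with a back-of-the-envelope cycle count (an illustrative model, not from the paper; the five-stage split and the no-stall assumption are textbook simplifications):

```python
# Why pipelining raises throughput: with k stages and no stalls, the
# first instruction takes k cycles to fill the pipeline, then one
# independent instruction completes per cycle.

STAGES = 5  # e.g. fetch, decode, execute, memory access, write-back

def unpipelined_cycles(n, stages=STAGES):
    # Each instruction occupies the whole datapath before the next starts.
    return n * stages

def pipelined_cycles(n, stages=STAGES):
    # Fill the pipeline once, then overlap: one completion per cycle,
    # assuming the instructions are independent (instruction-level
    # parallelism) and nothing stalls.
    return stages + (n - 1)

for n in (10, 100, 1000):
    speedup = unpipelined_cycles(n) / pipelined_cycles(n)
    print(f"{n} instructions: {speedup:.2f}x speedup")
```

As the instruction count grows, the speedup approaches the stage count (5x here), which is why deeper pipelines were so attractive; dependent instructions and stalls are what keep real machines below that bound.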
Unarguably, pipelining technology led to faster speeds and better performance, but hardware performance couldn't keep up with the demand for even faster hardware that could help applications requiring very fast processing of large amounts of data or critical commercial transactions. In addition to advances in pipelining, advancements in cache memory technology also significantly enhanced the performance of how computers access data. Creating a small portion of memory either in the actual processor or very close to it decreased the need to fetch data directly from main memory. This technique made cache memories one of the most important ideas in computer architecture. (Uri, 2010)

Cache memories substantially improve performance by keeping frequently used data close to the processor. Cache memories were first used in the third-generation computers of the late 60s and early 70s, both in large machines and minicomputers. From the fourth generation on, virtually every microprocessor has included support for a cache. Although large caches can certainly improve performance, total cache size, associativity, and block size all directly impact performance and have optimal values that depend on the details of a design. (Hennessy, 1991) Just like microprocessors and pipelining, cache technology has improved significantly over the last two decades. Traditional cache architectures are demand-fetch: cache lines are brought into the cache only when they are explicitly required by the process. Prefetching increases the efficiency of this process by anticipating that some memory will be used in the near future and proactively fetching it into the cache. Earlier prefetching was done either through software or hardware prefetching.
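The difference between demand-fetch and prefetching can be sketched with a toy direct-mapped cache (a hypothetical illustration, not from the paper; the sizes and the simple next-line prefetch policy are assumptions chosen to keep the model small):

```python
# Toy direct-mapped cache with an optional next-line prefetcher,
# illustrating why prefetching raises hit rates on sequential scans.

class DirectMappedCache:
    def __init__(self, num_lines=8, line_size=4, prefetch=False):
        self.num_lines = num_lines      # number of cache lines
        self.line_size = line_size      # words per line
        self.prefetch = prefetch        # also fetch line n+1 on a miss?
        self.tags = [None] * num_lines  # tag stored at each line index
        self.hits = 0
        self.misses = 0

    def _install(self, line_addr):
        index = line_addr % self.num_lines
        self.tags[index] = line_addr // self.num_lines

    def access(self, word_addr):
        line_addr = word_addr // self.line_size
        index = line_addr % self.num_lines
        tag = line_addr // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self._install(line_addr)          # demand fetch
            if self.prefetch:
                self._install(line_addr + 1)  # anticipate the next line

# Streaming access pattern: 64 sequential word addresses.
for prefetch in (False, True):
    cache = DirectMappedCache(prefetch=prefetch)
    for addr in range(64):
        cache.access(addr)
    print(f"prefetch={prefetch}: {cache.hits} hits, {cache.misses} misses")
```

On this sequential scan the demand-fetch cache misses once per line (16 misses), while the next-line prefetcher hides every other one of those misses (8 misses), which is exactly the "anticipate the near future" behavior described above. Real prefetchers must also avoid polluting the cache when the guess is wrong.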
As the complexity of prefetching increases, some more recent research has looked at combining the imprecise future knowledge available to the compiler with the detailed run-time information available to hardware, such as a programmable prefetching engine consisting of a run-ahead table that is populated using explicit software instructions. (Srinivasan, 2011)

With such advancement in core computer technologies, the ability to process data and store information truly became increasingly decentralized. From the cloud to PC-over-IP technology, cheaper storage, faster processors, and higher-bandwidth wide area networks allowed the modern-day computer to work in collaboration rather than isolation. If the first through fifth generations focused on improving the efficiency of the hardware to meet the demands of software engineers, the current sixth generation is more about how humans interact with computers to improve human lives. Computers became smaller while still being sufficient to process the necessary applications by themselves or by using servers through the internetwork. Everything has become smarter, faster, smaller, and connected. With improved networks and parallel computing, sixth-generation computers are definitely getting closer to simulating how the human brain functions. Using basic algorithms, probability and statistics, and economic theories, new computer technology can simulate a human-like decision-making process to improve human lives and help solve more complex issues. In the sixth generation, we are actually experiencing the true potential of commercial Artificial Intelligence.

References

Cole, Bernard (2015). New CISC Computer Architecture Takes on RISC. EE Times. Retrieved from http://www.eetimes.com
Drako, Nikos (1995). An Overview of Computational Science. The Computational Science Education Project.
Haldya, Micky (1999). Computer Architecture. Biyani's Think Tanks, Chap. 5, 26-27.
Hennessy, John L. & Jouppi, Norman P. (1991). Computer Technology and Architecture: An Evolving Interaction. Computer, vol. 24, 18-29.
Srinivasan, James R. (2011). Improving Cache Utilization. Technical Report no. 800, 31-35.
Uri, Cohen (2010). From Caching to Space-based Architecture: The Evolution of Memory. Enterprise Systems Journal. Retrieved from https://esj.com/