Computer architecture

 

  • The following technologies are used in larger companies like Intel, and were estimated in 2002[14] to account for 1% of all of computer architecture:

    Macroarchitecture: architectural layers more abstract than microarchitecture
    Assembly instruction set architecture: A smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine language for different implementations.

  • Performance
    Modern computer performance is often described in instructions per cycle (IPC), which measures the efficiency of the architecture at any clock frequency; a higher IPC means the computer completes more work per clock cycle, and is therefore faster at a given frequency.
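The relationship can be made concrete with a toy calculation (the chip figures below are invented for illustration, not taken from the source):

```python
def instructions_per_second(ipc, clock_hz):
    """Throughput in instructions/second: IPC multiplied by clock frequency."""
    return ipc * clock_hz

# Hypothetical chips: a higher IPC can beat a higher clock rate.
chip_a = instructions_per_second(ipc=2.0, clock_hz=3.0e9)  # 6.0e9 instr/s
chip_b = instructions_per_second(ipc=1.0, clock_hz=4.0e9)  # 4.0e9 instr/s
print(chip_a > chip_b)  # True
```

This is why clock frequency alone is a poor proxy for performance: the slower-clocked chip above finishes more instructions per second.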

  • A new improved version of the chip can use microcode to present the exact same instruction set as the old chip version, so all software targeting that instruction set will run on the new chip without needing changes.

  • The term “architecture” fits, because the functions must be provided for compatible systems, even if the detailed method changes.

  • Recent processor designs have shown this emphasis as they put more focus on power efficiency rather than cramming as many transistors into a single chip as possible.

  • A good ISA compromises between programmer convenience (how easy the code is to understand), size of the code (how much code is required to do a specific action), cost of the computer to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time).

  • Design goals
    The exact form of a computer system depends on the constraints and goals.

  • that may be implemented at the logic-gate level, or even at the physical level if the design calls for it.

  • Subcategories
    The discipline of computer architecture has three main subcategories:[14]

    Instruction set architecture (ISA): defines the machine code that a processor reads and acts upon as well as the word size, memory address modes, processor registers, and data type.

  • Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (the time it takes for information to travel from one node to another), and throughput.

  • Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts.

  • Microcode can present a variety of instruction sets for the same underlying chip, allowing it to run a wider variety of software.
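As a rough sketch (the instruction names and micro-operations here are invented, not those of any real chip), microcode can be pictured as a translation table mapping each externally visible instruction set onto a shared pool of micro-operations:

```python
# Two hypothetical front-end instruction sets decoded onto one set of
# micro-operations, the way microcode lets one chip present several ISAs.
MICROCODE = {
    "isa_a": {"ADD": ["load_ops", "alu_add", "store_result"],
              "SUB": ["load_ops", "alu_sub", "store_result"]},
    "isa_b": {"PLUS":  ["load_ops", "alu_add", "store_result"],
              "MINUS": ["load_ops", "alu_sub", "store_result"]},
}

def decode(isa, instruction):
    """Return the micro-operation sequence the hardware actually runs."""
    return MICROCODE[isa][instruction]

# Different visible instructions, identical underlying micro-ops:
print(decode("isa_a", "ADD") == decode("isa_b", "PLUS"))  # True
```

Software written against either instruction set runs on the same underlying hardware; only the decode table differs.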

  • For example, one system might handle scientific applications quickly, while another might render video games more smoothly.

  • Computer organization and features also affect power consumption and processor cost.

  • Subsequently, Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, “Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.”

  • The case of instruction set architecture can be used to illustrate the balance of these competing factors.

  • Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks.

  • Later, computer users came to use the term in many less explicit ways.

  • Computer organization also helps plan the selection of a processor for a particular project.

  • Design validation tests the computer as a whole to see if it works in all situations and all timings.

  • The increased complexity from a large instruction set also creates more room for unreliability when instructions interact in unexpected ways.

  • Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs.

  • The earliest computer architectures were designed on paper and then directly built into the final hardware form.

  • Instruction set architecture
    An instruction set architecture (ISA) is the interface between the computer’s software and hardware, and can also be viewed as the programmer’s view of the machine.[17]

  • The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt).
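As a worked example of the unit (the processor figures are hypothetical, chosen only to show the arithmetic):

```python
def mips_per_watt(instructions_per_second, watts):
    """MIPS/W: millions of instructions per second per watt consumed."""
    return (instructions_per_second / 1e6) / watts

# A hypothetical processor executing 2 billion instructions/s while drawing 10 W:
print(mips_per_watt(2e9, 10.0))  # 200.0 MIPS/W
```

A design that doubles throughput while tripling power draw would score worse on this metric despite being "faster".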

  • Sometimes certain tasks need additional components as well.

  • Implementation is usually not considered architectural design, but rather hardware design engineering.

  • While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept.

  • Circuit implementation carries out transistor-level design of basic elements (e.g., gates, multiplexers, latches) as well as of some larger blocks (ALUs, caches, etc.).

  • [18] This is because each transistor added to a new chip draws power and requires new pathways to be built to deliver it.

  • More complex instruction sets enable programmers to write more space efficient programs, since a single instruction can encode some higher-level abstraction (such as the x86 Loop instruction).
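For instance, in 32-bit x86 the short form of the `loop` instruction encodes in 2 bytes, while the equivalent decrement-and-branch pair takes 3; a sketch of the comparison (byte counts are for the short encodings and simplify the real instruction formats):

```python
# Encoded sizes (bytes) of short 32-bit x86 forms: one denser complex
# instruction versus the equivalent pair of simpler instructions.
complex_loop = {"loop label": 2}              # decrement ecx + branch, fused
simple_loop = {"dec ecx": 1, "jnz label": 2}  # same effect, spelled out

print(sum(complex_loop.values()))  # 2 bytes
print(sum(simple_loop.values()))   # 3 bytes
```

Multiplied over every loop in a large program, such savings were significant when memory was scarce, which is part of why dense instruction sets were favored.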

  • Interrupt latency is the guaranteed maximum response time of the system to an electronic event (like when the disk drive finishes moving some data).

  • Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse, but makes throughput better.
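A minimal timing model makes the pipelining trade-off visible (an idealized sketch that ignores stage-register overhead, hazards, and stalls):

```python
def run_time(n_instr, n_stages, stage_time):
    """Total time to finish n_instr instructions, without and with pipelining."""
    unpipelined = n_instr * n_stages * stage_time      # one instruction at a time
    pipelined = (n_stages + n_instr - 1) * stage_time  # fill once, then 1/cycle
    return unpipelined, pipelined

# 100 instructions through 5 stages of 1 ns each:
print(run_time(100, 5, 1))  # (500, 104): far better throughput when pipelined
# A single instruction still needs all 5 stages (5 ns), and real pipelines add
# latch overhead per stage, so per-instruction latency tends to get worse.
```

The model shows throughput approaching one instruction per stage-time as the instruction count grows, while latency for any one instruction never improves.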

  • They may need to optimize software in order to gain the most performance for the lowest price.

 

Works Cited

1. Dragoni, Nicole (n.d.). “Introduction to peer to peer computing” (PDF). DTU Compute – Department of Applied Mathematics and Computer Science. Lyngby, Denmark.
2. Clements, Alan. Principles of Computer Hardware (Fourth ed.). p. 1. “Architecture describes the internal organization of a computer in an abstract way; that is, it defines the capabilities of the computer and its programming model. You can have two computers that have been constructed in different ways with different technologies but with the same architecture.”
3. Hennessy, John; Patterson, David. Computer Architecture: A Quantitative Approach (Fifth ed.). p. 11. “This task has many aspects, including instruction set design, functional organization, logic design, and implementation.”
4. Williams, F. C.; Kilburn, T. (25 September 1948). “Electronic Digital Computers”. Nature. 162 (4117): 487. Bibcode:1948Natur.162..487W. doi:10.1038/162487a0. S2CID 4110351.
5. Faber, Susanne (2000). “Konrad Zuses Bemuehungen um die Patentanmeldung der Z3” [Konrad Zuse’s efforts toward the patent application for the Z3].
6. Neumann, John (1945). First Draft of a Report on the EDVAC. p. 9.
7. Reproduced in B. J. Copeland (Ed.), Alan Turing’s Automatic Computing Engine, Oxford University Press, 2005, pp. 369–454.
8. Johnson, Lyle (1960). “A Description of Stretch” (PDF). p. 1. Retrieved 7 October 2017.
9. Buchholz, Werner (1962). Planning a Computer System. p. 5.
10. “System 360, From Computers to Computer Systems”. IBM100. 7 March 2012. Archived from the original on 3 April 2012. Retrieved 11 May 2017.
11. Hellige, Hans Dieter (2004). “Die Genese von Wissenschaftskonzeptionen der Computerarchitektur: Vom ‘system of organs’ zum Schichtmodell des Designraums” [The genesis of scientific conceptions of computer architecture: from the “system of organs” to the layered model of the design space]. Geschichten der Informatik: Visionen, Paradigmen, Leitmotive. pp. 411–472.
12. ACE underwent seven paper designs in one year, before a prototype was initiated in 1948. [B. J. Copeland (Ed.), Alan Turing’s Automatic Computing Engine, OUP, 2005, p. 57]
13. Schmalz, M. S. “Organization of Computer Systems”. UF CISE. Retrieved 11 May 2017.
14. Hennessy, John L.; Patterson, David A. Computer Architecture: A Quantitative Approach (Third ed.). Morgan Kaufmann Publishers.
15. Laplante, Phillip A. (2001). Dictionary of Computer Science, Engineering, and Technology. CRC Press. pp. 94–95. ISBN 0-8493-2691-5.
16. Null, Linda (2019). The Essentials of Computer Organization and Architecture (5th ed.). Burlington, MA: Jones & Bartlett Learning. p. 280. ISBN 9781284123036.
17. Martin, Milo. “What is computer architecture?” (PDF). UPENN. Retrieved 11 May 2017.
18. “Integrated circuits and fabrication” (PDF). Retrieved 8 May 2017.
19. “Exynos 9 Series (8895)”. Samsung. Retrieved 8 May 2017.
20. “Measuring Processor Power TDP vs ACP” (PDF). Intel. April 2011. Retrieved 5 May 2017.
21. “History of Processor Performance” (PDF). cs.columbia.edu. 24 April 2012. Retrieved 5 May 2017.
22. Hennessy, John L.; Patterson, David (2006). Computer Architecture: A Quantitative Approach (Fourth ed.). Morgan Kaufmann. ISBN 978-0-12-370490-0.
23. Barton, Robert S. “Functional Design of Computers”. Communications of the ACM. 4 (9): 405 (1961).
24. Barton, Robert S. “A New Approach to the Functional Design of a Digital Computer”. Proceedings of the Western Joint Computer Conference, May 1961, pp. 393–396. About the design of the Burroughs B5000 computer.
25. Bell, C. Gordon; Newell, Allen (1971). Computer Structures: Readings and Examples. McGraw-Hill.
26. Blaauw, G. A.; Brooks, F. P., Jr. “The Structure of System/360, Part I: Outline of the Logical Structure”. IBM Systems Journal. 3 (2): 119–135, 1964.
27. Tanenbaum, Andrew S. (1979). Structured Computer Organization. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-148521-0.

Photo credit: https://www.flickr.com/photos/clairity/3104672186/