Review Summary of Computer Organization Principles (Part 1) Introduction to Computer Systems

Chapter 1 Introduction to Computer Systems#

1.1 Computer Classification#

Analog Computers#

Characteristics: Numerical values are represented by continuous quantities, and the computation process is also continuous.

Digital Computers#

Characteristics: Values are represented digit by digit, and computation proceeds in discrete (non-continuous) steps.
Analog computers have limited precision and problem-solving capability, so their range of application is narrow. When people speak of electronic computers, they generally mean the widely used digital computers, which can be further divided into specialized computers and general-purpose computers.

  • Specialized computers are the most efficient, economical, and fastest computers, but they have poor adaptability.
  • General-purpose computers sacrifice efficiency, speed, and economy, but they have great adaptability.

1.2 A Brief History of Computer Development (Key Points)#

| Generation | Device Used | Time Period |
| --- | --- | --- |
| First Generation | Vacuum tube computers | 1946–1957 |
| Second Generation | Transistor computers | 1958–1964 |
| Third Generation | Small- and medium-scale integrated circuit computers | 1965–1971 |
| Fourth Generation | Large-scale and very-large-scale integrated circuit computers | 1972–1990 |
| Fifth Generation | Super-large-scale integrated circuit computers | 1991–present |

Since the birth of the computer in 1946, roughly every five years computing speed has increased about tenfold, reliability has improved about tenfold, and size has shrunk to about one tenth.

Moore's Law#

Proposed in 1965 by Gordon Moore (co-founder and former Chairman of Intel).

The area a transistor occupies on an integrated circuit shrinks by about 50% every 18 months. Equivalently, the performance of microprocessors doubles roughly every 18 months, while the price halves.
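Moore's Law turns into a quick back-of-the-envelope calculation. A minimal sketch (the function name and the exact 18-month doubling period are illustrative assumptions):

```python
def moores_law_factor(months: float, doubling_period: float = 18.0) -> float:
    """Performance multiplier after `months`, assuming performance
    doubles once every `doubling_period` months."""
    return 2.0 ** (months / doubling_period)

# After 3 years (36 months = two doubling periods), performance
# has quadrupled and, by the same reasoning, price has quartered.
print(moores_law_factor(36))  # 4.0
```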

1.3 Computer Performance Metrics#

Throughput#

  • Represents the amount of information a computer can process in a certain time interval, measured in bytes per second (B/s).

Response Time#

  • Measures the time from when the input is valid to when the system produces a response, measured in time units such as microseconds (10^-6^s) or nanoseconds (10^-9^s).

Utilization#

  • Represents the ratio of the actual time the system is used within a given time interval, usually expressed as a percentage.

Word Length of the Processor#

  • Refers to the number of bits the processor's arithmetic unit can handle in one binary operation. Current processors have word lengths of 8, 16, 32, or 64 bits. A longer word length gives higher calculation precision.

Bus Width#

  • Generally refers to the width, in binary bits, of the internal bus connecting the CPU's arithmetic unit with memory.

Memory Capacity#

 

  • The total number of storage units in the memory, usually expressed in KB, MB, GB, and TB.
  • Where 1K=2^10^B, 1M=2^20^B, 1G=2^30^B, 1T=2^40^B, and 1B=8 bits (1 byte). The larger the memory capacity, the more binary numbers can be stored.
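These power-of-two units are easy to mis-multiply by hand; a minimal sketch of the conversions (the 16 GB capacity is an arbitrary example):

```python
# Power-of-two units used for memory capacity, in bytes.
KB, MB, GB, TB = 2**10, 2**20, 2**30, 2**40

capacity_bytes = 16 * GB            # e.g. a 16 GB memory (assumed figure)
capacity_bits = capacity_bytes * 8  # 1 byte = 8 bits

print(capacity_bytes)  # 17179869184
```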

Memory Bandwidth #

  • A speed indicator of memory, representing the amount of binary information read from the memory in a unit of time, usually expressed in bytes per second.

Clock Frequency/Clock Cycle#

  • The CPU's working rhythm is controlled by the main clock, which continuously generates a clock signal of fixed frequency; this frequency is called the CPU's clock frequency (f).
  • The unit of measurement is MHz (megahertz) or GHz (gigahertz). For example, the Pentium series ranges from 60MHz to 266MHz, and the Pentium 4 reaches 3.6GHz.
  • The reciprocal of the clock frequency is called the CPU clock cycle (T), that is, T=1/f, measured in microseconds or nanoseconds.
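The reciprocal relation T = 1/f is easy to check numerically; a minimal sketch using the 3.6 GHz Pentium 4 figure mentioned above:

```python
f = 3.6e9                 # clock frequency in Hz (3.6 GHz)
T = 1 / f                 # clock cycle T = 1/f, in seconds
print(round(T * 1e9, 3))  # 0.278 -> about 0.278 ns per cycle
```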

CPU Execution Time #

  • Represents the CPU time occupied by executing a program, which can be calculated as follows:
    $$CPU\ Execution\ Time = CPU\ Clock\ Cycle\ Count \times CPU\ Clock\ Cycle\ Length$$
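Plugging illustrative numbers into this formula (both values are assumed for the example):

```python
clock_cycles = 2_000_000     # total clock cycles the program consumes (assumed)
f = 1e9                      # 1 GHz clock, so the cycle length is 1/f = 1 ns
cpu_time = clock_cycles / f  # cycle count x cycle length = count / frequency
print(cpu_time)  # 0.002 -> the program takes 2 ms of CPU time
```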

CPI (Cycles Per Instruction)#

  • Represents the average number of clock cycles required to execute one instruction. It can be calculated as follows:
  • $$CPI = \frac{Number\ of\ CPU\ Clock\ Cycles\ for\ Executing\ a\ Program}{Number\ of\ Instructions\ in\ the\ Program}$$
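A worked example with assumed counts (a program of 500,000 instructions that consumes 2,000,000 clock cycles):

```python
total_cycles = 2_000_000  # CPU clock cycles for executing the program (assumed)
instructions = 500_000    # number of instructions in the program (assumed)
cpi = total_cycles / instructions
print(cpi)  # 4.0 -> on average each instruction takes 4 clock cycles
```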

MIPS (Million Instructions Per Second)#

  • Represents the number of instructions executed per second, calculated as follows:
    $$MIPS = \frac{Number\ of\ Instructions}{Program\ Execution\ Time \times 10^6} = \frac{Clock\ Frequency}{CPI \times 10^6}$$
  • MIPS is the number of instructions executed in a unit of time, so the higher the MIPS value, the faster the machine.
    The program execution time (Te) is calculated as follows:
    $$Te = \frac{Number\ of\ Instructions}{MIPS \times 10^6}$$
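Both formulas can be checked with assumed figures (a 500 MHz clock, a CPI of 4, and a 500,000-instruction program):

```python
f = 500e6              # clock frequency: 500 MHz (assumed)
cpi = 4.0              # average clock cycles per instruction (assumed)
mips = f / (cpi * 1e6)  # MIPS = f / (CPI x 10^6)
print(mips)  # 125.0

instructions = 500_000            # instructions in the program (assumed)
te = instructions / (mips * 1e6)  # Te = instructions / (MIPS x 10^6)
print(te)    # 0.004 -> the program runs for 4 ms
```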

MFLOPS / TFLOPS#

  • MFLOPS represents the number of millions of floating-point operations per second, calculated as follows:
    $$MFLOPS = \frac{Number\ of\ Floating\text{-}Point\ Operations\ in\ the\ Program}{Program\ Execution\ Time \times 10^6}$$
  • MFLOPS is based on operations rather than instructions and can only be used to measure the performance of machine floating-point operations, not the overall performance of the machine. TFLOPS represents the number of trillions of floating-point operations per second and is generally used in supercomputers.
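A worked MFLOPS example with assumed figures (4 million floating-point operations finishing in half a second):

```python
fp_operations = 4_000_000  # floating-point operations in the program (assumed)
exec_time = 0.5            # program execution time in seconds (assumed)
mflops = fp_operations / (exec_time * 1e6)
print(mflops)  # 8.0 -> the machine sustains 8 MFLOPS on this program
```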

1.4 Von Neumann Architecture (Key Points)#

The basic design concept of the Von Neumann computer: Stored Program and Program Control!
It has the following characteristics:

  1. The computer system consists of five major components: arithmetic unit, memory unit, control unit, input devices, and output devices, and defines the basic functions of these five parts.
  2. It adopts the stored-program approach: programs and data are stored in the same memory (the Harvard architecture stores them in separate memories), and both instructions and data can be sent to the arithmetic unit for processing. In other words, a program composed of instructions can itself be modified.
  3. Data is represented in binary code.
  4. Instructions consist of an operation code and an address code.
  5. Instructions are stored in memory in the order of execution, and the program counter (PC) indicates the address of the instruction to be executed. The PC generally increments sequentially, but its value can be changed based on computation results or external conditions.
  6. The machine revolves around the arithmetic unit, and data transfer between I/O devices and memory is done through the arithmetic unit.

1.5 Hierarchical Structure of Computer Systems#

A computer should not be regarded simply as an electronic device, but as a complex combination of hardware and software. It usually consists of five or more distinct levels, each of which can be used for program design. (Figure: Hierarchical Structure of Computer Systems)

Characteristics of Hierarchical Structure#

  • Each level can be used for program design and is supported by the lower levels.
  • Levels 1 to 3 mainly use binary language, which is easy for machines to execute and interpret.
  • Levels 4 and 5 use symbolic languages, which are convenient for people who are not familiar with hardware to use computers.
  • The lower the level, the closer it is to the hardware, and the higher the level, the more convenient it is to use the computer.

Level 1: Microprogram Design Level#

Level 1 is the microprogram design level. This is a real hardware level where microinstructions are executed directly by the machine hardware. If an application program is written directly in microinstructions, it can run at this level.

Level 2: General Machine Level#

Level 2 is the general machine level, also known as the machine language level. Machine instructions at this level are interpreted and executed by the microprograms of Level 1. This level is also a hardware level.

Level 3: Operating System Level#

Level 3 is the operating system level, implemented by operating system programs, which consist of machine instructions and macroinstructions. Macroinstructions are software instructions defined and interpreted by the operating system, so this level is also called the mixed level.

Level 4: Assembly Language Level#

Level 4 is the assembly language level, which provides a symbolic language for programmers to reduce the complexity of program writing. This level is supported and executed by the assembler program. If an application program is written in assembly language, the machine must have the functionality of this level; if the application program is not written in assembly language, this level can be omitted.

Level 5: High-Level Language Level#

Level 5 is the high-level language level, which is user-oriented and designed to facilitate users in writing application programs. This level is supported and executed by various high-level language compilers.
