Computer


A computer is a device or machine for making calculations or controlling operations that are expressible in numerical or logical terms. Computers are constructed from components that perform simple well-defined functions. The complex interactions of these components endow computers with the ability to process information. If correctly configured (usually by programming) a computer can be made to represent some aspect of a problem or part of a system. If a computer configured in this way is given appropriate input data, then it can automatically solve the problem or predict the behavior of the system.

The discipline which studies the theory, design, and application of computers is called computer science.


General principles

Computers can work through the movement of mechanical parts, electrons, photons, quantum particles, or any other well-understood physical phenomenon. Although computers have been built out of many different technologies, nearly all popular types of computers have electronic components.

Computers may directly model the problem being solved, in the sense that the problem being solved is mapped as closely as possible onto the physical phenomena being exploited. For example, electron flows might be used to model the flow of water in a dam. Such analog computers were once common in the 1960s but are now rare.

In most computers today, the problem is first translated into mathematical terms by rendering all relevant information into the binary base-two numeral system (ones and zeros). Next, all operations on that information are reduced to simple Boolean algebra.

Electronic circuits are then used to represent Boolean operations. Since almost all of mathematics can be reduced to Boolean operations, a sufficiently fast electronic computer is capable of attacking the majority of mathematical problems (and the majority of information processing problems that can be translated into mathematical ones). This basic idea, which made modern digital computers possible, was formally identified and explored by Claude E. Shannon.
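As a rough illustration of this idea (not part of the original article), the short Python sketch below expresses a simple information-processing task - testing two 4-bit values for equality - using only Boolean operations. The function names are invented for the example.

    # Minimal sketch: an equality test on two 4-bit numbers expressed purely
    # as Boolean operations (XOR, NOT, AND), illustrating how information
    # processing can be reduced to Boolean algebra.

    def bits(n, width=4):
        """Return the bits of n as a list of booleans, least significant first."""
        return [bool((n >> i) & 1) for i in range(width)]

    def equal_4bit(a, b):
        """True when a == b, computed bit by bit with Boolean operations only."""
        result = True
        for x, y in zip(bits(a), bits(b)):
            same_bit = not (x ^ y)          # XNOR: the two bits agree
            result = result and same_bit    # AND the per-bit results together
        return result

    print(equal_4bit(9, 9))   # True
    print(equal_4bit(9, 5))   # False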

Computers cannot solve all mathematical problems. Alan Turing identified which problems could and could not be solved by computers, and in doing so founded theoretical computer science.

When the computer is finished calculating the problem, the result must be displayed to the user as output through output devices like light bulbs, LEDs, monitors, and printers.

Novice users, especially children, often have difficulty understanding the important idea that the computer is only a machine, and cannot "think" or "understand" the words it displays. The computer is simply performing a mechanical lookup on preprogrammed tables of lines and colors, which are then translated into arbitrary patterns of light by the output device. It is the human brain which recognizes that those patterns form letters and numbers, and attaches meaning to them. All that existing computers do is manipulate electrons that are logically equivalent to ones and zeroes; there are no known ways to successfully emulate human comprehension or self-awareness. See artificial intelligence.

Etymology

The word was originally used to describe a person who performed arithmetic calculations, and this usage is still valid (although it is becoming quite rare in the United States). The OED2 lists 1897 as the first year the word was used to refer to a mechanical calculating device. By 1946 the OED2 had introduced several qualifiers to differentiate between the different types of machine, including analogue, digital and electronic. However, from the context of the citation, it is clear that these terms were in use before 1946.

See the Wiktionary entry for the word "computer" for definitions, translations and a detailed etymology.

The exponential progress of computer development

Computing devices have doubled in capacity (instructions processed per second per $1000) every 18 to 24 months since 1900. Gordon E. Moore, co-founder of Intel, first described this property of computer development in 1965. His observation has become known as Moore's Law, although it is of course not actually a law but rather a remarkably persistent trend. Hand in hand with this increase in capacity per unit cost has been an equally dramatic process of miniaturization. The first electronic computers, such as the ENIAC (announced in 1946), were huge devices that weighed tons, occupied entire rooms, and required many operators to function. They were so expensive that only governments and large research organizations could afford them, and they were considered so exotic that it was thought only a handful would ever be needed to satisfy global demand. By contrast, modern computers are orders of magnitude more powerful, less expensive and smaller, and they have become ubiquitous in some areas.
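As a back-of-the-envelope illustration of what such doubling implies (the figures below are illustrative, not measurements), this short Python sketch compounds a doubling every 24 months, the slower end of the quoted range.

    # Sketch of the doubling trend described above, assuming capacity doubles
    # every 24 months. The helper name is invented for the example.

    def growth_factor(years, doubling_months=24):
        """How many times capacity per $1000 multiplies over a given span."""
        return 2 ** (years * 12 / doubling_months)

    print(f"Over 10 years: x{growth_factor(10):,.0f}")   # x32
    print(f"Over 60 years: x{growth_factor(60):,.0f}")   # roughly a billion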

Classification of computers

The following sections describe different approaches to classifying computers.

Classification by intended use

The colloquial nature of this classification approach means it is ambiguous. It is usual for only current, commonly available devices to be included. The rapid nature of computer development means new uses for computers are frequently found and current definitions quickly become outdated. Many classes of computer that are no longer used, such as differential analyzers, are not commonly included in such lists. Other classification schemes are required to unambiguously define the word "computer".

Classification by implementation technology

A less ambiguous approach for classifying computing machines is by their implementation technology. The earliest computers were purely mechanical. In the 1930s electro-mechanical components (relays) were introduced from the telecommunications industry, and in the 1940s the first purely electronic computers were constructed from thermionic valves (tubes). In the 1950s and 1960s valves were gradually replaced with transistors and in the late 1960s and early 1970s semiconductor integrated circuits (silicon chips) were adopted and have been the mainstay of computing technology ever since.

This description of implementation technologies is not exhaustive; it only covers the mainstream of development. Historically, many exotic technologies have been explored and abandoned. For example, economic models have been constructed using water flowing through multiple-constricted channels, and between 1903 and 1909 Percy E. Ludgate developed a design for a programmable analytical machine based on weaving technologies, in which variables were carried in shuttles.

Efforts are currently underway to develop optical computers that use light rather than electricity. The possibility that DNA can be used for computing is also being explored. One radical new area of research that could lead to computers with dramatic new capabilities is the field of quantum computing, but this is presently in its early stages. With the exception of quantum computers, the implementation technology of a computer is not as important for classification purposes as the features that the machine implements.

Classification by design features

Modern computers combine fundamental design features that have been developed by various contributors over many years. These features are often independent of implementation technology. Modern computers derive their overall capabilities from the way these features interact. Some of the most important design features are listed below.

Digital versus analog

A fundamental decision in designing a computer is whether it should be digital or analog. Digital computers process discrete numeric or symbolic values, while analog computers process continuous data signals. Since the 1940s digital computers have become by far the most common, although analog computers are still used for some specialized purposes such as robotics and cyclotron control. Other approaches, such as pulse computing and quantum computing, are possible but are either used for special purposes or are still experimental.
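One way to picture the distinction (a sketch added here, not from the original article) is that an analog quantity varies continuously, while a digital representation keeps only discrete steps. In the Python sketch below, the step size and sample values are arbitrary choices for the example.

    # Sketch of the digital/analog distinction: continuous values versus
    # values rounded ("quantized") to discrete levels.

    import math

    def quantize(value, step=0.25):
        """Round a continuous value to the nearest discrete level."""
        return round(value / step) * step

    analog_samples = [math.sin(t / 10) for t in range(5)]      # continuous-valued
    digital_samples = [quantize(v) for v in analog_samples]    # discrete levels

    print(analog_samples)   # e.g. 0.0, 0.0998..., 0.1986..., ...
    print(digital_samples)  # 0.0, 0.0, 0.25, 0.25, 0.5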

Binary versus decimal

A significant design development in digital computing was the introduction of binary as the internal numeral system. This removed the need for the complex carry mechanisms required for computers based on other numeral systems, such as the decimal system. The adoption of binary resulted in simplified designs for implementing arithmetic functions and logic operations.
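The Python sketch below (an illustration added here, with invented function names) suggests why: a single-bit full adder needs only a handful of Boolean operations, and chaining such adders - a ripple-carry adder - adds whole binary numbers without any decimal carry machinery.

    # A one-bit full adder built from Boolean operations, chained into a
    # simple ripple-carry adder for whole numbers.

    def full_adder(a, b, carry_in):
        """Add three bits; return (sum_bit, carry_out) using only AND/OR/XOR."""
        sum_bit = a ^ b ^ carry_in
        carry_out = (a & b) | (carry_in & (a ^ b))
        return sum_bit, carry_out

    def add_binary(x, y, width=8):
        """Add two integers bit by bit, the way simple binary hardware does."""
        result, carry = 0, 0
        for i in range(width):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= bit << i
        return result

    print(add_binary(23, 19))   # 42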

Programmability

The ability to program a computer - that is, to provide it with a set of instructions for execution without physically reconfiguring the machine - is a fundamental design feature of most computers. This feature was significantly extended when machines were developed that could dynamically control the flow of execution of the program, allowing the computer to control the order in which instructions were executed based on data calculated by the program as it ran. This major design advance was dramatically simplified by the introduction of binary arithmetic, which can be used to represent various logic operations.

Storage

During the course of a calculation it is often necessary to store intermediate values for use in later calculations. The performance of many computers is largely dictated by the speed with which they can read and write values to and from this memory, and the overall capacity of the memory. Originally memory was used only for intermediate values but in the 1940s it was suggested that the program itself could be stored in this way. This advance led to the development of the first stored-program computers of the type used today.

Classification by capability

Perhaps the best way to classify the various types of computing device is by their intrinsic capabilities rather than their usage, implementation technology or design features. Computers can be subdivided into three main types based on capability: single-purpose devices that can compute only one function (e.g. the Antikythera mechanism, c. 87 BC, and Lord Kelvin's tide predictor, 1876), special-purpose devices that can compute a limited range of functions (e.g. Charles Babbage's Difference Engine No. 1, 1832, and Vannevar Bush's differential analyser, 1932), and general-purpose devices of the type used today. Historically the word computer has been used to describe all these types of machine, but modern colloquial usage usually restricts the term to general-purpose machines.

General-purpose computers

By definition, a general-purpose computer can solve any problem that can be expressed as a program and executed within the practical limits set by the storage capacity of the computer, the size of the program, the speed of program execution, and the reliability of the machine. In 1936 Alan Turing proved that, given the right program, any general-purpose computer could emulate the behavior of any other computer. This mathematical proof was purely theoretical, as no general-purpose computers existed at the time. The implications of the proof are profound; for example, any existing general-purpose computer is theoretically able to emulate, albeit slowly, any general-purpose computer that may be built in the future.

Computers with general-purpose capabilities are called Turing-complete, and this status is often used as the threshold capability that defines modern computers. However, this definition is problematic, because several computing devices with very simple designs have been shown to be Turing-complete. The Z3, developed by Konrad Zuse in 1941, is so far the earliest working computer that has been shown to be Turing-complete (the proof was developed in 1998). While the Z3 and possibly other early devices may be theoretically Turing-complete, they are impractical as general-purpose computers. They lie in what is humorously known as the Turing tar-pit - "a place where anything is possible but nothing of interest is practical" (see The Jargon File). Modern computers are more than theoretically general-purpose; they are also practical general-purpose tools. The modern digital, electronic, general-purpose computer was developed by many contributors over an extended period from the mid-1930s to the late 1940s. During this period many experimental machines were built that were possibly Turing-complete (the ABC, ENIAC, Harvard Mark I, Colossus and others; see the History of computing hardware). All these machines have been claimed, at one time or another, as the first computer, but they all had limited utility as general-purpose problem-solving devices and their designs have since been discarded.

Stored-program computers

During the late 1940s the first design for a stored-program computer was developed and documented (see the First Draft) at the Moore School of Electrical Engineering at the University of Pennsylvania. The approach described by this document has become known as the von Neumann architecture, after its only named author, John von Neumann, although others at the Moore School essentially invented the design. The von Neumann architecture solved problems inherent in the design of the ENIAC, which was then under construction, by storing the machine's program in its own memory. Von Neumann made the design available to other researchers shortly after the ENIAC was announced in 1946. Plans were developed to implement the design at the Moore School in a machine called the EDVAC, but the EDVAC was not operational until 1953 due to technical difficulties in implementing a reliable memory. Other research institutes, which had obtained copies of the design, solved the considerable technical problems of implementing a working memory before the Moore School team and implemented their own stored-program computers. In order of first successful operation, the first five stored-program computers that implemented the main features of the von Neumann architecture were:

The stored-program design defined by the von Neumann architecture finally allowed computers to readily exploit their general-purpose potential. By storing the computer's program in its own memory it became possible to rapidly "jump" from one instruction to another based on the result of evaluating a condition defined within the program. This condition usually evaluated data values calculated by the program and allowed programs to become highly dynamic. The design also supported the ability to automatically rewrite the program as it executed - a powerful feature that must be used carefully. These features are fundamental to the way modern computers work.

To be precise, most modern computers are binary, electronic, stored-program, general-purpose computing devices.

Special-purpose computers

The special-purpose computers that were popular in the 1930s and early 1940s have not been completely replaced by general-purpose computers. As the cost and size of computers has fallen and their capabilities have increased, it has become cost-effective to use them for special-purpose applications. Many domestic and industrial devices, including mobile telephones, video recorders and automotive ignition systems, now contain special-purpose computers. In some cases these computers are Turing-complete (video game consoles, PDAs), but many are programmed once in the factory and only seldom, if ever, reprogrammed. The program that these devices execute is often contained in a read-only memory (ROM) chip, which would need to be replaced to change the operation of the machine. Computers embedded inside other devices are commonly referred to as microcontrollers or embedded computers.

Single-purpose computers

Single-purpose computers were the earliest form of computing device. Given some inputs, they could calculate the result of the single function that was implemented by their mechanism. General-purpose computers have almost completely replaced single-purpose computers and in doing so have created a completely new field of human endeavor - software development. General-purpose computers must be programmed with a set of instructions specific to the task they are required to perform, and these instructions are collectively known as computer software. The design of single-purpose computing devices and many special-purpose computing devices is now a conceptual exercise that consists solely of designing software.

Classification by type of operation

Computers may be classified according to the way they are operated by the users. Two main types exist: batch processing and interactive processing.

Computer applications

The first electronic digital computers, with their large size and cost, mainly performed scientific calculations, often to support military objectives. The ENIAC was originally designed to calculate ballistics firing tables for artillery, but it was also used to calculate neutron cross-sectional densities to help in the design of the hydrogen bomb. This calculation, performed in December 1945 through January 1946 and involving over a million punch cards of data, showed that the design then under consideration would fail. (Many of the most powerful supercomputers available today are also used for nuclear weapons simulations.) The CSIR Mk I, the first Australian stored-program computer, evaluated rainfall patterns for the catchment area of the Snowy Mountains Scheme, a large hydroelectric generation project. Others were used in cryptanalysis, for example the world's first programmable (though not general-purpose) digital electronic computer, Colossus, built during World War II. Despite this early focus on scientific applications, computers were quickly put to use in other areas.

From the beginning, stored-program computers were applied to business problems. The LEO, a stored-program computer built by J. Lyons and Co. in the United Kingdom, was operational and being used for inventory management and other purposes three years before IBM built its first commercial stored-program computer. Continual reductions in the cost and size of computers saw them adopted by ever-smaller organizations, and with the invention of the microprocessor in the 1970s it became possible to produce inexpensive computers. In the 1980s, personal computers became popular for many tasks, including book-keeping, writing and printing documents, and calculating forecasts and other repetitive mathematical tasks involving spreadsheets.

The Internet

In the 1970s, computer engineers at research institutions throughout the US began to link their computers together using telecommunications technology. This effort was funded by ARPA, and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic institutions and became known as the Internet. In the 1990s, the development of World Wide Web technologies enabled non-technical people to use the Internet, and it grew rapidly to become a global communications medium.

How computers work

While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s (see History of computing hardware for more details), most still use the von Neumann architecture.

The von Neumann architecture describes a computer with four main sections: the Arithmetic and Logic Unit (ALU), the control circuitry, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by a bundle of wires (a "bus") and are usually driven by a timer or clock (although other events could drive the control circuitry).

Memory

In this system, memory is a sequence of numbered cells, each containing a small piece of information. The information may be an instruction to tell the computer what to do. The cell may contain data that the computer needs to perform the instruction. Any cell may contain either, and indeed what is at one time data might be instructions later.

In general, the contents of a memory cell can be changed at any time - it is a scratchpad rather than a stone tablet.
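A minimal sketch of this view of memory (written here in Python; the cell numbers and values are arbitrary choices for the example):

    # Memory as numbered cells: cells can be read, copied and overwritten,
    # like a scratchpad rather than a stone tablet.

    memory = [0] * 16          # sixteen numbered cells, all initially zero

    memory[5] = 42             # write a data value into cell 5
    memory[10] = memory[5]     # copy the contents of cell 5 into cell 10
    memory[5] = 7              # overwrite cell 5 with a new value

    print(memory[5], memory[10])   # 7 42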

The size of each cell, and the number of cells, varies greatly from computer to computer, and the technologies used to implement memory have varied greatly - from electromechanical relays, to mercury-filled tubes (and later springs) in which acoustic pulses were formed, to matrices of permanent magnets, to individual transistors, to integrated circuits with millions of capacitors on a single chip.

Processing (Processor)

[Image: A CPU]

The arithmetic and logical unit, or ALU, is the device that performs elementary operations such as arithmetic operations (addition, subtraction, and so on), logical operations (AND, OR, NOT), and comparison operations (for example, comparing the contents of two bytes for equality). This unit is where the "real work" is done.

The control unit keeps track of which bytes in memory contain the current instruction that the computer is performing. It tells the ALU what operation to perform, retrieves the information from memory that the operation needs, and transfers the result back to the appropriate memory location. Once that is done, the control unit moves on to the next instruction, which is typically located at the next memory address, unless the current instruction is a jump instruction informing the computer that the next instruction is located elsewhere. When referring to memory, an instruction may use various addressing modes to determine the relevant address.

Input and output

The I/O allows the computer to obtain information from the outside world and send the results of its work back there. There is a broad range of I/O devices, from the familiar keyboards, monitors, floppy disk drives and CD/DVD drives to the more unusual, such as webcams.

What all input devices have in common is that they encode (convert) information of some type into data which can be further processed by the digital computer system. Output devices, on the other hand, decode that data back into information which can be understood by the computer user. In this sense, a digital computer system is an example of a data processing system.
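A small Python sketch of this encode/decode idea (added here for illustration, using character codes as the example encoding):

    # Input side: characters are encoded as numbers the machine can process.
    # Output side: the numbers are decoded back into readable characters.

    message = "OK"

    encoded = [ord(ch) for ch in message]        # characters -> numbers
    print(encoded)                               # [79, 75]

    decoded = "".join(chr(n) for n in encoded)   # numbers -> characters
    print(decoded)                               # OK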

Instructions

A computer's set of instructions is not as rich as a human language. A computer has only a limited number of well-defined, simple instructions. Typical sorts of instructions supported by most computers are "copy the contents of cell 5 and place the copy in cell 10", "add the contents of cell 7 to the contents of cell 13 and place the result in cell 20", and "if the contents of cell 999 are 0, the next instruction is at cell 30".
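The Python sketch below runs those three example instructions on a toy machine of numbered cells. The instruction names (COPY, ADD, JUMP_IF_ZERO) and their representation as tuples are invented for the illustration; real machines encode such instructions in binary, as described next.

    # A tiny interpreter for a toy machine whose memory is a list of
    # numbered cells and whose program is a list of simple instructions.

    def run(program, memory):
        """Execute the instructions one by one, starting at instruction 0."""
        pc = 0                                    # index of the current instruction
        while pc < len(program):
            instr = program[pc]
            op = instr[0]
            if op == "COPY":                      # COPY src dst
                _, src, dst = instr
                memory[dst] = memory[src]
            elif op == "ADD":                     # ADD src1 src2 dst
                _, s1, s2, dst = instr
                memory[dst] = memory[s1] + memory[s2]
            elif op == "JUMP_IF_ZERO":            # JUMP_IF_ZERO cell target
                _, cell, target = instr
                if memory[cell] == 0:
                    pc = target                   # jump instead of falling through
                    continue
            pc += 1
        return memory

    memory = [0] * 1000
    memory[5], memory[7], memory[13] = 99, 4, 38

    program = [
        ("COPY", 5, 10),              # copy cell 5 into cell 10
        ("ADD", 7, 13, 20),           # cell 20 = cell 7 + cell 13
        ("JUMP_IF_ZERO", 999, 30),    # cell 999 is 0, so jump (here: past the end)
    ]

    run(program, memory)
    print(memory[10], memory[20])     # 99 42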

Instructions are represented within the computer as binary code - a base-two system of counting. For example, the code for one kind of "copy" operation in the Intel line of microprocessors is 10110000. The particular instruction set that a specific computer supports is known as that computer's machine language. In practice, people do not normally write the instructions for computers directly in machine language but rather use a "high-level" programming language which is then translated into the machine language automatically by special computer programs (interpreters and compilers). Some programming languages map very closely to the machine language, such as assembly language (low-level languages); at the other end, languages like Prolog are based on abstract principles far removed from the details of the machine's actual operation (high-level languages).

Architecture

Contemporary computers put the ALU and control unit into a single integrated circuit known as the Central Processing Unit or CPU. Typically, the computer's memory is located on a few small integrated circuits near the CPU. The overwhelming majority of the computer's mass is either ancillary systems (for instance, to supply electrical power) or I/O devices.

Some larger computers differ from the above model in one major respect - they have multiple CPUs and control units working simultaneously. Additionally, a few computers, used mainly for research purposes and scientific computing, have differed significantly from the above model, but they have found little commercial application because their programming model has not yet been standardized.

The functioning of a computer is therefore in principle quite straightforward. Typically, on each clock cycle, the computer fetches instructions and data from its memory. The instructions are executed, the results are stored, and the next instruction is fetched. This procedure repeats until a halt instruction is encountered.

Programs

Computer programs are simply large lists of instructions for the computer to execute, perhaps with tables of data. Many computer programs contain millions of instructions, and many of those instructions are executed repeatedly. A typical modern PC (in the year 2003) can execute around 2-3 billion instructions per second. Computers do not gain their extraordinary capabilities through the ability to execute complex instructions. Rather, they execute millions of simple instructions arranged by people known as programmers. Good programmers develop sets of instructions to do common tasks (for instance, drawing a dot on screen) and then make those sets of instructions available to other programmers.

Nowadays, most computers appear to execute several programs at the same time. This is usually referred to as multitasking. In reality, the CPU executes instructions from one program, then after a short period of time switches to a second program and executes some of its instructions. This small interval of time is often referred to as a time slice. Sharing the CPU's time between the programs in this way creates the illusion that multiple programs are being executed simultaneously, much as a movie is really a rapid succession of still frames. The operating system is the program that usually controls this time sharing.
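A rough Python sketch of this time-slicing idea (added here for illustration, with invented task names and a deliberately tiny "slice" of two steps):

    # Two "programs" take turns on a single processor, each running a short
    # burst of its instructions per turn, giving the appearance of running
    # at the same time.

    def task(name, steps):
        """A program, written as a generator that yields after each instruction."""
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                                 # hand control back to the scheduler

    def round_robin(tasks, slice_size=2):
        """Give each task slice_size steps, then move to the next, until all finish."""
        while tasks:
            current = tasks.pop(0)
            for _ in range(slice_size):
                try:
                    next(current)
                except StopIteration:
                    break                         # this task has finished
            else:
                tasks.append(current)             # not finished: back of the queue

    round_robin([task("A", 3), task("B", 3)])     # output interleaves A and B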

Operating system

A computer needs at least one program running at all times in order to operate. Under normal operation (in the typical general-purpose computer), this program is the operating system (OS). The operating system decides which programs are run, when, and what resources (such as memory or input/output - I/O) the programs will get to use. The operating system also provides a layer of abstraction over the hardware, giving other programs access to it through services such as driver code ("drivers"), which allows programmers to write programs for a machine without needing to know the intimate details of every attached electronic device.

Most operating systems that have hardware abstraction layers also provide a standardized user interface.


The contents of this article are licensed from Wikipedia.org under the GNU Free Documentation License.