Computer

Computer is a kind of machine unlike any other. Most machines are designed for one particular task. But people use computers in countless ways. A person can communicate with distant friends, explore the internet, read books, watch videos, take classes, and play games—all on a single computer. A person might even use the same computer to write a research paper, store personal videos and photographs, and create new games. A computer takes on new functions by running programs, also called applications or apps for short. A program is a set of instructions that tells the computer what to do. The ability to run programs is what makes computers such remarkable devices. A traditional telephone cannot run a program that transforms it into a calculator. But a computerized phone can.

Computer laboratory

Computers are so useful that they are often built into other kinds of machines. For example, modern automobiles include dozens of tiny computers. These embedded computers ensure that the car’s inner workings perform properly. Other machines, such as phones and watches, have become more and more like computers themselves. Many people still own and use personal computers. These devices have large screens and keyboards. Businesses, scientific organizations, and governments run the most powerful computers.

Almost all computers today are digital, meaning they work through the use of numerical code. Digital computers use just two digits: 1 and 0. These two digits can form a code for almost any kind of information. Short strings of 1’s and 0’s can represent letters and words. Longer strings may encode pictures, sounds, and videos. Computer programs, too, are encoded in numbers. Software is a general term for computer programs.

Inside of a desktop computer

Hardware refers to a computer’s physical parts. Computer hardware is designed to perform rapid numerical calculations and to put the results to use. In performing such calculations, a computer processes (changes and records) the information contained in digital code. The fastest computers can make trillions of calculations per second. Almost all computers perform their calculations on chips of silicon, a material with unique electrical properties. On the surface of a computer chip is a complex electric circuit. Switches in this circuit, called transistors, can turn on and off, representing the 1’s and 0’s of computer code.

Silicon film research in Delaware

With advances in technology, transistors have shrunk rapidly in size, enabling more and more transistors to be placed on smaller and smaller chips. As a result, computers have become vastly more powerful. Today, an inexpensive computer chip may contain billions of transistors. A smartphone equipped with such a chip has more computing power than room-sized computers used in landing the first astronauts on the moon in 1969. Virtually no other form of technology has advanced at such a fast rate.

This article discusses what computers are, how their hardware works with software programs, and the history of computer science and the computer industry. For a more detailed discussion of how electric circuits process information, see Electronics. See Internet for more information on the global network of computers.

Kinds of computers

Today’s computers come in many forms. But almost all of them share certain basic characteristics. They are electronic, meaning they make use of electric circuits. They are automatic, so they do not require such human actions as hand-cranking to perform their calculations. They are digital, representing information with just the digits 1 and 0. Finally, most computers are general-purpose devices. A general-purpose computer can run multiple software programs on the same set of hardware.

Not all computers share these characteristics. The earliest computers were built before electronics technology had been invented. They were mechanical devices that worked through the use of gears and levers. Some computers, including some modern computers, are analog rather than digital. An analog computer performs calculations using varying physical quantities—such as voltages, flows of fluids, or distances along a scale—rather than numbers. Since the 1960’s, however, digital, electronic computers have become the most effective and widely used type by far. The following sections describe the chief kinds of digital computers.

Mobile computers

include smartphones, tablets, and similar handheld devices. They are typically controlled using a touch screen. Mobile computers are designed to connect to the internet wirelessly, often through a cellular telephone network. Since the 2000’s, mobile computer use has skyrocketed. They are the most popular kind of general-purpose computer ever made, with over 1 billion sold each year.

Smartphones

Mobile computers are less powerful than larger types of general-purpose computers. Mobile devices are designed to be small and portable. Batteries take up much of their internal space. Computation on mobile devices is typically designed to use as little power as possible, to preserve battery life.

Programs designed for mobile computers are commonly called apps. Because they are generally limited to small screens and a touch-based interface, mobile apps tend to be relatively simple. But there is tremendous variety among mobile apps. Millions of mobile apps can be downloaded over the internet, often from an online store run by the mobile computer’s manufacturer.

Personal computers (PC’s)

are larger and more powerful than mobile computers. They typically have keyboards instead of—or in addition to—touch screens. Laptop computers are PC’s with rechargeable batteries. They are small enough to carry around, and they typically connect to the internet wirelessly. In contrast, desktop computers are PC’s designed to be stationary, plugged into a wall outlet.

Desktop computer

Until the 2010’s, PC’s were the most common type of everyday-use computer that people owned. Mobile computers have since become more popular, but people still use PC’s to work and create. With their larger screens, variety of input and output devices, and powerful processors, PC’s can more easily handle complex tasks than can mobile computers. For example, PC users can more easily view and switch between multiple programs running at the same time. In addition, most people can type much faster on a PC keyboard than on a mobile touch screen.

Some people do not use the term PC to refer to all personal computers. Instead, they apply the term only to machines using technology originally developed by the International Business Machines Corporation (IBM). This usage comes from the name of IBM’s first personal computer, the PC, introduced in 1981. Some people use the term PC for all computers designed to run Microsoft’s Windows software. By contrast, a computer designed by Apple Inc. is often called a Mac, short for the brand name Macintosh.

iMac desktop computer

Mainframes and servers.

Mainframes are powerful computers designed for large-scale tasks. Historically, mainframes performed centralized computing functions for businesses, governments, and scientific organizations. Mainframes stored important data and performed critical computing tasks. Some such organizations continue to use powerful mainframes in the same way. For example, the U.S. Internal Revenue Service (IRS) uses mainframes to process federal tax returns.

Mainframe computer

Mainframes often function as servers. The term server refers to the client-server model of networked computers. In this model, a centralized computer, the server, connects to a number of client computers. People enter and retrieve data using the client computers, but most of the processing and data storage are handled by the server. A server may be specialized to do just one thing, such as print documents, run e-mail programs, or host electronic games.

Much of the internet is organized according to the client-server model. When a person looks at a website on a smartphone, for example, the smartphone acts as a client computer. The website’s data is actually stored on a distant server, which sends information to the smartphone. Server mainframes are also used for cloud computing, in which people store data and use programs over the internet, rather than on personal or mobile computers.

Not all servers are mainframes. A simple PC, for example, can function as a server. More often, specially built computers are used as servers. Many of these computers can be arranged in racks and networked together. On a larger scale, buildings called data centers house huge numbers of servers. Data centers often incorporate such features as backup power supplies and security systems. Powerful air conditioning counteracts the heat given off by servers in data centers. Companies that run data centers may “rent out” their servers’ computing capabilities to other businesses or to individuals.

Supercomputers

are the most powerful kind of computers. They are used for tasks that require calculations to be performed as quickly as current technology allows. Such tasks include accurately modeling real-world systems or events. For example, scientists use supercomputers to forecast the weather and study Earth’s climate. Supercomputers can simulate the behavior of molecules and other microscopic structures. They can predict precisely what will happen to vehicles during a crash or how a nuclear explosion will unfold.

NASA's Discover supercomputer

Supercomputers often make extensive use of parallel processing. In this technique, a task is divided up among multiple computer processors, which all work on the problem at the same time. Relatively simple parallel processing can be performed on a typical PC, which may include four or eight separate processors. Supercomputers, in contrast, may use tens of thousands of powerful processors in parallel.
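
The following short sketch, written in the Python programming language, illustrates the idea of parallel processing on a small scale. A single task (adding up the squares of many numbers) is divided among several worker processes that run at the same time. The number of workers and the data used are arbitrary choices for the example, not details of any actual supercomputer.

```python
# A minimal sketch of parallel processing, using Python's standard
# multiprocessing module. The work is split into 8 pieces that separate
# processes handle at the same time.
from multiprocessing import Pool

def sum_of_squares(chunk):
    # Each worker processes only its own portion of the data.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i::8] for i in range(8)]       # split the work 8 ways
    with Pool(processes=8) as pool:
        partial_sums = pool.map(sum_of_squares, chunks)  # run in parallel
    print(sum(partial_sums))                          # combine the results
```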

Supercomputer performance is often measured in flops, which stands for floating-point operations per second. A floating-point operation is a type of mathematical calculation involving decimal points or fractions. A petaflop is 10¹⁵ flops, or 1,000,000,000,000,000 such calculations per second. The fastest computers can reach hundreds of petaflops.

Embedded computers

are the most widespread kind of computer. Billions are in use every day. They are found in many kinds of machines, from such simple devices as clocks and coffeemakers to complicated industrial robots and airplanes.

Technically, embedded computers are general-purpose computers. Their chips are manufactured much like those of other computers, so they can theoretically run multiple programs. But an embedded computer is usually designed to run just one program. For example, an automobile’s electronic fuel injection system is controlled by a microprocessor. This embedded computer runs a program that determines how much fuel should flow to the engine at any given moment. The average new car contains dozens of embedded computers.

Other types of computers

may resemble PC’s or mobile devices but are used in more limited ways. Some computers are small enough to be worn as jewelry.

E-readers

are electronic devices that can download and display books in electronic format, called e-books. Most can also access and display websites. The simplest e-readers have black-and-white screens and limited computing power. More powerful e-readers resemble tablet computers.

Set-top boxes

are small, cheap internet-connected computers that hook up to a television and work with a remote control. A set-top box can stream (download and play) internet-based videos and other content through the television. Similar devices may come in the form of small “sticks” that plug directly into a television.

Video game consoles,

designed to play electronic games, also hook up to televisions. Early consoles could only run one type of software—games. Since the 2000’s, many consoles have been able to connect to the internet and run other types of applications. They can also stream video content as do set-top boxes. Consoles come with one or more devices called controllers. A person generally interacts with games and other programs using the controller’s buttons and joysticks or gestures. A handheld game system is a portable console that includes its own screen.

PlayStation 3 with Move controller

Wearable computers

first became practical in the 2010’s. Some of the most popular wearable computers developed from fitness-tracking devices. The Fitbit, first introduced in 2009, clipped onto a user’s clothes. By sensing acceleration, the device could calculate roughly how many steps its user took each day. Later versions of the Fitbit resembled watches.

Many digital watches have featured simple programs, such as calculators and trivia games. The term smartwatch, on the other hand, generally refers to a programmable device with a touch screen. These devices are often designed to communicate wirelessly with the user’s smartphone, relying on the smartphone for internet connectivity and other features.

Since the 2010’s, companies have marketed various augmented reality wearables, such as special eyeglasses that display information overlaid on top of the user’s view of the physical environment. However, these devices have so far proved unpopular with consumers.

Smart fabrics, also known as e-textiles, are fabrics embedded with conductive threads and other electronic components. Designers and engineers use smart fabrics to create wearable devices, such as shirts that can track the wearer’s heart rate and jacket sleeves that the wearer can use to control a smartphone.

Representing computer data

A digital computer’s data is encoded in the form of numbers. The computer uses electric charges to represent numbers. Only two levels of charge are used. One level represents the digit (number symbol) 0, and the other level represents the digit 1.

Counting in binary.

Because computers count with just 1’s and 0’s, they are said to use the binary numeration system. The word binary comes from a Latin word meaning two at a time. In the binary system, a 0 or 1 by itself is called a bit, which is short for binary digit.

The binary system contrasts with the familiar decimal system, which uses 10 digits—0 through 9. But the same numbers can be represented in either system. For example, the decimal number 2 is written 10 in binary. The first digit stands for the twos place. The second digit stands for the ones place. In binary, 10 essentially means 1 two and 0 ones. Likewise, the decimal number 5 is written 101—1 four, 0 twos, and 1 one—and so on. Neither system is better or worse at representing numbers. The main reason human beings use the decimal system is that we happen to have 10 fingers with which to count.
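
As a quick illustration of this correspondence, the short Python sketch below converts the decimal numbers used in the examples above into binary and back again.

```python
# Converting between decimal and binary, matching the examples in the text.
print(bin(2))         # '0b10'  -> 1 two and 0 ones
print(bin(5))         # '0b101' -> 1 four, 0 twos, and 1 one
print(int("101", 2))  # 5       -> reading the binary digits back as decimal
```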

Computers count with tiny electronic switches called transistors. Each switch operates much like an ordinary light switch. When a switch is off, it corresponds to the binary digit 0. When a switch is on, it corresponds to the bit 1. Bits, like decimal numbers, can be added, subtracted, multiplied, and divided. Thus, a computer can perform all the basic arithmetic operations.

Representing information.

Bits represent all the data that the computer processes and all the instructions used to process the data. Bits may represent logical ideas. While processing data, for example, a computer often uses 1 to represent true, and 0 to represent false. Combinations of bits can also represent numbers, letters, and portions of pictures and sounds.

A byte is a combination of eight bits. Because each bit may be a 1 or a 0, a byte has 256 possible values. These values can be used to represent ideas. For example, in a common code known as the American Standard Code for Information Interchange (ASCII), the byte 01000001 represents the capital letter A. Other bytes in ASCII represent lower-case a and all the other letters, all the decimal digits, and certain punctuation marks and mathematical symbols.

Bytes—and not the smaller bits they are made of—are commonly thought of as “building blocks” for computer information, since each printed character in a book can be represented by a single byte. A computer’s capacity to hold data is often measured in multiples of bytes. A kilobyte equals 1,024 bytes; a megabyte, 1,048,576 bytes; a gigabyte, 1,073,741,824 bytes; and a terabyte, 1,099,511,627,776 bytes. For simplicity, 1 kilobyte, 1 megabyte, 1 gigabyte, and 1 terabyte are often said to equal 1 thousand, 1 million, 1 billion, and 1 trillion bytes respectively.
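
The Python sketch below illustrates these relationships: the ASCII byte for the capital letter A, the 256 possible values of a byte, and the byte counts behind the kilobyte and megabyte.

```python
# Bits, bytes, and the ASCII code described above.
print(format(ord("A"), "08b"))  # '01000001' -> the byte for capital A
print(2 ** 8)                   # 256 possible values in one byte
print(2 ** 10)                  # 1,024 bytes in a kilobyte
print(2 ** 20)                  # 1,048,576 bytes in a megabyte
```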

The amount of information contained in computer data varies enormously based on its form. The plain text of a typical book contains about a megabyte of data. Songs and high-quality photographs may be several megabytes in size. Videos may include several gigabytes of data. A typical PC can store a terabyte of data.

Still larger data quantities include the petabyte (1,000 terabytes), the exabyte (1,000 petabytes), and the zettabyte (1,000 exabytes). All of the data on the internet probably totals in the zettabytes.

Computer hardware

Computers vary greatly in size, shape, and function. And even computers that appear similar can vary greatly in their capacity to store and process data. Despite the differences, all computers include similar types of hardware components. These components serve four general types of functions: (1) processing, (2) memory and storage, (3) input, and (4) output.

Computer hardware

Processing—calculating with data—is handled by a powerful computer chip called the central processing unit (CPU). Memory chips hold data and processing instructions for use by the processors. Storage devices hold data for longer periods of time. The computer receives data through one or more input devices, such as a keyboard, a mouse, or a camera. Output devices, including screens, speakers, and printers, produce the processed data in a form that can be viewed, heard, or otherwise received.

Microprocessors

control computer systems and process information encoded as binary sequences of electric charge. The CPU is the main microprocessor, but it may work with other microprocessors designed for more specialized tasks. For example, a graphics processing unit (GPU) is a powerful microprocessor that handles graphics (visuals). Other microprocessors specialize in processing sound or certain mathematical calculations.

Microprocessors on a silicon wafer

After completing an operation, a microprocessor may send the result to the computer’s memory until it is needed for another operation. Or the result may be directed to an output device or a storage device.

A microprocessor consists of millions or billions of transistors along with other electronic devices and wires. These parts are arranged in circuits on a single chip, almost always made of silicon, that is no larger than a fingernail. Circuits and their electronic components are etched into a chip’s surface in a process called lithography, which somewhat resembles printing on a microscopic scale. Multicore microprocessors have two or more processing units on the same chip. Modern PC’s often have multiple cores, enabling them to run several programs at the same time, each on its own core.

A microprocessor has two groups of circuits: (1) the control unit and (2) the arithmetic logic unit, also called the digital logic unit. Almost all microprocessor chips also include a small amount of high-speed memory called cache.

The control unit

directs and coordinates computer operations according to instructions stored in the computer’s memory. Each set of instructions is expressed as a binary operation code. This code also dictates where data for each processing operation are stored. The control unit interprets the instructions and relays commands to the arithmetic logic unit. The control unit also regulates the flow of data between the memory and the arithmetic logic unit and routes processed information to output or storage devices.

The arithmetic logic unit

carries out the computer’s mathematical and logical processes. In this unit, circuits called registers temporarily store data from the memory. To carry out a calculation, charges travel from registers through wires to the appropriate circuit. The result comes out on wires at the other end of this circuit and goes back to a specified register. Combinations of these circuits can perform different mathematical and logical operations.

Memory and storage.

Inside a computer, data is stored in several different ways. Memory chips hold the data being used by the processor. Storage devices, in contrast, hold data that is not currently being used, in much the same way as does a person’s long-term memory. Storage devices, often called drives, generally work more slowly than memory chips.

Memory chips

work closely with the computer’s processing units. Like microprocessors, memory chips consist of transistors, other electronic components, and wires arranged as circuits built into chips no larger than a fingernail. There are two basic kinds of memory chips: (1) read-only memory (ROM) and (2) random-access memory (RAM).

A ROM chip holds its memory even when the computer is turned off. However, the computer user cannot change the memory. ROM chips hold instructions that a computer must follow when it is first turned on, or they may hold frequently used system services.

A RAM chip holds its memory as long as the computer is turned on, somewhat resembling a person’s short-term memory. The user can change the memory, but the data is volatile—that is, as soon as the computer is turned off, the data held in the RAM chip vanishes. Random-access memory is sometimes called internal memory or main memory. RAM chips receive information and instructions from a microprocessor, an input device, or a storage device. Computers with more RAM can perform more complicated tasks and run more programs simultaneously.

Flash drives

consist of special memory chips that—unlike ROM and RAM chips—are used for long-term data storage. They are also called flash disks or solid-state drives. A flash drive does not lose data when it is disconnected from a power source, and its data can be changed. Flash drives are the fastest type of storage device. Unlike other types of storage devices, flash drives are completely electronic and have no moving parts.

Flash storage devices come in other formats besides a built-in flash drive. A common type of flash device is the small removable card that stores the pictures taken by a digital camera. Mobile computers typically rely exclusively on flash storage because flash drives are lightweight and durable. Flash storage is expensive, on the other hand, and so it is less common in larger computers. However, some laptops and desktops have flash storage alongside, or instead of, slower and cheaper hard drives.

Hard drives

typically store data on one or more rigid magnetic disks, also called platters. Data is encoded in tracks of microscopic patterns of magnetic fields on the disk. In many personal computers, an internal hard drive serves as the main storage device. Many people use the term hard disk to refer to the entire hard drive, but the term can also apply to an individual platter.

Usually, platters are fixed inside the drive and cannot be removed. The platters are stacked on top of one another. Data are stored on circular tracks on each side of a platter’s surface. Each side of a platter has a device called a read/write head. As the platter spins, the head moves in and out to specific tracks to store or to call up data for the computer to use.

External hard drives are separate units that connect to a computer via a cord or a wireless connection. These extra drives are typically used to back up (make copies of) a person’s computer files for safekeeping.

Optical drives

read data from, or record data on, plastic discs. Such discs include compact discs (CD’s), DVD’s, and Blu-ray discs. Data is encoded in a pattern of microscopic pits inscribed into the disc. The term optical refers to the fact that light—in the form of a laser beam—is used to “read” the pattern of pits, and thus the disc’s data.

Computer programs, such as games, are often sold in the form of optical discs. Such discs are almost always read-only, meaning they cannot be rewritten with new data. But some optical drives can record new data on blank or rewritable optical discs. A Blu-ray disc can hold about 50 gigabytes of data, more than any other type of commercially available optical disc. But storing data on optical discs is typically slower and less practical than using a hard drive or a flash drive.

Tape drives

store sequences of data on magnetic tape in much the same way that audiotape recorders store sound information. Tape drives copy data much more slowly than do other types of storage devices. The main use of tape drives is to back up information stored on hard disks for extremely long-term storage.

Input devices

send information and instructions to the computer. Some input devices enable the user to input information manually, for example by typing on a keyboard. Other input devices include cameras, microphones, and other types of sensors. These devices measure light, sound, motion, or some other phenomenon, translating it into electronic data for processing.

Manual input.

Many familiar input devices are operated by the user directly. Such devices include keyboards, which were the main input devices on early electronic computers. Desktop computers typically make use of a mouse alongside a keyboard. Most modern mice are optical, meaning they track their motion across a flat surface using light from a small LED (light-emitting diode) or laser. Software translates the motion of the mouse into the motion of a pointer on the computer screen.

Artists often use digitizing tablets along with an electronic pen called a stylus. The tablet can sense the pressure of the stylus to mimic brush and pencil strokes, which are recorded by graphics software. Game controllers are the main input devices for video game consoles, and some controllers can be used with other types of computers as well.

Touch screen on an iPad

Touchpads and touch screens are especially common input devices on portable computers. Both typically work by sensing faint changes in electric charge caused by the touch of a fingertip. Laptops typically include touchpads, blank rectangular surfaces set near the keyboard. Moving a finger across a touchpad moves a pointer across the computer screen, similar to the use of a mouse. Mobile devices, in contrast, use touch screens, in which the touch-based sensor overlaps the screen itself. On such devices, a user may experience the feeling of interacting directly with objects on the screen, making touch screens among the most intuitive input devices.

Sensors

come in many forms. Most personal and mobile computers include one or more digital cameras and microphones. Many mobile devices can make use of both their cameras and microphones to record videos with sound.

Computers often contain sensors that measure ambient (surrounding) lighting. This information enables computers to automatically adjust their display brightness. Many kinds of computers can also sense radio waves. Most computers can receive Wi-Fi internet signals, short-range radio waves typically sent out by a nearby device called a router. Smartphones and many other kinds of mobile computers can also receive radio signals from cellular networks.

Mobile computers may include extra sensors not found on PC’s. They can often track their own position by measuring the distance to nearby cellular network towers or using signals from the Global Positioning System (GPS) satellites. They can also typically sense the angle at which the user holds the device.

Output devices

display the work done by the computer. In early computers, the output of calculations was printed on paper. Today, printers are still common output devices.

The computer’s display screen is perhaps the most important output device. It displays information that has been processed. Many computers have built-in screens. Many computers can also output data to a separate monitor or television. Speakers are also output devices—the sound they produce is processed data.

Output devices may send processed data to other electronic devices, rather than to human users. For example, when a person sends an internet message on a mobile computer, the computer outputs the data in the form of radio waves. This output is invisible to humans. Instead, it is received by a Wi-Fi router or a cell phone tower and sent to an internet server for further processing and delivery.

Arranging hardware.

A single computer may contain many input, output, and storage devices. Computers can also connect using wires or wirelessly to peripheral devices—that is, hardware separate from the computer’s main casing. Common peripheral devices include printers, monitors, and external hard drives. Some hardware components may be used for multiple tasks. For example, a mobile computer’s touch screen serves as both an input device and an output device.

Chips on a motherboard

Many computers are designed so that people can change their capabilities by adding or removing components. In a typical PC, for example, many components are mounted on thin, rigid boards called circuit boards. A circuit board called the motherboard holds the CPU, other microprocessors, and a collection of memory chips. Other components, such as sound and graphics co-processors, come installed on circuit boards called cards. Cards can be plugged into sockets called expansion slots inside the computer. Often, common components are integrated directly into the motherboard. Peripherals connect by wire or cable to sockets called ports. Peripherals may also wirelessly connect to computers via a short-range radio technology called Bluetooth. Wi-Fi, which has a longer range than Bluetooth, can also wirelessly link computer components.

Children studying on laptop computers

Laptop and mobile computers are often designed with fewer options for customization. In some laptops, the components are soldered together and so cannot be removed or switched. Some laptops, and almost all mobile devices, combine several major components on a single chip. This arrangement is called a system-on-a-chip (SoC). An SoC typically requires less electric power than do multiple chips connected together.

User interface

User interface, or UI, is a term that describes the experience of a user in interacting with a computer. A friendly UI reacts to the user’s inputs to create a pleasing, understandable experience. The user interface for the earliest electronic computers was cumbersome and slow. People had to physically rearrange and rewire hardware components to run different programs. On early personal computers, users switched programs by inserting the program’s disk into the computer and typing a series of coded commands to execute (run) the program.

Today’s computers feature much more naturalistic user interfaces, made possible by advances in hardware. The most common is the graphical user interface (GUI). A GUI enables the user to interact with a computer using pictures and other visual elements displayed on a screen. On a mobile computer, for example, a user can switch programs simply by tapping a program’s icon (small picture) on the screen. To provide such a user interface, multiple hardware components must function together at high speed. The touch screen must accurately sense electric signals produced by the user’s fingertips. The CPU must receive and process these signals quickly. It must locate and run the program represented by the icon. Then it must output the program’s visuals to the touch screen.

Researchers are working to develop a new kind of user interface called the brain-computer interface (BCI). BCI technology creates a pathway from the user’s brain to a computer or other device, allowing direct thought communication. Brainwaves are recorded through electrodes (strips of metal that conduct electricity) attached to a person’s scalp or implanted in the brain. This enables the user to give commands to the computer without using their hands or voice.

Computer software

Computer software is made up of programs. Each program consists of instructions for the computer to execute. Together, these programs control the operation of the computer. There are two main kinds of software: (1) operating system software and (2) applications software.

Operating system software

makes up the computer’s master control program. It reads and responds to user commands and coordinates the flow of information among the different input and output devices. It also manages other programs the user runs. Much of a computer’s user interface is determined by its operating system. Thus, a great deal of work has gone into operating system design.

Control and security.

The operating system (OS) runs programs, including both programs controlled directly by the user and programs controlled by other programs. The OS puts other programs and the user’s data into memory and makes sure that the processor executes the correct commands. It regulates access to such resources as memory, storage, sensors, and output devices.

Modern operating systems enable more than one program to run at the same time. The operating system isolates each program so it executes as if it were the only program running. The OS also protects data on the computer. It only allows programs with the right permissions to access certain data.

The UI software

is an especially important part of an operating system. Combined with the hardware components of the UI, it enables the user to control the computer.

Older computer interfaces were text-based—that is, their users controlled the computers primarily by typing lines of commands on a keyboard. Such an approach is called a command line interface (CLI). Modern computers, on the other hand, use a system called a graphical user interface (GUI). The GUI represents data and programs using onscreen windows, icons, and other visual elements. Users interact with such elements by moving an onscreen cursor or pointer through the use of an input device such as a mouse or a touchpad. On mobile computers, touch screens enable users to interact with the icons and other elements through direct touch.

Such operating systems as MS-DOS (Microsoft Disk Operating System) and Unix use command line interfaces as their basic mode of interaction. These CLI’s rely on typed commands that users must memorize or look up. Popular modern operating systems, in contrast, use a graphical user interface with icons and visual menus. Such an interface makes it easy for a user to find and access programs, even on an unfamiliar computer. Operating systems that make extensive use of GUI’s include Windows, Mac OS X, and Linux. Apple’s iOS and Google’s Android are popular GUI-based operating systems for mobile devices.

Applications software

consists of programs for specific uses. Such uses include writing and editing text, storing and managing data, processing pictures and sounds, playing electronic games, and browsing websites.

Many applications are designed to be installed on a user’s computer. In cloud computing, on the other hand, people can access applications stored and run on distant server computers over the internet. Some programs, called plug-ins or extensions, are designed to work with an existing program rather than on their own.

Productivity software

is typically used to do work. Sometimes called office software, productivity software includes a variety of programs. Word processors are used to write and edit text. Spreadsheets are tables that people use to keep track of finances or other data. Database management systems are similar to spreadsheets, but they can store more complicated sets of data, such as library catalogues or medical records, and process data in more complex ways. Presentation software enables users to create text, diagrams, charts, and other visual aids to display on a screen during speeches, meetings, and other presentations.

Spreadsheet program

Web browsers

are computer programs that access and display websites. For many computer users, web browsers serve as the main “gateway” to the internet’s information and communication capabilities. Web browsers enable users to see images and videos and to listen to sounds on websites.

Before web browsers, most computers could only display simple text from the internet. In fact, widespread use of the internet for functions other than e-mail began after web browsers were introduced in the early 1990’s.

Games software

combines graphics, animation, sound, music, and play to produce exciting adventures and puzzles. Many games allow multiple players to play together over the internet. For more information, see Electronic game.

Graphics and video software

are commonly used by artists, filmmakers, and game designers. With a graphics editor, a user can change or combine photographs or create new images using virtual paintbrushes and other “tools.” People use video editing programs to work with video clips and add special effects. Artists also use more powerful graphics programs to create three-dimensional models of characters and settings. In many movies and games, such animated computer models present fantastical creatures and places in lifelike detail.

Computer-aided design (CAD) programs

are used to make three-dimensional models of real-world objects. Architects use CAD programs to design new buildings. Engineers use CAD to draft new automobiles and airplanes, as well as computer components and other electronic devices. Advanced programs can simulate how objects react to certain conditions, such as wind, heat, or pressure.

Scientific and medical software

is used to help visualize (show) measurements and structures that are otherwise hard to see or understand. For example, medical visualization software can generate detailed images of a patient’s internal organs. The software uses inputs from such medical imaging techniques as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound imaging.

Other visualization software is used to model the shape of molecules, weather systems, galaxies, and other structures large and small. Powerful computers can then simulate the behavior of such structures based on the predictions of scientific theories. The model’s behavior can then be compared to real-world observations to test and improve theoretical understanding.

Computer graphic of MRI

Artificial intelligence (AI) software

enables a computer to imitate the way a person thinks. One type of AI software, called an expert system, makes use of a detailed database of information loaded onto the computer. For example, a physician might use an expert system to evaluate a patient’s symptoms. The computer would compare the combination of symptoms described by the patient with all the descriptions in its database, then suggest diagnoses and treatments. The computer does so by drawing upon rules and data contained in its software. The computer can narrow the field of inquiry until a potential solution is reached. However, the quality of the solution may be limited by the data the computer contains and the sophistication of the rules it uses.
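
As a rough sketch of how an expert system matches facts against rules, the toy Python example below pairs sets of symptoms with suggested conditions. The rules, symptom names, and suggestions are invented for illustration only and do not come from any real medical system.

```python
# A toy expert-system sketch: report every rule whose required symptoms
# are all present in the patient's reported symptoms.
RULES = [
    ({"fever", "cough", "fatigue"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible common cold"),
    ({"headache", "sensitivity to light"}, "possible migraine"),
]

def suggest(symptoms):
    matches = [condition for required, condition in RULES
               if required <= symptoms]       # are all required symptoms present?
    return matches or ["no rule matched"]

print(suggest({"fever", "cough", "fatigue", "headache"}))  # ['possible flu']
```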

Other types of AI software employ machine learning. Using statistics and probability theory, they refine and improve their outputs based on new data and experiences. This approach seeks to mimic the way humans learn. With machine learning, AI can even learn how to “understand” human speech in a natural, conversational way. In 2011, an IBM computer named Watson demonstrated the capabilities of machine learning. The computer was able to process and respond to questions posed in everyday language on the television game show Jeopardy!, even winning against human opponents.

Other types of programs also employ machine learning. They include internet search engines and the so-called “virtual assistants” found on smartphones. These programs collect data from users and send it to a centralized computer for processing. By collecting large amounts of data and refining their responses over time, such AI-based programs can become “smarter.”

Computer use in weather forecasting

Programming a computer

Computer programming (sometimes called coding) involves the creation of detailed sets of instructions for a computer. Programmers are computer specialists who write and edit these sets of instructions. Programs are written in programming languages.

A computer processor executes programs in the form of machine language, also called low-level language. Machine language is composed of numbers. These numbers represent memory addresses and operation codes. However, programs are almost never written in machine language. Instead, most programs are written with high-level languages. These languages use symbols, everyday expressions, or mathematical formulas, as well as rules for combining those elements. Such languages must be translated into machine code to work on a given computer.

Preparing a program

begins with a complete description of the job that the computer is to perform. This description explains what data must be input, what computing must be done, and what form the output should take. Computer programmers use the description to prepare diagrams and other visual aids that represent the steps needed to complete the task. The programmers may produce a diagram called a systems flow chart that shows how all the major parts of the job fit together.

Programs can be written with nothing more than a text editor, a simple program for working with plain text. But some programmers use more specialized software to write their programs.

High-level languages

are used by most programmers. The particular language a programmer uses depends largely on the job to be done. For example, PHP and JavaScript are commonly used for web-based programming. The PHP language handles code on the server computers that send out internet data to users. JavaScript, in contrast, handles code on client, or user, computers. Other common high-level languages include C, C++, Java, Python, Ruby, LISP, and Prolog.

New languages are constantly being developed, often building on earlier languages. For example, Java draws from both C and C++. Many older languages have become obsolete and are no longer commonly used. For example, COBOL (COmmon Business Oriented Language) and Fortran (formula translation) were among the earliest high-level languages, but they are far less widely used today.

Using objects.

Some high-level languages, such as Java and C++, support the use of objects. An object, in this sense, includes a block of data and the methods (functions) that act upon those data. Object-oriented programming (OOP) uses objects that can work together to create a whole program, somewhat like the parts of a car.

The properties of an object are defined by its class. A class is a model of the data variables and methods for a particular kind of object. Each object is an instance (particular example) of its class, with actual data values instead of variables. OOP relieves programmers of the need to re-create sections of code in long programs. They can instead simply use different instances of the same class. The same class may also be used in more than one program. This arrangement makes changing the software relatively easy.
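
The short Python sketch below shows a class and two instances of it. The class name, data values, and method are invented for the example; the point is that the class defines the model, while each object carries its own data.

```python
# A minimal sketch of a class and two instances (objects).
class BankAccount:
    def __init__(self, owner, balance):
        self.owner = owner          # data variables defined by the class
        self.balance = balance

    def deposit(self, amount):      # a method that acts on the object's data
        self.balance += amount

# Two instances of the same class, each with its own data values.
a = BankAccount("Ana", 100)
b = BankAccount("Ben", 250)
a.deposit(50)
print(a.balance, b.balance)         # 150 250
```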

Other features

offered by high-level languages include functional programming and logic programming. Functional programming emphasizes functions rather than the objects upon which functions act. Logic programming is based on formal logic, a framework for relating facts. These types of programming are less common than object-oriented programming, and many languages do not support them. But they are frequently used in artificial intelligence applications and financial software.

Some high-level languages may be used in more than one way. For example, the languages OCaml and Scala support both object-oriented programming and functional programming.
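
As a small illustration of the functional style, the Python sketch below passes functions to other functions and transforms data without changing it in place. The tax rate and prices are arbitrary example values.

```python
# A brief functional-style sketch: functions are treated as values and
# applied to data, producing new data rather than modifying the old.
prices = [4.00, 2.50, 10.00]

def with_tax(rate):
    # Returns a new function that adds the given tax rate to a price.
    return lambda price: round(price * (1 + rate), 2)

taxed = list(map(with_tax(0.08), prices))         # apply a function to each item
expensive = list(filter(lambda p: p > 5, taxed))  # keep items that pass a test
print(taxed, expensive)                           # [4.32, 2.7, 10.8] [10.8]
```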

Assembly languages.

Instead of working with high-level languages, programmers sometimes write their programs in assembly languages. An assembly language is harder to work with than a high-level language. A given assembly language is also specific to a particular type of computer. The programmer must state each instruction with much more detail than is needed when using a high-level language.

The advantage of assembly language is that each assembly language instruction corresponds directly to one machine code instruction. Translating a program into the form the computer can execute is therefore most straightforward for assembly language. Programs called assemblers handle this translation task.

Translating high-level languages.

High-level languages cannot be run directly by a CPU. They must first be translated into machine language that the CPU can execute.

Further complicating matters is the fact that different types of computers use different machine languages. Each computer has its own way of interpreting binary numbers as data and instructions. A program written for one type of machine may not directly run on another type.

Programmers use several methods to make a high-level language work on a wide variety of computers. These methods include (1) compiling and (2) interpreting programs.

Compiling a program

means translating it from one language to another. Unlike assembly languages, high-level languages do not correspond directly with machine languages. Thus, compilers for high-level languages are usually more complex than assemblers.

Typically, a compiler translates programs from a high-level programming language into a lower-level language. A compiler may translate high-level language statements directly into machine language instructions. Compilers may also translate high-level statements into one or more intermediate-level languages, such as assembly languages, that are closer to machine languages.

Some compilers translate from one high-level language into another. The second language may have the advantage of being more portable—that is, compatible with a wider variety of machine languages. For example, programs are often compiled into the C language, which is noted for being extremely portable.

Interpreting a program

means translating a high-level program as it runs, rather than first translating the entire program into machine language to be run directly by the CPU. The interpreter examines, translates, and performs each statement as it is encountered in the instructions. If program instructions “loop back” to an earlier point in the program, the earlier statements are examined, translated, and executed again.

Programs run with interpreters are generally slower than those that have been translated into machine code and executed directly by the computer. But using interpreters can allow for greater flexibility. For example, the Java language was designed to work with software called a Java interpreter or a Java Virtual Machine (JVM). The JVM interprets an intermediate language that is not tied to any particular machine language. Java programs are designed to be compiled into this intermediate language. They can then run on any computer that has the JVM software.
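
To make the idea of interpreting concrete, the toy Python sketch below examines and performs one instruction at a time, looping back to an earlier instruction when told to. The miniature instruction set (SET, ADD, JUMP_IF_LESS, PRINT) is invented purely for this example and does not correspond to any real language.

```python
# A toy interpreter: translate and perform each statement as it is encountered,
# including a "loop back" to an earlier point in the program.
def interpret(program):
    variables = {}
    pc = 0                                   # index of the current instruction
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":
            variables[args[0]] = args[1]
        elif op == "ADD":
            variables[args[0]] += args[1]
        elif op == "JUMP_IF_LESS":
            name, limit, target = args
            if variables[name] < limit:
                pc = target                  # loop back to an earlier statement
                continue
        elif op == "PRINT":
            print(variables[args[0]])
        pc += 1

interpret([
    ("SET", "count", 0),
    ("ADD", "count", 1),
    ("JUMP_IF_LESS", "count", 3, 1),         # repeat the ADD until count is 3
    ("PRINT", "count"),                      # prints 3
])
```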

An emulation program is another example of an interpreter. An emulation program can be used on one type of computer to mimic the operation of another type. Such programs enable the computer to run software designed for the other type of system.

Completing and distributing a program.

After a computer program is written, it is debugged—that is, it is thoroughly tested for mistakes, called bugs. The programmer makes the corrections in the written program, and the program is again translated into machine code and tested. These processes are repeated until testing detects no errors or improper functions.

Programmers may use special software called integrated development environments (IDE’s) to work on their programs. IDE’s combine a text editor, a debugger, a compiler or an assembler, and a loader into the same program. The loader puts the machine language into memory and instructs the processor to run it. The IDE can thus run a new program, check it for bugs, translate it, and run it, streamlining the development process.

Once a program has been completely tested, debugged, and compiled into machine language, the compiler is no longer needed unless further changes must be made to the program. The file containing the machine language can be copied and distributed to anyone who needs the program.

Computer science and the theory of computation

Computer science is the scientific study of the application of computers to solving problems. Computer scientists work in such fields as artificial intelligence, computer security, data storage and retrieval systems, and communication networks. Programming is a major component of most areas of computer science. Some computer scientists work to create more efficient programs, including compilers and interpreters.

A related branch of computer science focuses on the theory of computation and its mathematical description. In this study, scholars seek to understand computing at its most abstract and fundamental level.

Computing has always been deeply connected to mathematics. In mathematical terms, a computer program is an algorithm. An algorithm is a step-by-step mathematical procedure. Generally, an algorithm takes data as input and outputs an “answer” or “solution” to a mathematical problem. Scientists who study computer theory hope to determine what sorts of questions computers can and cannot answer.

To understand the capabilities—and limits—of computing, computer scientists often use abstract models. The most famous such model is known today as the Turing machine.

The Turing machine.

In 1936, the British mathematician Alan Turing developed the idea of a universal computing machine that could potentially solve any mathematical problem. Turing’s machine was a thought experiment, not an actual device. But it showed the great potential of programmable computers.

Turing machine

A Turing machine has four basic parts. They are (1) the tape, (2) the head, (3) the machine’s states, and (4) instructions.

The tape

is a long sheet or ribbon that passes through the head. It is divided along its length into cells. The cells may contain symbols—such as the 1’s and 0’s of binary code. The cells can also be blank. The tape is essentially data. It is input into the machine for processing.

The head

does the processing. Its operation is extremely limited. The head can read the contents of the tape, one cell at a time. It can erase symbols and write new ones. And it can move one cell in either direction.

States

can be thought of as steps in the machine’s operation. The machine starts at an initial state. The machine must contain some mechanism to keep track of its changing state throughout its operation.

The instructions

describe what basic actions the head is to do and in what order. Crucially, the instructions include the next state to be entered when the head has finished its current action. The complete set of instructions, together with the states they define, form an algorithm the machine follows—a computer program.

At any given state of the machine, and any given cell on the tape, the instructions determine what the head does next. For example, in the initial state, if the head reads 0 on the tape, it may be instructed to erase the 0, write 1, and move one cell farther down the tape. It may then be instructed to proceed to another state. A different state may contain a different instruction for what to do if the head reads 0. If the head comes to a certain cell at a certain state, its instructions may cause it to halt, or stop operation.

A Turing machine’s operation

is determined by its data—the tape—and its program, the assignment of instructions to states. At the initial state, the data on the tape is unprocessed. The head then processes the data, one cell at a time, based on its instructions. Once the machine stops, the data on the tape is fully processed.
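
The Python sketch below simulates a simple Turing machine along the lines just described. The example program is invented for illustration: starting in its initial state, the machine replaces each 0 it reads with 1, moves right, and halts when it reaches a blank cell.

```python
# A minimal Turing machine simulator. Instructions map a (state, symbol)
# pair to (symbol to write, direction to move, next state).
def run_turing_machine(tape, instructions, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else " "   # blank past the end
        write, move, state = instructions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

instructions = {
    ("start", "0"): ("1", "R", "start"),   # erase 0, write 1, move right
    ("start", "1"): ("1", "R", "start"),   # leave 1 unchanged, move right
    ("start", " "): (" ", "R", "halt"),    # blank cell: stop operation
}
print(run_turing_machine("0010", instructions))   # prints 1111
```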

One of Turing’s most important insights was that the machine’s program—no matter how long or how complicated—can be encoded as a series of 1’s and 0’s on a tape. A Turing machine’s tape, therefore, may contain both the data to be processed and the instructions for the head. The tape may even contain instructions and data for a second Turing machine, or any number of other Turing machines, because the tape can be infinitely long. A computing machine like this one that is able to simulate other computing machines is called “universal.” This idea of a universal Turing machine forms the basis of modern computer programs, which are stored and processed as data.

Computability.

Turing conceived of the universal computing machine because he wanted to understand the logical limits of algorithms. He sought an answer to the question: Which mathematical problems are computable, and which are not?

The Church-Turing thesis

is the idea that a Turing machine can be built to execute any algorithm or any computable function—given a long enough tape and enough time for the head to process its information. The thesis is named after Turing and his teacher, the American mathematician Alonzo Church.

Every computer that has ever been built can be simulated using a Turing machine. Any mathematical problem that can be solved by a real computer can also be solved by some Turing machine. However, Turing showed that there are mathematical problems that even the most powerful Turing machine imaginable cannot solve.

The halting problem

is a famous example of such an unsolvable problem. A Turing machine can be instructed to halt—stop operation—for a certain input of data at a certain state. But whether or not the machine actually halts depends on the input of data. Some Turing machines, on some input, will run in a loop forever once they start processing and never give an output.

In real computers, such endless operations are undesirable, because they waste power and may freeze up the computer. Thus, it would be useful to have some way to automatically determine whether a given program can end up stuck in such an endless loop.

But to get an answer to the question, Will this computer program halt?, one needs to run the program. If the program does not halt after a certain length of time, a human observer might assume that it will never halt. But no definite answer is ever given by the computer, one way or the other.

One might conceive of a special computer program that takes any program as its input and, somehow, answers whether or not it will halt. But Turing proved that, no matter how brilliant its programming or how powerful the hardware it runs on, no such program can ever be created. The impossibility of such a program has many practical implications for software design, testing, and debugging. For example, it shows that it is impossible to create a program that tests all computer software for potential security breaches.
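
Turing’s argument can be sketched in Python-like code. The function halts below is hypothetical; the sketch shows why no real version of it could ever be written.

```python
# A sketch of Turing's argument. The function halts() is hypothetical:
# Turing proved that no such function can actually exist.
def halts(program, data):
    ...  # hypothetical: would return True if program(data) eventually halts

def paradox(program):
    if halts(program, program):
        while True:        # if told it halts, loop forever instead
            pass
    return "done"          # if told it loops forever, halt immediately

# What would halts(paradox, paradox) report? Either answer contradicts what
# paradox actually does, so no program like halts() can ever be created.
```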

P equals NP?

is a broader question in computer science that has yet to be answered. P and NP are two classes of mathematical problems. The precise mathematical descriptions of the classes are highly technical. Roughly speaking, P is the class of problems a computer can solve quickly, and NP is the class of problems whose proposed solutions a computer can check quickly. The idea can be summarized as a question: Is it fundamentally easier for a computer to verify a solution to a problem than it is to find the solution in the first place?

Computer scientists have not been able to prove the answer to this question, one way or the other. Currently, computers are good at verifying solutions to many problems, but not so good at finding them. If P is proven to be equal to NP—that is, if finding solutions is fundamentally no more difficult than verifying them—the proof would imply that the creative problem-solving commonly done by human mathematicians could one day be done by a computer.
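
The toy Python sketch below illustrates the gap between verifying and finding, using the well-known subset-sum problem. The list of numbers and the target value are arbitrary example data.

```python
# Verifying a proposed answer to a subset-sum problem is quick; finding an
# answer by brute force may require trying every combination of numbers.
from itertools import combinations

numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(subset):
    # Fast: just add up the proposed subset and compare with the target.
    return sum(subset) == target

def find():
    # Slow in general: the number of subsets doubles with each new number.
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

print(verify((4, 5)))   # True   -> checking a given answer is easy
print(find())           # (4, 5) -> searching for one can take far longer
```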

Computer networks and security

The communication of data over networks is one of the most important and influential uses of computers. The internet links billions of computers together, from powerful servers to tiny smart watches.

Computers on a trading room floor

Increased connectivity, however, brings increased risk of security breaches. Certain databases hold private and personal information, such as medical, banking, or tax records. Others contain business plans or inventions that a company wishes to conceal from competitors. Still others store top-secret military information and other kinds of data important to a nation’s security.

Before widespread use of the internet, a person usually needed physical access to a computer to get its data. Today, criminals can spread harmful programs over the internet that secretly gather personal information from “infected” computers. It is also relatively easy for wrongdoers to penetrate certain wireless networks and spy on other users.

To protect against such risks, much sensitive computer data is encrypted, meaning it is translated into a secret code. Only those authorized to view the information can decrypt it, or translate it back into readable language. Many kinds of data sent over the internet, such as credit card numbers, are encrypted.

Networks

transmit data among multiple computers. Businesses often establish small networks for their own use. A local area network (LAN) connects a company’s workstations within the same building or among neighboring buildings. A wide area network (WAN) links workstations and servers over larger areas. Both LAN’s and WAN’s enable co-workers to exchange information rapidly. They also enable computers to share printers and storage devices.

The internet is a “network of networks,” linking computers around the world. Each computer on the internet is given an Internet Protocol (IP) address. An IP address is a number that is used in much the same way as a home address is used for receiving mail. Unlike mail deliveries, however, packets of data travel over the internet at almost the speed of light, enabling nearly instant communication. Specialized computers called routers send packets of data through the internet.

IP addresses work within a set of protocols (rules) called Transmission Control Protocol/Internet Protocol (TCP/IP). These rules determine how all data are sent over the internet, from one IP address to another. Smaller networks may also use TCP/IP to handle data transfers. Some organizations make use of such private networks, called intranets, which are “walled off” from the larger internet.
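
The following Python sketch, which uses the language’s standard socket library, shows these ideas in miniature. The host name, port, and addresses are examples only; TCP/IP handles the packaging, routing, and reassembly of the data behind the scenes.

```python
import socket

# Look up a host's IP address and open a TCP connection to it. The host name
# and port below are examples; the actual IP address returned may vary.

host = "example.com"
ip_address = socket.gethostbyname(host)       # for example, "93.184.216.34"
print(f"{host} resolves to {ip_address}")

# TCP/IP breaks the request into packets, routes them, and reassembles the
# reply; the program simply sees a reliable two-way stream of bytes.
with socket.create_connection((ip_address, 80), timeout=5) as connection:
    request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    connection.sendall(request.encode())
    reply = connection.recv(1024)
    print(reply.decode(errors="replace").splitlines()[0])   # e.g., "HTTP/1.1 200 OK"
```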

A user typically establishes an internet connection through a device called a modem. The modem can hook up directly to a computer. Or it can connect via a home or office router. A modem typically uses a cable connection or specialized telephone line to link to the internet. Some modems connect to a dish antenna that uses radio waves to communicate with satellites. Modems are generally supplied by companies called internet service providers (ISP’s) for a monthly fee. In addition, many mobile devices can access the internet through cellular networks.

Encryption.

Laws limit the disclosure of information in databases, and operating systems are designed to prevent unauthorized access to a computer. Often, a computer user must enter a password. To further protect their information, some computer systems automatically encrypt the data they hold.

Sensitive information must be encrypted before being transmitted over a communications line that may not be secure. Internet businesses encrypt credit card numbers and other personal information so that a purchaser can safely transmit the information over the internet. Only the intended recipient can decrypt the transmission.
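
A toy cipher written in Python can illustrate the basic principle of secret-key encryption: the same key that scrambles the data also unscrambles it. Real systems use far stronger methods; the key and message below are examples only.

```python
# A toy example of secret-key encryption. Each byte of the message is combined
# (using XOR) with a byte of the key; applying the same key again reverses it.
# Real ciphers are far stronger, but the principle is the same.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"swordfish"                       # known only to sender and recipient
message = b"Card number 4111 1111 1111 1111"
ciphertext = xor_cipher(message, secret_key)    # unreadable without the key
recovered = xor_cipher(ciphertext, secret_key)  # the same key decrypts it

print(ciphertext)           # scrambled bytes
print(recovered.decode())   # Card number 4111 1111 1111 1111
```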

In public-key encryption, a mathematical technique is used to create a pair of special codes, called keys. A key pair consists of a private key, which its user keeps secret, and a mathematically related public key, which is available to everyone. Anyone can use the public key to encrypt a message, but only the holder of the corresponding private key can decrypt it. A person may send a message with a digital signature, a computation derived from the person’s private key and the data being sent. Another person using the corresponding public key can use this signature to verify the identity of the person who sent the data. It is practically impossible for an individual to determine a private key using the associated public key within a reasonable amount of time.
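
The relationship between a public key and a private key can be shown with a toy, RSA-style calculation in Python. The tiny primes below are for illustration only; real keys are based on numbers hundreds of digits long.

```python
# A toy, RSA-style key pair built from tiny primes. Real keys use numbers
# hundreds of digits long, which is what makes them practically impossible
# to reverse.

p, q = 61, 53                  # two secret prime numbers
n = p * q                      # 3233, shared by both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent; the public key is (n, e)
d = pow(e, -1, phi)            # private exponent (2753); requires Python 3.8+

def encrypt(message, public=(n, e)):
    return pow(message, public[1], public[0])

def decrypt(ciphertext, private=(n, d)):
    return pow(ciphertext, private[1], private[0])

message = 42
ciphertext = encrypt(message)   # anyone can encrypt with the public key
print(decrypt(ciphertext))      # 42 -- only the private key recovers it

# A digital signature works the other way around: the sender "signs" with the
# private key, and anyone can check the signature with the public key.
signature = pow(message, d, n)
print(pow(signature, e, n) == message)   # True -- the signature checks out
```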

Security threats.

Despite protective measures taken by businesses and individuals, computer crimes sometimes occur. Industrial spies and thieves often link their computers to a network to gain access to other computers on that network. Some of these criminals steal or change the information in a computer database. Others steal money by transferring funds electronically. Some wrongdoers can correctly guess poorly chosen passwords. Others run programs called packet sniffers on computers that handle internet traffic, seeking to collect pieces of data as they are transmitted.

Phishing involves tricking people into giving up such personal information as passwords, credit card numbers, or social security numbers. Phishers may send e-mail messages that impersonate legitimate institutions. In website spoofing, phishers create entire fake websites that lure people into giving up their personal information. The use of such stolen personal information is a crime called identity theft.

In the late 1980’s, computer experts realized that some software—often called malware—could be used to damage data stored on computers. Programs known as viruses are designed to do mischief, sometimes by deleting or changing information and sometimes by simply inserting a message. Other destructive programs known as Trojan horses and worms spread over the internet. Some types of malware enable wrongdoers to take complete control of an “infected” computer, even spying on the computer’s owner by hijacking the device’s camera and microphone. Criminals may also link many compromised computers together into a network called a botnet. A botnet can supply a great deal of computing power for criminal activities.

Criminals are not the only security threat to computer systems and networks. Computers help run military operations, transportation networks, and electric power systems—making them tempting targets for rival or enemy nations. Attacking a country’s computer systems is called cyberwarfare. Experts believe that most nations have cyberwarfare capabilities.

History

The first electronic, digital, programmable computers were invented in the mid-1900’s. Before then, computing machines were little more than obscure mathematical curiosities. Before the 1940’s, in fact, the term computer generally referred not to a machine, but to a job held by a human being. Computers—often women—were people skilled at quickly making calculations. They worked for businesses, governments, and scientific organizations.

The ideas behind modern digital computers, however, have much earlier roots. Many engineers, mathematicians, and scientists over hundreds of years contributed to the development of computing machines.

Early calculating machines.

In 1642, the French mathematician, scientist, and philosopher Blaise Pascal invented the first automatic calculator. The device performed addition and subtraction by means of a set of wheels linked by gears. The first wheel represented 1’s, the second wheel represented 10’s, the third stood for 100’s, and so on. When the first wheel was turned 10 notches, a gear moved the second wheel forward a single notch. The other wheels became engaged in a similar manner. In early versions of the machine, some wheels had 12 or even 20 notches, reflecting standard divisions among the denominations of French currency.
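
The carrying action of the linked wheels can be modeled in a few lines of Python. The sketch below is a simplified illustration that uses only ordinary decimal wheels, ignoring the special currency wheels.

```python
# A simplified model of Pascal's linked wheels. Each wheel holds one decimal
# digit; turning a wheel past its last notch advances the next wheel by one.

def add_on_wheels(wheels, amount, position=0):
    """wheels[0] is the 1's wheel, wheels[1] the 10's wheel, and so on."""
    wheels[position] += amount
    while wheels[position] > 9:                        # the wheel has passed 9...
        carry, wheels[position] = divmod(wheels[position], 10)
        wheels[position + 1] += carry                  # ...so a gear nudges the next wheel
        position += 1
    return wheels

wheels = [7, 9, 0, 0]              # the number 97, with the 1's wheel listed first
print(add_on_wheels(wheels, 5))    # [2, 0, 1, 0] -- that is, 102
```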

During the early 1670’s, the German mathematician Gottfried Wilhelm Leibniz improved Pascal’s calculator. Leibniz added gear-and-wheel arrangements that made possible multiplication and division.

Binary.

Leibniz also developed and promoted the binary numeration system as a counting system that was easier for a machine to handle than was the decimal system. Neither Leibniz nor his contemporaries found much practical use for binary numbers. But the binary system would prove critical to the development of electronic digital computers.

Another important contribution to the development of binary mathematics was made in the mid-1800’s by George Boole, an English logician and mathematician. Boole used the binary system to invent a new type of mathematics. Boolean algebra and Boolean logic perform complex mathematical and logical operations using the symbols 0 and 1. This development made it possible to perform complex calculations using only those two digits, shaping the development of computer logic and computer languages.
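
A few lines of Python show how the two digits can carry both numbers and logic. The particular number and letter below are arbitrary examples.

```python
# Numbers, letters, and logic, all written with 1's and 0's.

number = 19
print(format(number, "b"))          # "10011" -- the number 19 in binary

letter = "A"
print(format(ord(letter), "08b"))   # "01000001" -- a standard 8-bit code for "A"

# Boolean logic works on just two values: 1 (true) and 0 (false).
a, b = 1, 0
print(a and b)    # 0  (AND is true only if both values are true)
print(a or b)     # 1  (OR is true if at least one value is true)
print(1 - a)      # 0  (NOT flips 1 to 0 and 0 to 1)
```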

Punched-card devices.

The next great contribution to the development of the computer was made by Joseph Marie Jacquard, a French weaver. In the weaving process, workers manipulate thread on a loom to produce often complex patterns. Correcting a mistake made in such a pattern on a hand loom might mean undoing many hours of work. During the 1700’s, many people sought to eliminate such mistakes by automating the weaving process.

The difference engine, an early calculating machine designed by Charles Babbage

In 1801, Jacquard succeeded. The Jacquard loom used long belts of punched cards to automate the weaving. The cards controlled needles on the loom. Where there were holes, the needles went through them. Where there were no holes, the needles were blocked. The position of each needle controlled whether one thread would be above or below another in the cloth. By changing cards and alternating the patterns of punched holes, loom operators could mechanically create complex woven patterns. The presence or absence of a hole could be compared to the two digits of the binary system.

The punched cards of the Jacquard loom inspired the English mathematician Charles Babbage. During the 1820’s, Babbage developed the idea of a mechanical computer. He worked on several versions of the machine for almost 50 years. He called early versions the difference engine and later versions the analytical engine. When performing complex computations or a series of calculations, his analytical engine would store data on punched cards for use in later operations. Babbage’s analytical engine contained all the basic elements of an automatic computer—a memory (which Babbage called the store), a processing unit (the mill), a system for moving data between the two, and input and output devices. But Babbage lacked funding to build an analytical engine. Much later, in 1991, the Science Museum in London constructed an operational version of one of Babbage’s difference engines from his original drawings.

English mathematician Charles Babbage

Although Babbage never built an analytical engine, one of his contemporaries, the English mathematician Ada Lovelace, planned sequences of steps for the machine to perform. Many people consider her to be the world’s first computer programmer.

Foundations of data processing.

In 1888, the American inventor, statistician, and businessman Herman Hollerith devised a punched-card system, including the punching equipment, for tabulating the results of the United States census. Hollerith’s machines made use of electrically charged pins that, when passed through a hole punched in a card, completed a circuit. The circuits registered on another part of the machine, where they were read and recorded. Hollerith’s machines tabulated the results of the 1890 census, making it the fastest and most economical census up to that time. In a single day, 56 of these machines could tabulate census information for more than 6 million people.

Punched-card tabulating machine invented by Herman Hollerith

Governments, institutions, and industries found other uses for Hollerith’s machine. In 1896, Hollerith founded the Tabulating Machine Company. He continued to improve his machines. In 1911, he sold his share of the company. Its name was changed to the Computing-Tabulating-Recording Company (C-T-R). In 1924, the name was changed to International Business Machines Corporation (IBM).

The first electronic computers.

The first special-purpose electronic digital computer was constructed in 1939 by John V. Atanasoff, an American mathematician and physicist, and Clifford Berry, an American graduate student in electrical engineering. During World War II (1939-1945), the German inventor Konrad Zuse designed and built two general-purpose digital calculators for work on engineering problems. Rather than electronic parts, they made use of electric telephone relays (switching devices). Also during World War II, British codebreakers designed and built a number of electric codebreaking devices used to decode German military messages. Alan Turing designed one called the Bombe. Other British codebreakers built more advanced, programmable devices called Colossus machines. In 1944, Howard Aiken, a Harvard University professor, built a digital computer called the Mark 1. The operations of this machine were controlled chiefly by electromechanical relays.

The Colossus machine, a British computer used during World War II

In 1945, two engineers at the University of Pennsylvania, J. Presper Eckert, Jr., and John William Mauchly, completed one of the earliest general-purpose electronic digital computers. They called it ENIAC (Electronic Numerical Integrator And Computer). ENIAC included about 17,500 vacuum tubes. These tubes performed the roles served by transistors in more modern computers. The machine occupied more than 1,500 square feet (140 square meters) of floor space and consumed 150 kilowatts of electric power during operation. ENIAC operated about 1,000 times faster than the Mark 1. It performed about 5,000 additions and 1,000 multiplications per second.

Although ENIAC worked rapidly, programming it took a great deal of time. Computer specialists created programs by setting switches manually and plugging cables into the machine’s connector panels. Eckert and Mauchly next worked on developing a computer that could store more of its programming. They worked with John von Neumann, a Hungarian-born American mathematician. Von Neumann helped assemble all available knowledge of how the logic of computers should operate. He also helped outline how stored programming would improve performance. In 1951, a computer based on the work of the three men became operational. It was called EDVAC (Electronic Discrete Variable Automatic Computer). EDVAC strongly influenced the design of later computers.

Also in 1951, Eckert and Mauchly completed a more advanced computer called UNIVAC (UNIVersal Automatic Computer). UNIVAC became the first commercially successful computer. Unlike earlier computers, UNIVAC handled numbers and alphabetical characters equally well. It also was the first computer system in which the operations of the input and output devices were separated from those of the computing unit. Like ENIAC, UNIVAC used vacuum tubes.

The first UNIVAC was installed at the U.S. Census Bureau in June 1951. The following year, another UNIVAC was used to tabulate the results of the United States presidential election. Based on available data, UNIVAC accurately predicted the election of President Dwight D. Eisenhower less than 45 minutes after the polls closed.

Miniaturization.

The invention of the transistor in 1947 led to the production of faster and more reliable electronic computers. Transistors soon replaced the bulkier, less reliable vacuum tubes. In 1958, Control Data Corporation introduced the first fully transistorized computer, designed by the American engineer Seymour Cray. IBM introduced its first transistorized computers in 1959.

Computer technology improved rapidly during the 1960’s. Miniaturization continued with the development of the integrated circuit, a complete circuit on a single chip, in the early 1960’s. This device enabled engineers to design both minicomputers and high-speed mainframes with large memories. By the late 1960’s, many large businesses relied on computers.

In 1965, an American research scientist named Gordon Moore noticed an interesting trend in computing. He observed that the number of transistors that could fit onto an integrated circuit seemed to double every year. He predicted that this rapid progress would continue in the future, though he later forecast that the doubling of transistor counts would occur every two years. This trend became known as Moore’s Law.

By the early 1970’s, the entire workings of a computer could be placed on a handful of chips. As a result, computers became smaller. In 1968, Moore cofounded Intel Corporation with the American inventor Robert Noyce, one of the inventors of the microchip. The company developed the first microprocessor in 1971. It contained around 2,500 transistors.

Remarkably, Moore’s Law has held steady since its formulation. Manufacturers have doubled the transistor counts on computer chips roughly every two years—perhaps in part by treating Moore’s Law as a challenge. Moore’s Law describes an exponential growth in computing power, a type of growth much faster than steady, linear growth. For example, the Intel Pentium 4 microprocessor, released in 2000, contained 42 million transistors—about 17,000 times the number of the original 1971 processor. In just 30 years, Intel’s computer chips had become thousands of times more powerful, without becoming larger, using more power, or costing more to produce.
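
The arithmetic behind this comparison is ordinary exponential growth, as the short Python calculation below illustrates. It uses the rounded figures given above, so the result is only an order-of-magnitude estimate.

```python
# Moore's Law as arithmetic: the transistor count doubles every two years.
start_year, start_count = 1971, 2_500        # the first microprocessor
target_year = 2000                           # the year of the Pentium 4

doublings = (target_year - start_year) / 2   # 14.5 doublings in 29 years
predicted = start_count * 2 ** doublings

print(f"about {predicted:,.0f} transistors")
# Prints roughly 58 million -- the same order of magnitude as the Pentium 4's
# 42 million, which is what a rounded rule of thumb can be expected to give.
```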

The personal computer.

The first personal computer, the Altair, was introduced in kit form in 1975. Only electronics hobbyists bought these computers. But the promise of a small, easy-to-use computer would transform the industry and society.

Personal computer from the early 1980's

In 1976, two young American computer enthusiasts, Steven P. Jobs and Stephen G. Wozniak, founded Apple Computer, Inc. (now Apple Inc.). The next year, they introduced the Apple II personal computer. The Apple II was much less expensive than mainframes, and it was sold as an assembled unit, not as a kit. This made the personal computer available to people other than computer specialists and technicians. Personal computers were purchased by small and medium-sized businesses that could not afford mainframes or did not need the immense computing power that mainframes provided. Millions of individuals, families, and schools also bought personal computers.

Steve Wozniak and Steve Jobs

In 1975, former schoolmates Bill Gates and Paul Allen founded Microsoft Corporation to develop programs for the Altair. In 1981, IBM entered the personal computer market with its PC. The machine was even more successful than the Apple II. Microsoft soon began to develop programs for the PC. Apple scored another success in 1984 with the introduction of its Macintosh, a powerful, easy-to-use desktop computer.

Xerox Corporation developed the first graphical user interface (GUI) in the late 1970’s. Both Apple and Microsoft later adopted GUI’s for their OS’s, which contributed to their ease of use.

Software platforms.

During the 1980’s, a number of companies began manufacturing personal computers. Soon, computers became commonplace in homes and offices. Most of these computers ran Microsoft’s operating system software, called Windows. Apple, on the other hand, continued to manufacture its own Mac computers, which only ran Apple’s software. Apple computers were also typically more expensive than competing machines, limiting their appeal. Consumers and businesses bought far more Windows-capable PC’s than Apple Macs.

Apple cofounder Steve Jobs with the Macintosh personal computer

Even though Microsoft did not manufacture computers itself, the dominance of its software earned the company a fortune. The company’s strategy illustrated the importance of computing platforms. An operating system is often said to function as a platform for other types of software, which must be built “on top” of the OS to work. Because far more computers ran Windows than ran Apple’s OS, Windows secured a much broader platform. Consequently, developers were inclined to make far more software programs for Windows computers. The greater availability of software for the Windows OS, in turn, further increased Windows’ popularity with consumers—broadening the Windows platform even further.

Rival computing platforms also battled for dominance in the market of electronic games. During the 1980’s and 1990’s, video game companies manufactured consoles that hooked up to a television. Developers had to design game software to work with specific console platforms. Thus, the console that could secure the best software often became a more popular platform with consumers—which in turn further increased the amount of software developed for the console, and so on.

In 1991, the Finnish computer programmer Linus Torvalds introduced a computer operating system called Linux. Unlike other OS’s, Linux was open source, meaning that the source code used to create the program was made freely available. Other programmers were encouraged to modify the code and share their improvements. Though Linux never became a dominant platform on personal computers, its source code found widespread use in later applications, such as operating systems used to run internet server computers.

The original iMac personal computer, launched in 1998

The internet and the World Wide Web.

The internet began in the late 1960’s as ARPAnet, a group of interconnected military and other government computers in the United States. The U.S. Department of Defense created ARPAnet to ensure secure communication in the event of war or natural disaster. Soon after ARPAnet began, universities and other institutions created their own networks. These networks eventually merged with ARPAnet to form the internet.

Before the 1990’s, the main users of the internet were computer scientists and the military. The World Wide Web, developed in 1991 by the British computer scientist Tim Berners-Lee, helped transform both the internet and the use of PC’s. The Web turned PC’s into user-friendly gateways into the internet’s rapidly expanding content. Through the use of web browser programs, ordinary people could easily use their PC’s to explore the internet and communicate with one another.

Student using a laptop in a university library

Web browsers soon became another battleground in the computer market. A company called Netscape created the first truly popular web browser, called Netscape Navigator, in 1994. Beginning in 1995, Microsoft began promoting a rival web browser called Internet Explorer by bundling the program with its Windows OS. Because so many computers ran Windows, Internet Explorer quickly overtook Netscape as the most popular web browser. In 1998, however, the U.S. Department of Justice sued Microsoft for anticompetitive practices. It charged that bundling Internet Explorer with Windows violated antitrust laws. The European Union (EU) filed similar charges. Microsoft eventually settled with the United States in 2002 and paid a $600-million fine to the EU in 2004.

As more people interacted on the Web, internet service providers sought to increase the speed and reliability of internet connections. By the late 1990’s, broadband connections, which often used cable lines, began replacing slower dial-up modems, which connected via telephone lines. Internet connections grew faster at the same time computers increased in power and in storage capacity. The distribution of songs, videos, games, and other data-intensive content soon proliferated on the internet.

Other programs—notably electronic games—also took advantage of the internet in the late 1990’s, connecting players across the globe. In 2001, Microsoft entered the video game console market with its Xbox console. Microsoft had previously manufactured a number of computer accessories, but the Xbox marked the software company’s first venture into manufacturing computers. The Xbox and its rival consoles, with their added internet functionality, more closely resembled general-purpose computers than did previous video game consoles.

Mobile computing.

During the 1990’s, most personal computers were desktops that plugged into wall outlets. But as Moore’s Law held steady, computer components continued to shrink at an exponential rate. Laptop computers soon matched the power of desktops and became steadily more popular. The spread of wireless internet access made laptops even more attractive to consumers.

Laptop computer components

Laptops, though portable, maintained the basic design and function of desktop computers. Both laptops and desktops had keyboards, for example. Both types of computers used the same operating systems with similar user interfaces.

During the 1990’s and 2000’s, however, other types of small, portable electronic devices became more and more computerlike. They included portable music players and, most importantly, cellular telephones. Familiar computer OS’s proved difficult to adapt to these devices’ small screens and limited inputs. Manufacturers had to develop entirely new user interfaces to make these mobile devices double as general-purpose computers.

Personal digital assistants, or PDA’s, first appeared during the 1980’s, but became more popular and powerful in the 1990’s and 2000’s. These electronic devices often featured touch screens and styluses. People used them to keep notes and organize contacts. Eventually, manufacturers made PDA’s that could connect to the internet wirelessly.

Smartphones.

Cellular telephones began transmitting sound digitally in the 1990’s. They had previously done so using analog radio waves. Digital transmission enabled the development of the first internet-connected smartphones. The Canadian company Research in Motion pioneered smartphone development with its BlackBerry phone, released in 2002. The BlackBerry featured a miniature typewriter-style keyboard and enabled users to send and receive e-mail over a cellular network. It also had a relatively large screen—for the time—and a streamlined operating system. The BlackBerry proved especially popular with businesspeople who wanted to send and receive e-mail outside the office. However, its screen and OS were not well suited to displaying websites and more complicated computer programs. Many other smartphones of the early 2000’s ran another operating system, called Symbian, and had similar limitations.

In 2007, Apple launched the iPhone. Unlike the BlackBerry and other smartphones, the iPhone had no physical keyboard. Instead, a colorful touch screen covered its entire surface. Apple designed a simplified version of its operating system—later called iOS—to work especially well with the phone’s touch screen. When users needed to type, iOS displayed a virtual keyboard. The iPhone also enabled users to control programs using swipes, pinches, and other touch-based finger gestures. The iPhone was expensive, but its high-quality hardware, expressive user interface, and ability to play music from Apple’s popular iTunes music service helped propel its popularity. In 2008, Apple launched its App Store, an online store where users could easily download software for their iPhone.

Inside of a smartphone

In the same year, the internet company Google released a rival smartphone operating system, called Android, with a touch screen user interface similar to iOS. Hoping to establish a presence in the fast-growing smartphone market, Google employed a strategy similar to Microsoft’s strategy for personal computers during the 1980’s. Google did not manufacture smartphones itself, but instead let smartphone manufacturers use its Android software.

By 2010, smartphones had become the best-selling type of general-purpose computer. Millions of consumers purchased smartphones with touch screens. These devices were mostly iPhones at first, but cheaper, Android-based smartphones soon caught up in popularity.

Tablets.

Companies manufactured many tablet devices before the 2010’s, but none were commercially successful. These devices were often bulky and used PC-based operating systems that proved unwieldy to control. In 2010, Apple released the iPad, a tablet computer about the size of a magazine. Like the iPhone, the iPad had most of its surface covered by a touch screen and used a similar operating system. Software developers released new apps that took advantage of the iPad’s portability and large touch screen. Android-powered tablets entered the market soon afterward.

During the 2010’s, mobile computers vastly outsold laptop and desktop PC’s. Some people used their tablet or smartphone as their primary computer. New smartphones featured larger screens, while new tablets featured smaller, more portable forms. The two types of devices converged with the introduction of so-called “phablets”—that is, smartphones the size of small tablet computers.

Three Apple iPad tablet computers

Computer hardware continued to follow the trajectory of Moore’s Law, and so PC’s became much cheaper and more capable. Manufacturers made thin, streamlined laptops that were sometimes less expensive than tablet computers. Many laptops featured the same solid-state storage drives found in smartphones and tablets.

The future of computing.

In the 2010’s, Moore’s Law continued to hold steady. But chip manufacturers began pushing up against physical limitations that threatened to prevent the design of ever-smaller transistors. Using nanotechnology (the creation and study of structures on the scale of atoms and molecules), manufacturers created transistors less than 100 atoms wide. In 2012, researchers created an even smaller experimental transistor out of a single phosphorus atom, though it was impractical for use. Even in theory, however, a transistor cannot be smaller than a single atom.

One active area of research in computer science is quantum computing. Quantum mechanics is a branch of physics that describes the behavior of particles at the subatomic level. It is often contrasted with classical mechanics, which describes larger, everyday phenomena.

Digital electronic computers already rely on quantum mechanics to work. The behavior of electrons in transistors, for example, is based on the laws of quantum mechanics. A true quantum computer, on the other hand, would take advantage of a uniquely quantum-scale phenomenon called superposition. In a classical computer, a bit is either switched “on” or “off.” In a quantum computer, a quantum bit, or qubit, can be held in superposition—that is, a combination of “on” and “off” states.

Using qubits held in superposition, a quantum computer could quickly perform certain computations that would take a classical computer thousands or millions of years to complete. Such computations are often used in encryption. The laws of quantum mechanics also make reading out a qubit’s result difficult, because any measurement “collapses” the superposition.
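
A toy simulation in Python can make the bookkeeping concrete. The qubit below is just a pair of numbers, called amplitudes; squaring them gives the chance of observing 0 or 1, and a measurement forces the qubit into whichever value was observed. This is only an illustration of the mathematics, not of how real quantum hardware operates.

```python
import random

# A toy model of one qubit: a pair of amplitudes. The squares of the
# amplitudes give the probabilities of measuring 0 ("off") or 1 ("on").

amp0 = amp1 = 2 ** -0.5          # an equal superposition of "off" and "on"
print(amp0 ** 2, amp1 ** 2)      # each outcome has probability about 0.5

def measure(amp0, amp1):
    # Measurement yields a definite 0 or 1 and collapses the superposition:
    # afterward, the qubit is entirely in the state that was observed.
    outcome = 0 if random.random() < amp0 ** 2 else 1
    collapsed = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
    return outcome, collapsed

print(measure(amp0, amp1))       # for example: (1, (0.0, 1.0))
```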

Manufacturers have successfully built quantum computers with hundreds of qubits. Experts think that quantum computers will eventually help computer scientists, engineers, physicists, chemists, and other researchers perform complex calculations. But quantum computers will probably not replace classical computers in everyday use.