Coding is the process of creating instructions that a computer can carry out. Such instructions are generally written using computer languages or programming languages. Just as there are many human languages that people use to communicate with one another, there are many programming languages that people can use to communicate with computers. Instructions written in a programming language are called code.
A computer programmer, also called a coder, is a person who writes code to instruct computers to do particular jobs. Computers can perform a wide range of tasks to accomplish a job efficiently. For example, computers can perform mathematical calculations; store and retrieve large amounts of information; analyze data to identify trends; and display graphics. To get a computer to perform such tasks, a programmer must be able to write and debug code. Debugging code involves identifying and fixing any mistakes—called bugs—that prevent the code from working properly. A programmer must also be able to organize the code’s structure in such a way that the code works accurately and efficiently.
The ability to code has become an important skill, because people increasingly rely on computers to help with many aspects of life. Modern cars, smartphones, televisions, thermostats, and many other common devices depend on code to run properly.
Fundamentals of coding
Programmers write code in multiple lines. Like a sentence of text, a line of code is written with characters and symbols. A line of code generally contains a single instruction. A programmer must follow various rules to write lines of code that effectively communicate instructions to a computer.
Writing code.
To successfully write a computer program, a programmer must use different kinds of instructions, such as declarations, operations, and control flow instructions. These instructions work together to form the structure of the program.
Declarations are lines of code that name a space in the computer’s memory and tell the computer what kind of information the space can contain. Using a declaration, a programmer can tell a computer how to use a particular program element. For example, a programmer might use a declaration to tell a computer that a certain program element is a variable. A variable is a value, such as a number, a word, or a list, that can be changed by the program or the user. A declaration can tell a computer how to handle the variable within the program.
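For example, in the Python programming language (one of many possible choices), a programmer might declare variables like this; the names used here are made up for illustration:

```python
# Declare a variable named "score" and note that it should hold a whole number.
score: int = 0

# Declare a variable named "player_name" and note that it should hold text.
player_name: str = "Ada"
```

In Python, these type labels are hints for programmers and programming tools rather than strict requirements; many other languages require such declarations and enforce them.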
Operations are instructions that tell the computer to perform specific functions as part of a computer program. An operation uses symbols or characters called operators to tell the computer what to do with certain values. Depending on the programming language that the programmer is using, an operation can be written with arithmetic symbols, such as + and -; comparison symbols, such as > and <; or other sets of operators. The numbers, words, or other values that are represented in an operation are called operands.
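A brief Python sketch (the values are made up) shows arithmetic and comparison operators at work:

```python
price = 4 + 3              # the + operator adds the operands 4 and 3, giving 7
discount = price - 2       # the - operator subtracts 2 from price, giving 5
is_cheap = discount < 10   # the < operator compares discount with 10, giving True
```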
Control flow.
Programmers must also determine the control flow (also called the flow of control or the execution sequence) of their program. The control flow is the order in which a computer should follow the steps of the program. Programmers create a control flow to make sure the computer performs actions in the correct order. A loop is one example of an instruction that can help regulate a program’s control flow. A loop makes the computer repeat a section of code, often until certain conditions are met.
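For example, a simple loop in Python (a made-up countdown) repeats a section of code until a condition is met:

```python
countdown = 3
# Repeat the indented lines until countdown is no longer greater than 0.
while countdown > 0:
    print("Counting down:", countdown)
    countdown = countdown - 1
print("Liftoff!")
```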
Debugging.
In addition to writing code, programmers must be able to debug code. A small mistake, such as a misspelled word or a missing symbol, can prevent the code from running properly or from running at all. If the computer cannot run the code, it displays an error message. This message typically identifies the line where the problem was detected and describes it. However, a program with a bug may still run. The computer simply returns results different from what the programmer intended. Bugs of this kind can be particularly hard to find.
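A small Python example (deliberately flawed, with made-up values) shows a bug of this kind; the program runs without an error message but returns the wrong answer:

```python
def average(numbers):
    # Bug: the sum should be divided by the number of values, not by 2.
    return sum(numbers) / 2

print(average([2, 4, 6]))  # Prints 6.0, but the correct average is 4.0
```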
Structuring code.
Coding also involves using techniques to make the code easy to maintain over time. For example, a programmer might write 20 lines of code to perform a particular calculation. If the same calculation is needed in different parts of the program, the programmer can copy the same 20 lines and paste them wherever needed. This technique works, but the code may become difficult to maintain. If any of those 20 lines need to be revised, the programmer would have to find every place where they were pasted and update each one individually. Instead of copying and pasting, the programmer could place the 20 lines of code in a single function. The computer would then execute those 20 lines wherever the function was called. If any of the lines needed revision, the programmer could simply revise the function once, rather than revising every pasted copy. Functions are one of many techniques a programmer can use to make code easier to maintain.
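A short Python sketch (the calculation and tax rate are made up) shows the idea. The repeated lines live in one function, and the function is called wherever the result is needed:

```python
def sales_tax(price):
    # In a real program, this function might contain many lines of logic.
    # Here, a single line stands in for the repeated calculation.
    return round(price * 0.08, 2)

# The same function can be called wherever the calculation is needed.
print(sales_tax(10.00))   # 0.8
print(sales_tax(25.50))   # 2.04
```

If the tax rate ever changed, the programmer would update the function once instead of hunting down every pasted copy.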
Coding languages
A computer’s data is encoded in the form of numbers. The only numbers a computer can handle directly are 0 and 1. This limitation comes from the fact that computers run on electrical signals. Each signal can be detected as either off (represented by 0) or on (represented by 1). Representing information using only 0’s and 1’s is known as binary numeration, or binary for short. Computers can represent numbers, words, images, sounds, and virtually any other kind of information in binary, as long sequences of 0’s and 1’s.
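For example, a short Python sketch can display the binary form of a number and of a letter (the letter is first converted to its standard numeric code):

```python
number = 19
letter = "A"

print(format(number, "08b"))        # 00010011 -- the number 19 as eight binary digits
print(format(ord(letter), "08b"))   # 01000001 -- the code for "A" (65) as eight binary digits
```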
Working directly in binary would be tedious and difficult. Instead, a programmer writes code in a high-level language. High-level languages are designed to be written and understood by people. A program written in a high-level language must be translated into machine language for a computer to carry it out. A machine language is a kind of low-level language that uses only the numerals 0 and 1. A computer can interpret code in a machine language and execute it as a series of binary operations.
Programmers work in many different high-level languages. To choose a particular language, the programmer must consider the type of job to be done. A programmer must also consider which language other programmers use to do the same job. Using the same language enables a community of programmers to work together and share code. Here are some examples of programming languages used in different fields.
In web development.
The layer of a website that a user can see and interact with is commonly called the front end. Front-end development requires HTML, CSS, and JavaScript—languages that enable a programmer to build the structure and content of the site. Many sites, including blogs and social media sites, save custom information for users. Such sites require programmers to build a back end. The back end is the layer of a website that allows users to access information from the website’s database. Back-end development makes use of such languages as Ruby, Python, PHP, and JavaScript.
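A tiny back-end sketch in Python, using the language’s built-in http.server module, hints at how a back end returns saved information. A real site would use a web framework and a separate database; the user name and note here are invented:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a database: saved notes for each user.
SAVED_NOTES = {"alice": "Remember to water the plants."}

class NotesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Look up the note for the user named in the request path, such as /alice.
        user = self.path.strip("/")
        note = SAVED_NOTES.get(user, "No note found.")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(note.encode("utf-8"))

HTTPServer(("localhost", 8000), NotesHandler).serve_forever()
```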
In video game development.
Making video games requires a programming language that can efficiently generate graphics and audio, the visual and sound elements of the game. Such languages tend to be at a lower level than other high-level languages—making them more difficult for a programmer to use but faster for a computer to execute. Languages used in video game development include C++, C# (pronounced C sharp), and Java.
In embedded computer programming.
An embedded computer is a computer built into a larger device. Embedded computers are the most widespread kind of computer and are used to control such common objects as calculators, microwave ovens, and airplanes. Embedded computers are usually designed to run a single program. Programming an embedded computer requires a language that uses minimal resources. Lower-level languages such as C, C++, and Java work well with embedded systems. Versions of some higher-level languages, such as Python and JavaScript, have been adapted to run on embedded computers.
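As a rough illustration, a board running MicroPython (a version of Python adapted for embedded computers) might blink a light with a single, endlessly repeating program; the pin number below is an assumption that depends on the board:

```python
from machine import Pin   # MicroPython module for controlling hardware pins
import time

led = Pin(2, Pin.OUT)     # assume an LED is wired to pin 2

# An embedded program typically runs one loop forever.
while True:
    led.value(1)          # turn the LED on
    time.sleep(0.5)
    led.value(0)          # turn the LED off
    time.sleep(0.5)
```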
Learning to code
Coding has become a vital field of study. Computers have shaped every major industry, including such varied industries as agriculture, health care, and transportation. This transformation has created a growing demand for computer programmers who can write the code needed to advance these industries. Additionally, understanding how coding works can help people to better interact with the software they use in their daily lives.
Many resources are available for people at any skill level who are interested in learning to code. They include interactive online courses, video tutorials, and books. Many people use such resources to teach themselves the basics of coding. Some single-board computers (computers built on a single circuit board), such as the Raspberry Pi, are designed to teach people how to program embedded computers.
Many toys and games on the market today can teach young people the basics of coding. Some toy robots, such as Cue and Sphero, can be programmed to perform a variety of functions. Children, teens, and adults can build and program their own robots using Lego Mindstorms kits. Players who design their own games in the electronic gaming platform Roblox can learn simple coding sequences to add to the complexity of their creations. Scratch is a programming language for children and teenagers that allows them to make games and videos.
Some people use online resources to begin a professional career as a computer programmer. However, many choose a more traditional, structured path that includes earning credentials valued by employers. The established process involves earning a bachelor’s degree in computer science. The required program of study focuses on the fundamentals of computer programming; data structures and algorithms; operating system design; and advanced mathematics. Computer programmers have many different career options, including web development, video game programming, database administration, and data science.
People who wish to learn to code can also enroll in a coding boot camp, a program designed to teach the necessary skills to get a programming job in a relatively short period. A coding boot camp typically takes about 12 weeks to complete. Boot camps focus primarily on the fundamentals of coding, practical skills, and applications for a specific industry. Most coding boot camps teach web development, a rich job market for programmers in the early stages of their careers.
Many people learn to write code as a hobby. Coding can be used to control robots, drones, and internet-enabled devices. Some people write code to produce artwork. Others do it to automate boring, repetitive tasks. People sometimes learn to code simply for the challenge of mastering a complex skill.
History
People began coding in the 1800’s, when mathematicians first invented machines that could perform large numbers of calculations. In 1843, the English noblewoman Ada Lovelace wrote the first known computer program. It consisted of instructions for programming the analytical engine, a steam-powered mechanical computer designed but never built by the English mathematician Charles Babbage.
The first general-purpose electronic computers were built in the 1940’s. Programs for these computers were painstakingly written in assembly language, a low-level language requiring custom code for each machine. In the 1950’s, computer programmers created high-level languages that made it possible for the same code to be run on various machines. This feat was accomplished by translating the high-level code into low-level codes specific to each machine. The first widely used high-level language, called FORTRAN, was developed in the mid-to-late 1950’s at the International Business Machines Corporation (IBM) and is still in use today.
In the mid-1950’s, the American computer scientist Grace Hopper created a high-level language called FLOW-MATIC. It was the first programming language to use everyday human words—rather than just mathematical symbols—to represent code. The introduction of FLOW-MATIC made programming more accessible to non-mathematicians. Three years later, Hopper directed the work that led to the development of COBOL, which by the 1970’s became one of the most widely used programming languages.
One of the most efficient and widely used programming languages today is called C. The American computer scientist Dennis Ritchie developed C in the early 1970’s. Code written in C translates easily into machine language, making C a practical language choice for many industries. Nearly every major modern operating system—including Android, iOS, Linux, macOS, and Windows—is written at least in part in C. C has indirectly inspired the development of a wide variety of newer languages, such as C++, C#, Java, Go, PHP, Python, and Ruby. These newer languages have lost some of the efficiency of C, but they are generally easier for programmers to use.
In 1958, the American computer scientist John McCarthy created the Lisp programming language. McCarthy designed Lisp to be a mathematical notation (system of mathematical symbols) for computers, with an emphasis on mathematical accuracy rather than speed and efficiency. Lisp is still used in various forms. It directly influenced the development of such programming languages as Common Lisp, Scheme, and Clojure. It has indirectly influenced additional languages, such as ML, Haskell, and OCaml. Programmers consider languages in the Lisp family to be more academic and less practical for industry use, compared with the C family of languages. However, as computers have become increasingly powerful, running Lisp-family languages has become more practical.
Early programming languages were designed to run as efficiently as possible, because computers had limited memory and processing power. Since then, computer memory hardware has become far less expensive, and processors have become much faster. Computer scientists are working to create new programming languages and features that take advantage of today’s advanced computing hardware.
One modern challenge that computer scientists face is to design programming languages that can take advantage of concurrency, a feature that enables different parts of a program to be run independently. Concurrency enables a program to run at the same time on multiple machines or processors, helping to enable distributed computing and other technologies that make use of multiple connected computers.
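A minimal Python sketch (the task and its timing are invented) shows the basic idea of concurrency, with two tasks running at the same time instead of one after the other:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def prepare_report(name):
    # Stand-in for a slow task, such as fetching data over a network.
    time.sleep(1)
    return name + " finished"

# Both tasks run concurrently, so together they take about one second
# instead of about two seconds when run one after the other.
with ThreadPoolExecutor() as pool:
    for result in pool.map(prepare_report, ["report A", "report B"]):
        print(result)
```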