Photography

Photography is the process of making pictures by means of the action of light. Light reflected from a subject forms an image of that subject on a light-sensitive device or material. Usually, but not always, the device or material is inside a camera. The image formed by the light is then digitally or chemically processed into a photograph. The word photography comes from Greek words meaning to write or draw with light.

Nature photography

Photography enriches our lives in many ways. From photographs, we can learn about people in other parts of the world. Photographs show us scenes from such historic events as the American Civil War (1861-1865) and the first human moon landing in 1969. Photographs also remind us of special people and important events in our own lives. Millions of people throughout the world take pictures of their family, friends, vacations, and celebrations.

Special cameras can capture images in places where human beings cannot go—into deep space, to the bottom of the ocean, and inside the human body. Photographs made by visible light, X rays, infrared rays, or other forms of radiant energy help physicians detect many types of cancer and other diseases.

Cameras can also “see” events in a way that the eye cannot. For example, some cameras can record action that occurs so rapidly we see it only as a blur. Through this type of photography, scientists examine moving parts of machinery and study hummingbirds in flight.

Scientific research is only one of the many fields in which photography plays an important role. The advertising industry uses photographs to publicize products and services. Photography is such an essential part of news reporting that photojournalism has become a specialized field. Mug shots and pictures taken with hidden cameras help the police find criminals. Military leaders use aerial photographs to learn about enemy troop movements and plan battle strategy.

Some photographs, like great paintings, have lasting value as works of art. Such pictures, through the photographer’s imagination and technical skill, are exceptionally beautiful or express significant ideas.

A camera works in much the same way as the human eye. Like the eye, a camera takes in rays of light that are reflected from an object and focuses the rays into an image. But the camera records the image on film or converts the image into electronic information and stores it. As a result, the image can be made permanent, reproduced endlessly, and seen by an unlimited number of people.

A crude type of camera was developed by about 1500. However, the first true photographs were not made until the 1820’s and 1830’s. Early photographers needed a great deal of heavy equipment and a knowledge of chemistry. Gradually, as a result of scientific and technical advances of the 1800’s and 1900’s, cameras became smaller, lighter, more efficient, and easier to operate. Today, a person can take a photograph simply by aiming a camera and pressing a button.

From the late 1800’s to the late 1900’s, almost all photographs were made using chemical processes that required film and a darkroom. In the late 1990’s and early 2000’s, digital technology began to replace these chemical processes. Digital cameras record incoming light rays as electronic signals. The images captured by digital cameras can be easily transferred to a computer and processed using digital imaging software. Today, most professional and amateur photographers work with digital cameras. But film cameras and chemical processing methods are also still in use.

Amateur photography

Photography can be divided into two general areas—still photography and motion pictures. This article will discuss still photography. Both film and digital photography will be covered. For information about motion pictures, see the World Book articles on Motion picture and Camcorder. This article will discuss kinds of cameras; how to take photographs, including the principles of lenses, focusing, exposure, and lighting; the processes used to make film and digital photographs; the history of photography; and careers in photography.

Cameras

The camera is the photographer’s basic tool. There are hundreds of kinds of cameras, with a wide range of designs, features, and complexity. But all cameras have some basic things in common.

How a camera works.

A camera is basically a lightproof box with a small aperture (opening) at one end and a light-sensitive electronic chip or piece of film at the opposite end. The electronic chip is called an image sensor. Light reflected from a scene enters the camera through the aperture and exposes the sensor or film. The inside of the camera must be completely dark so that light rays reach the sensor or film only through the aperture. A device called a shutter opens when the camera is taking a picture. It remains closed at all other times to keep light away from the sensor or film.

How a film camera works

In nearly all cameras, the aperture is part of a lens mounted on the front of the camera. A lens consists of multiple pieces of curved glass or plastic inside a tube. The lens concentrates incoming light rays on the sensor or film. In this way, the lens gathers enough light to make an exposure in only a fraction of a second. Without a lens, the exposure might have to be several minutes long, and it would not form a sharp image.

How an image is formed inside a camera

Most cameras have a viewfinder that helps the photographer frame the scene to be photographed. The viewfinder may be a small window through which the photographer looks directly at the scene. It may be a window through which he or she views a mirror image of the scene on a screen. Digital cameras have monitors (video displays) that often can act as viewfinders.

When the shutter opens, light from the subject forms an image on the sensor or film. Rays of light from the top of the subject go through the aperture and form the lower part of the image. Light rays from the bottom of the subject form the upper part of the image. Thus, the image on the sensor or film is upside down.

In addition to concentrating the rays of light, the lens focuses them on the sensor or film. As the light rays pass through the aperture, the lens bends them so that they form a sharp image. The sharpness of the image depends on the distance between the subject and the lens, and between the lens and the sensor or film. Many cameras have a mechanism that moves the lens forward and backward. In other cameras, the lens is fixed.

In digital cameras, after the sensor is exposed, the focused image is converted into electronic data. These data are transferred to the camera’s storage device, which is usually a small removable card. The stored images can be downloaded to a computer, manipulated using digital imaging software, printed, transmitted via e-mail, or posted on a website.

In film cameras, chemical changes occur on the film when it is exposed. These changes form one or more invisible latent images on the film. The latent images become visible once the film is removed from the camera and treated with chemicals in a procedure called developing. Prints are made by transferring images from the developed film to light-sensitive printing paper.

Most cameras have an internal computer system that can automatically adjust focusing, aperture size, shutter speed, and certain other functions. In some cameras, the photographer can adjust these functions manually, giving him or her more creative control over the images. Serious photographers usually work with cameras that have both manual and automatic controls.

Compact cameras

are the most popular cameras for casual photographers. They are small, easy to use, and relatively inexpensive. They typically have automatic functions and a built-in, noninterchangeable lens. Nearly all compacts are point-and-shoot cameras, which means the user simply aims the camera, frames the scene, and presses a button. Pictures taken in this manner are often called snapshots. Compact cameras come in both digital and film models. The film models usually use 35-millimeter film.

Digital point-and-shoot camera

Most film compact cameras and some digital compact cameras have a direct-vision viewfinder, which is a type of optical viewfinder. A direct-vision viewfinder is a small window above or to one side of the lens. The user looks directly through the viewfinder at the scene in front of the camera. The view through a direct-vision viewfinder is slightly different from the view seen by the lens. This difference is called parallax error. With faraway subjects, the difference between the two views is not important. But with close-up subjects, parallax error makes it more difficult to frame the subject correctly. To help correct for parallax error, the viewfinder usually has lines that show the approximate borders of the picture area.

Digital compact cameras have an external monitor that displays a video image of the scene. This monitor is often a liquid crystal display (LCD) screen. The monitor can act as a viewfinder. It can also be used to view pictures already taken. Some digital compacts have an electronic viewfinder (EVF) instead of a direct-vision viewfinder. An EVF relies on a miniature monitor inside the back of the camera. The photographer looks at the monitor through a small window and sees the scene from the same viewpoint as the lens. Thus, parallax error is avoided.

The most basic compact cameras are fixed-focus models with a nonadjustable lens, a single aperture setting, and one or two shutter speeds. They are designed so that everything within a certain range of distance from the lens is in acceptable focus. Some of them are single-use cameras that come preloaded with film. Single-use cameras are usually recycled into new cameras or scrap plastic after the film is processed.

However, most compact cameras have built-in zoom lenses, multiple exposure settings, and other advanced features. A zoom lens can be adjusted to make a subject appear closer or farther away with no loss of focus.

Some direct-vision compact cameras have a focusing device called a rangefinder. A rangefinder shows two images of the scene from slightly different viewpoints. To focus, the user adjusts the focusing mechanism until the two images come together.

Single-lens reflex (SLR) cameras

are the most common cameras used by professional and other serious photographers. They are usually heavier and more expensive than compacts. However, SLR’s give photographers more control over how their pictures will look. Most SLR cameras offer both manual and automatic controls over focusing, exposure, and other functions. In addition, most SLR’s can utilize a variety of interchangeable lenses. The different lenses change the size and depth relationships of objects in a scene.

Digital single-lens reflex camera

Like compacts, SLR’s come in both digital and film models. The digital models are often referred to as DSLR’s. The majority of the film models use 35-millimeter film.

SLR cameras enable a photographer to look at a subject directly through the lens. The term reflex refers to reflection. An SLR has a mirror mechanism between the lens and the image sensor or film. The mirror reflects the image onto a focusing screen that can be seen through the viewfinder window. When the shutter button is pressed, the mirror rises out of the way so the light can expose the sensor or film. Thus, the photographer can see the image as it will be recorded and avoid parallax error. The photographer also can see whether the image is correctly focused.

Like digital compacts, DSLR’s have an exterior monitor. However, the monitor on most DSLR’s cannot be used as a viewfinder. The monitor can display an image only after it is taken.

Other types of cameras.

Digital cameras are built into smartphones and other devices. Smartphone photographs range from low resolution (amount of detail) to fairly high resolution. Many people around the world use smartphone cameras to capture images, which they can then share electronically.

Many professional photographers use medium-format cameras. Such cameras are heavier and costlier than standard SLR’s. But they have a larger area on which to capture an image, and so their photographs have a high resolution. Most medium-format cameras come in either SLR or rangefinder models. They typically use film that measures 2 1/4 inches (6 centimeters) across, or they use a medium-sized image sensor. Some medium-format cameras have interchangeable backs that allow the same camera to switch film types, or even to switch between film and an image sensor. Common sizes of medium-format film images include 6 by 6 centimeters, 6 by 4.5 centimeters, 6 by 7 centimeters, 6 by 9 centimeters, and 6 by 17 centimeters.

Large-format cameras, also called view cameras, use large digital backs or large individual sheets of film. A large-format film image is 4 by 5 inches (10 by 13 centimeters) or larger. Professionals use these cameras because they allow much creative control and produce images with excellent resolution. The lens and the image capture area are mounted on separate planes, called standards, that can each be independently moved up and down, tilted, or rotated. A folding, accordionlike tube called a bellows connects the two standards.

Instant cameras use self-processing film and can deliver a print in seconds or minutes. They were popular in the last half of the 1900’s, but few people buy them today. Instant film has built-in developing chemicals and dyes that process the film immediately after exposure.

Instant camera

Webcams (Web cameras) are simple digital cameras that take relatively low-resolution images for transmission over the internet. They are often built into laptop or desktop computer monitors. An external webcam can be connected to a computer. Webcam video streams can be viewed live over the internet. Webcams can also record video clips and still images for later distribution.

Taking good photographs

To take a good photograph, you must follow certain principles of photography. You should try to “see” as the camera does—that is, be aware of the key design elements of a picture. You should know the effects that different types of light will have on your subject. You should know about the light sensitivity of your image sensor or film, and how that will affect the look of the photograph. Most cameras have controls that adjust the image focus and the amount of incoming light. You need to know how the lens works and how exposure can be controlled.

This section discusses composition—that is, the arrangement of design elements in a photograph. The basic elements of composition include (1) line, (2) shape, (3) space, (4) tone, and (5) color. Later sections of this article discuss the principles of focusing, exposure, light, film, and digital imaging.

Diagonal lines

Line.

There are two principal kinds of lines in photography, real lines and implied lines. Real lines are physically visible. For example, telephone poles and the edges of buildings form real lines. Real lines help to define space and create perspective, the illusion of depth and distance. Implied lines are suggested in an indirect way, such as by a person’s gesture or gaze.

Vertical lines in a photograph

Both real and implied lines can be used to direct a viewer’s eye to parts of a picture. In effective photographs, the lines draw attention to the main subject. The direction of these lines can also be used to reinforce the mood of a picture. Vertical lines, such as those of a tower or a tall tree, may convey a sense of dignity and magnificence. Horizontal lines may suggest peace and stillness. Diagonal ones may emphasize energy or conflict.

Horizontal lines in a photograph

Shape

is the chief structural element in the composition of most photographs. It enables the viewer to immediately recognize a face, a structure, or an object in a picture. Shape also adds interest to composition. The shape of such objects as rocks and seashells can be interesting in itself. A combination of different shapes can provide variety. For example, an outdoor scene can be made more interesting by contrasting the jagged shape of a fence with the soft curves of hills and clouds.

Shape can be used to create a natural frame around a subject. For example, if a photographer shoots a picture of a person in a doorway, the doorway serves as a natural frame that directs the viewer’s eye to the person. Interplay between shapes can form repetitive patterns that help unify a composition. The shape of shadows or silhouettes can create interesting effects. Shape also plays a role in producing visual texture, the illusion of physical, three-dimensional surface qualities.

Space

is the area between and surrounding the objects in a photo. It can be used to draw attention to the main subject and to isolate details. However, large amounts of space can detract from a picture’s interest. A general guideline is that blank space should occupy no more than a third of the photograph.

Proper placement of a subject within the picture space can help convey scale. For example, placing a small subject next to a large subject can help the viewer comprehend their relative size.

Tone

adds depth to a photograph. Without this element, the shapes and spaces in a picture would appear flat. Varying degrees of shadow can help generate form, the illusion of three dimensions. Tone is particularly important in black-and-white photography, where colors are translated into tones of black, gray, and white. If light tones dominate a photo, the mood may seem upbeat and playful. Dark tones may convey a sense of mystery or sadness. Putting light and dark tones side by side can create interesting visual texture.

Color,

like tone, adds depth to a picture and carries an emotional message. Warm colors, such as red and orange, create a sense of action and energy. Cool colors, such as blue and green, appear restful and calm. Interesting visual effects can be created by placing contrasting colors, such as red and green, side by side. For beginning photographers, a good approach is to have one dominant color and a balance between warm and cool colors.

Lenses and focusing

A camera lens collects light rays reflected from a subject and projects them as a focused image onto an image sensor or film. A simple lens consists of a single lens element—that is, a single piece of curved glass or plastic. But most camera lenses are compound lenses that consist of a number of lens elements inside a metal or plastic housing called a barrel. See Lens.

A lens’s main purpose is to ensure that the subject is in focus. Lenses also play other roles. The focal length of a lens influences how large or small objects appear in an image, and how much distance appears to be between them. Focal length also controls the angle of view—that is, the amount of the scene in front of the camera that the lens can “see.” Lenses with short focal lengths have wide angles of view. Lenses with longer focal lengths capture a smaller area of the scene. The focal length also has an effect on depth of field—that is, the range of area in front of and behind a subject that appears in focus. For more information on depth of field, see the Exposure section of this article.
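
The link between focal length and angle of view can be illustrated with a short calculation. The following is a minimal Python sketch, assuming an idealized rectilinear lens and a full-frame 35-millimeter image area 36 millimeters wide; real lenses vary:

```python
import math

def angle_of_view(focal_length_mm, sensor_dim_mm):
    """Approximate angle of view (degrees) for an idealized rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Horizontal angle of view on a 36 mm-wide (full-frame) image area.
for f in (24, 50, 200):  # wide-angle, standard, and telephoto focal lengths
    print(f"{f} mm lens: {angle_of_view(f, 36):.0f} degrees")
# Shorter focal lengths give wider angles of view; longer ones capture less of the scene.
```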

Types of lenses.

Some lenses have a single fixed focal length. However, zoom lenses are often used on both compact and SLR cameras. The focal length of a zoom lens can be changed by moving parts of the lens forward or backward.

There are three main categories of lens focal lengths: (1) standard, (2) wide-angle, and (3) telephoto. A standard lens has an angle of view that approximates the view people see directly in front of them with their eyes. A wide-angle lens has a relatively short focal length and provides a wider angle of view than a standard lens. It is used for large scenes and in locations where the photographer cannot move back far enough to photograph the entire scene. A telephoto lens has a relatively long focal length and makes objects appear larger and closer. It enables photographers to take pictures of faraway subjects from a distance. A zoom lens has variable focal length settings that may cover one or more of these categories.

Some lenses have special uses. A macro lens is used in extreme close-up photography. It focuses on subjects from a short distance. A fisheye lens has an extremely wide angle of view. It produces tremendous depth of field and exaggerates the differences in size between close-up subjects and faraway subjects. A perspective control (PC) lens, also known as a shift lens, can be shifted up, down, or sideways to correct for perspective distortion. For example, it can be used to photograph a tall building so that the sides of the building look more naturally parallel and do not appear to converge.

Focusing

controls the sharpness of an image. The degree of sharpness is determined by (1) the distance between the lens and the subject and (2) the distance between the lens and the sensor or film inside the camera. To form a sharp image of a subject that is close to the camera, the lens must be relatively far from the sensor or film. For subjects far from the camera, the lens must be relatively close to the sensor or film.
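
This relationship follows the thin-lens formula, in which 1/f equals 1/u plus 1/v, where f is the focal length, u is the subject distance, and v is the lens-to-image distance. A brief illustrative calculation, assuming an idealized 50-millimeter lens:

```python
def image_distance_mm(focal_length_mm, subject_distance_mm):
    """Thin-lens formula 1/f = 1/u + 1/v, solved for the lens-to-image distance v."""
    return 1 / (1 / focal_length_mm - 1 / subject_distance_mm)

f = 50  # idealized 50 mm lens
for subject_mm in (1_000, 3_000, 100_000):  # 1 m, 3 m, and a distant subject (100 m)
    v = image_distance_mm(f, subject_mm)
    print(f"subject at {subject_mm / 1000:g} m -> lens {v:.2f} mm from the sensor/film")
# A close subject needs the lens farther from the sensor or film; a distant one needs it
# closer, approaching the focal length itself.
```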

With fixed-focus cameras, the user must position the camera at a certain distance from the main subject to ensure proper focus. But most cameras have a focusing mechanism that can move the lens forward and backward. Many cameras have both manual and automatic mechanisms. Some cameras have only one or the other.

With manual focus, the photographer selects the part of the scene he or she wants to be sharpest, and then rotates the lens barrel to obtain that result. Automatic focus usually works in one of two ways. Most film compact cameras use active autofocus, also called infrared autofocus. An active autofocus system calculates the distance to a subject by sending out a beam of invisible infrared light. The beam bounces off the subject and returns to the camera. Most SLR’s and digital cameras use passive autofocus. A passive autofocus system electronically analyzes the image formed by the lens. The system detects the contrast—that is, the difference between the light and dark areas—and the hardness of the edges in the image. An image generally is in focus when it reaches maximum contrast and has hard edges, and so the system adjusts the lens until this point is reached.
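
The contrast-seeking behavior of passive autofocus can be sketched in a few lines of code. The sketch below is a generic illustration only, not any manufacturer's actual algorithm; the sharpness measure and the capture_at() function are assumptions introduced for the example:

```python
import numpy as np

def sharpness(gray_image):
    """Contrast metric: variance of neighboring-pixel differences.
    Sharp, hard-edged images score higher than blurred ones."""
    dx = np.diff(gray_image.astype(float), axis=1)
    dy = np.diff(gray_image.astype(float), axis=0)
    return dx.var() + dy.var()

def autofocus(capture_at, lens_positions):
    """Try each lens position and keep the one whose image scores highest."""
    scores = {pos: sharpness(capture_at(pos)) for pos in lens_positions}
    return max(scores, key=scores.get)

# capture_at(position) is assumed to return a 2-D grayscale array from the sensor.
```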

SLR cameras have a focusing screen where the photographer can see and manually focus the image. When light enters the camera and forms the image, a mirror reflects the image onto the focusing screen, which is viewed through the viewfinder window. Some focusing screens use a microprism, a split-image prism, or both. These focusing aids cause part of the image area to appear broken up or not aligned if the image is not focused. Rangefinder cameras have a focusing system that works in a similar way to the split-image prism.

Exposure

Exposure is the total amount of light that reaches the image sensor or film in a camera. Exposure affects the quality of a photograph more than any other factor. If too much light enters the camera, the sensor or film will be overexposed, and the picture will be too bright. If there is insufficient light, the sensor or film will be underexposed, resulting in a dark picture.

In some simple cameras, the exposure is fixed. But most cameras have controls that regulate the incoming light. To set the exposure, the photographer adjusts the settings on these controls. On many cameras, the photographer can select an exposure mode that helps automatically determine the proper exposure.

Two controls regulate exposure. One of these controls changes the speed of the shutter, and the other changes the size of the aperture. Proper exposure involves understanding the relationship between shutter speed and aperture size.

Shutter speed

is the amount of time the shutter remains open to let light make the exposure. A slow shutter speed lets in a large amount of light, and a fast shutter speed admits only a little.

Most cameras have a shutter speed range that varies from 1 second or slower to 1/1000 of a second or faster. DSLR’s often have shutter speeds between 30 seconds and 1/8000 of a second. These speeds are represented by whole numbers on the standard scale of shutter speeds. The number 500 on the scale stands for 1/500 of a second, 250 means 1/250 of a second, and so on. Each number represents twice the speed of the preceding number or half the speed of the next number. At a setting of 250, for example, the shutter works twice as fast as at a setting of 125, and half as fast as at a setting of 500.
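
The halving relationship along the standard scale can be checked with simple arithmetic, as in this brief sketch:

```python
# Standard shutter-speed scale: each setting lets in about half as much light as the one before.
settings = [1, 2, 4, 8, 15, 30, 60, 125, 250, 500, 1000]  # denominators: 1/1 s ... 1/1000 s
for slower, faster in zip(settings, settings[1:]):
    print(f"1/{slower} s is roughly twice as long an exposure as 1/{faster} s")
```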

Fast shutter speeds enable photographers to take sharp pictures of moving subjects. Any movement of the subject while the shutter is open will be recorded as a blur. At a setting of 1/1000 of a second or faster, the shutter is open for such a short time that even the motion of a speeding race car appears to be “stopped.”

A camera must be kept steady during an exposure to avoid blurring. When the shutter speed is longer than 1/30 of a second, a tripod may be required. A tripod is a three-legged support used to position, stabilize, and elevate a camera. A single-legged support called a monopod can be used when the photographer needs to move around more quickly than a tripod would allow.

Aperture size

is changed by the lens diaphragm, which consists of a circle of overlapping leaves. The diaphragm expands to make the aperture larger and contracts to make it smaller. A large aperture admits more light than a small aperture.

Aperture size

The various sizes of an aperture are called f-stops. On most cameras, the f-stops range from a maximum of 2 to a minimum of 22 or 32, and they include normal settings of 2.8, 4, 5.6, 8, 11, and 16. In addition, DSLR’s often allow the f-stop to be set at any point between the normal settings. The smaller the f-stop number, the larger the size of the aperture. Each normal f-stop setting lets in twice as much light as the next normal setting and half as much light as the preceding setting. For example, if you open up the setting from f/11 to f/8, the aperture admits twice as much light. If you stop down the setting from f/11 to f/16, the aperture lets in half as much light.
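
The doubling behavior of the f-stop scale follows from the fact that the light admitted is proportional to the area of the aperture, which varies with 1 divided by the square of the f-number. A short illustrative calculation:

```python
f_stops = [2, 2.8, 4, 5.6, 8, 11, 16, 22]
for wider, narrower in zip(f_stops, f_stops[1:]):
    ratio = (narrower / wider) ** 2   # relative light admitted, by aperture area
    print(f"f/{wider} admits about {ratio:.1f}x as much light as f/{narrower}")
```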

Changes in the aperture size affect the overall sharpness of the picture. As the aperture becomes smaller, the area of sharpness in front of and behind the subject becomes greater. This area of sharpness is called depth of field. It extends from the nearest part of the area in focus to the farthest part in focus. Depth of field can be anywhere from a fraction of an inch to virtually infinite. A small aperture, such as f/11 or f/16, creates greater depth of field than a large aperture, such as f/4. As you open up the aperture, the area in focus becomes shallower. At f/4, the subject will be in focus, but objects in the foreground and background may be out of focus.

Aperture size is a key factor that influences depth of field. But depth of field also depends on the focal length of the lens and the distance between the camera and the subject. A short focal length produces greater depth of field than a long focal length. Depth of field also is greater when the camera is focused on a faraway subject than when it is focused on a close-up subject.
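
These influences can be combined in the standard hyperfocal-distance approximation. The sketch below is illustrative only; it assumes a circle of confusion of 0.03 millimeter, a value commonly used for the 35-millimeter format:

```python
def depth_of_field_m(focal_mm, f_stop, subject_m, coc_mm=0.03):
    """Approximate near and far limits of acceptable focus (hyperfocal formula)."""
    f, s = focal_mm, subject_m * 1000          # work in millimeters
    hyperfocal = f * f / (f_stop * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000, far / 1000             # back to meters

for aperture in (4, 11, 16):                   # 50 mm lens focused at 3 m
    near, far = depth_of_field_m(50, aperture, 3)
    print(f"f/{aperture}: in focus from about {near:.1f} m to {far:.1f} m")
# Smaller apertures (larger f-numbers) extend the zone of acceptable sharpness.
```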

Setting the exposure.

The proper exposure for a picture depends chiefly on (1) the lighting, (2) the subject, and (3) the desired depth of field. Each of these factors may require an adjustment in shutter speed or aperture size. You must choose a combination of settings that will meet all your requirements.

The amount of light in a scene affects both shutter speed and aperture size. On a cloudy day, you should reduce the shutter speed and increase the f-stop. On a sunny day, you should use settings for a fast shutter speed and a small aperture. Certain types of artificial lighting have special requirements for exposure.

The type of subject to be photographed may require an adjustment in the shutter speed, and depth of field may determine the aperture size. If the subject is moving, you must increase the shutter speed to prevent blurring. If you want greater depth of field in your photograph, you need to select a small aperture.

If you adjust either the shutter speed or the aperture size, you must also adjust the other to maintain proper exposure. A fast shutter speed stops the action, but it also reduces the light reaching the image sensor or film. To make up for this reduction in light, you should increase the f-stop. Similarly, a small aperture increases depth of field but reduces the amount of incoming light. Therefore, you should change to a slower shutter speed.

Suppose you want to photograph your dog on a sunny day. A suitable exposure might be a shutter speed of 1/125 and an aperture of f/11. If your dog is moving, you might increase the shutter speed to 1/250. This speed is twice as fast as 1/125 and so half as much light will reach the sensor or film. You should make the aperture twice as large by setting it at f/8.

You may want the photo to include a toy on the ground in front of your dog, and also the trees in the background. To make sure these extra subjects are in acceptable focus, you can increase depth of field by reducing the aperture size. At a setting of f/16, the sensor or film will receive half as much light as it did at f/11. You should also change the shutter to the next slowest speed, doubling the length of time of the exposure. At a slow shutter speed, however, blurring from movement of your dog or the camera may occur. A better option in this situation might be to retain the faster shutter speed and use a faster digital ISO setting or a faster film speed. For information on film speed, see the Film photography section of this article. For information on digital ISO settings, see the Digital photography section.
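
The dog example can be worked out numerically. Total exposure is proportional to the shutter time divided by the square of the f-number, so combinations that keep this ratio constant give roughly the same exposure. A brief sketch:

```python
def relative_exposure(shutter_s, f_stop):
    """Exposure is proportional to (time the shutter is open) / (f-number squared)."""
    return shutter_s / (f_stop ** 2)

base = relative_exposure(1 / 125, 11)    # sunny-day starting point
faster = relative_exposure(1 / 250, 8)   # freeze motion: halve the time, open up one stop
deeper = relative_exposure(1 / 60, 16)   # more depth of field: stop down, slow the shutter
print(f"1/125 at f/11 -> {base:.6f}")
print(f"1/250 at f/8  -> {faster:.6f}  (about the same)")
print(f"1/60  at f/16 -> {deeper:.6f}  (about the same)")
```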

Exposure meters,

also called light meters, measure the amount of light available for a photograph and help determine the correct exposure. Most exposure meters are either (1) reflected light meters, which measure the light reflected from a scene toward the camera, or (2) incident light meters, which measure the light falling on a subject. Meters that are built into cameras are reflected light meters. A built-in meter takes into account the variations in brightness from different areas of the scene.

Most SLR’s and some compacts offer exposure modes that work with the built-in meter to help set exposure. Basic exposure modes include (1) manual, (2) aperture-priority, (3) shutter-priority, and (4) program. In manual mode, the photographer manually sets both the aperture and shutter speed. In aperture-priority mode, the photographer picks the aperture, and the camera automatically selects a shutter speed. In shutter-priority mode, the photographer selects the shutter speed, and the camera adjusts the aperture. In program mode, the camera automatically selects both the aperture and shutter speed based on a built-in computer program.
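
A rough sketch of how aperture-priority and shutter-priority modes might derive the missing setting from a meter reading, using the common exposure-value relation EV = log2(N²/t) at ISO 100. This is an illustration only, not any camera maker's metering program:

```python
import math

def shutter_for_aperture(ev, f_stop):
    """Aperture-priority: the photographer picks the f-stop; solve EV = log2(N^2 / t) for t."""
    return f_stop ** 2 / (2 ** ev)

def aperture_for_shutter(ev, shutter_s):
    """Shutter-priority: the photographer picks the shutter time; solve for the f-number."""
    return math.sqrt(shutter_s * 2 ** ev)

ev = 15   # a typical meter reading for a bright, sunny scene at ISO 100
print(f"f/11 -> about 1/{1 / shutter_for_aperture(ev, 11):.0f} s")
print(f"1/250 s -> about f/{aperture_for_shutter(ev, 1 / 250):.1f}")
```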

Advanced photographers often use a handheld meter that can take reflected light readings, incident light readings, or both. To measure reflected light with a handheld meter, the photographer aims the meter at the main part, or various parts, of the scene. If there are strong contrasts in light and shadow, the photographer can set the camera to expose the most important areas correctly, while allowing other areas to be somewhat underexposed or overexposed. To measure incident light, the photographer stands near the subject and points the meter toward the spot where the photo will be taken. The advantage of an incident light meter is that it is not fooled by a subject’s darkness or lightness into giving an incorrect reading.

Light

Light is photography’s fundamental ingredient. There are two basic types of light, natural light and artificial light. Natural light, also called available light or existing light, is normally present in outdoor and indoor locations. Such light comes chiefly from the sun and electric lights. Artificial light is produced by various types of lighting equipment, such as photoflood lamps and electronic flash devices. Natural light and artificial light have certain characteristics that greatly affect the quality of photographs. These characteristics include (1) intensity, (2) color, and (3) direction.

Intensity

is the quantity or brightness of light. Photographers measure the intensity of light to determine the lighting ratio of a scene. The lighting ratio is the difference in intensity between the areas that receive the most light and those that receive the least. On a sunny day or in a room with bright lights, the lighting ratio may be high. On a cloudy day or in dim indoor light, the ratio is probably low.
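
A lighting ratio can be stated as a simple ratio of the brightest and dimmest meter readings, or in photographic stops, where each stop represents a doubling. A brief illustrative calculation using hypothetical readings:

```python
import math

def lighting_ratio(highlight_reading, shadow_reading):
    """Ratio of the brightest to the dimmest metered areas, also expressed in stops."""
    ratio = highlight_reading / shadow_reading
    return ratio, math.log2(ratio)

ratio, stops = lighting_ratio(1600, 100)   # hypothetical readings: bright sun vs. deep shade
print(f"{ratio:.0f}:1 lighting ratio, about {stops:.0f} stops of contrast")
```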

The lighting ratio affects the degree of contrast in a photograph. A high lighting ratio may produce sharp images with deep shadows and bright highlights. A low ratio creates softer images with a range of medium tones. Thus, a high lighting ratio can create a sense of drama and tension in a picture. A low lighting ratio makes portraits and still lifes look more natural.

The image sensors in digital cameras can record most lighting ratios. High and low ratios can be adjusted with imaging software. Black-and-white film can record a wide range of lighting ratios. However, when working with film used to make color slides, a high lighting ratio may make some colors appear either washed-out or excessively dark. The maximum lighting ratio that an image sensor or piece of film can record is its dynamic range or contrast range.

Color.

The color of light varies according to its source, though most of these variations are invisible to the human eye. For example, ordinary light bulbs produce reddish light, and fluorescent light is basically blue-green. The color of sunlight changes during the day. It tends to be blue in the morning, white at about noon, and pink just before sunset.

Variations in the color of light make little difference in a black-and-white image. In color pictures, however, they produce a wide variety of effects. To control these effects, you can use color filters on your lens, change the white balance setting on your DSLR, or use color film that is designed for different types of lighting. On a digital camera, the white balance setting tells the camera which objects appear white to the human eye in a certain type of light. The camera makes color adjustments based on this white point. Color balance in digital images can also be altered with imaging software.
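
White balancing can be illustrated with the simple "gray world" method, which scales each color channel so that the scene averages to a neutral gray. The sketch below is a generic example, not the method built into any particular camera or program:

```python
import numpy as np

def gray_world_balance(rgb):
    """Scale R, G, and B so their averages match, removing an overall color cast."""
    rgb = rgb.astype(float)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

# Example: a tiny image with a reddish cast, as from tungsten-like light.
warm = np.full((2, 2, 3), (200, 150, 100), dtype=np.uint8)
print(gray_world_balance(warm)[0, 0])   # channels pulled toward a common neutral value
```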

Direction

refers to the direction from which light strikes a subject. Light may reach a subject from a single direction or from more than one direction. The direction greatly affects how the subject looks in the picture.

Front lighting

comes from a source near or behind the camera. This type of lighting shows surface details clearly. However, it should be avoided for pictures of people because the light makes them squint and casts harsh shadows under their features.

Back lighting

comes from a source behind the subject. Light from this direction casts a shadow across the front of the subject. To fill in the shadow, extra light from an electronic flash can be used. This technique is called fill flash. Cameras with built-in flash, and those that accept certain flash units, often perform this function automatically. If the back lighting is extremely bright, the picture may show only the outline of the subject. Back lighting can be deliberately used in this way to create silhouettes.

Side lighting

shines on one side of the subject. Shadows fall on the side opposite the light source. Fill flash can lighten the shadowed areas. Side lighting does not show surface detail as clearly as front lighting does, but it creates a strong impression of depth and shape.

Top lighting

comes from a source directly above the subject. It is used most often in situations where other types of lighting would cause a glare or reflection in a picture. For example, top lighting may be used to photograph fish in an aquarium or objects behind a window because the glass will not reflect the light.

Artificial lighting devices.

The most widely used source of artificial lighting is the electronic flash, which provides a brief burst of light. Many professional photographers use photoflood lamps—lighting devices that can provide continuous light for several hours.

Most cameras have a small built-in flash. In addition, many have a built-in socket called a flash synchronizer. A flash synchronizer coordinates any accessory flash unit with the shutter, so the greatest brightness of the flash occurs at the instant the shutter reaches its full opening. The flash synchronization setting for electronic flash units is often referred to as the X-sync setting or sync speed.

Built-in electronic flash unit

Electronic flash units operate on batteries or on electric current from an outlet. They contain an ionized (electrically charged) gas inside a sealed tube. The gas emits a burst of bright light when an electric current is passed through it. Electronic flash units can fire thousands of flashes. They include small flash units that fit onto the top of a camera, handheld units, and large studio units. Camera-top flash units are mounted onto a flat piece of metal called a hot shoe. The hot shoe contains electrical contacts for the flash unit so that the flash synchronizer can trigger it. The head of a hot shoe flash unit often can be tilted or rotated to make the light bounce off a wall or ceiling.

Accessory electronic flash unit

Filters

can be used to screen out haze and glare or increase the contrast in a picture. A photographic filter is typically a piece of colored plastic or glass. Normally, the filter is in a holder that fits over the camera lens. The filter selectively absorbs certain wavelengths of light. Nearly all filters reduce the amount of light that enters the camera. Therefore, when using a filter, you must increase the exposure by the filter factor listed in the instructions provided with the filter. Filters are more commonly used with film cameras than with digital cameras. This is because digital cameras and imaging software can achieve most of the effects produced by filters.
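
Filter-factor arithmetic is straightforward: a factor of 2 calls for doubling the exposure (one additional stop), a factor of 4 for two stops, and so on, as in this brief sketch:

```python
import math

for filter_factor in (1.5, 2, 4):
    extra_stops = math.log2(filter_factor)   # each stop doubles the exposure
    print(f"filter factor {filter_factor}: open up about {extra_stops:.1f} stop(s)")
```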

There are many types of filters. An ultraviolet filter reduces haze. It is useful for photographing distant subjects and for taking pictures at high altitudes. A polarizing filter makes colors more vivid and screens out glare from shiny surfaces, such as water and glass. Various kinds of color filters can increase contrast or alter color effects in a photograph.

Film photography

During the 1900’s, nearly all photographs were made using film. Today, digital technology is the primary means of making images. But there are still many people who make pictures with film cameras and chemical developing and printing processes.

There are three main kinds of photographic film, based on the type of pictures produced. Black-and-white prints are made from black-and-white negative film, color prints from color negative film, and color slides from color reversal film. Reversal film is also known as transparency film, slide film, or chrome film.

Black-and-white film and color film are developed and printed in much the same way. However, it is more complicated and expensive to process color film than to process black-and-white film. Color film processing requires more precise control, extra steps, and some additional materials. Almost all photographers have their color film processed in commercial laboratories. However, many photographers develop and print their own black-and-white work. By doing their own developing and printing, they can change the size, composition, contrast, and other features of their photographs.

How a black-and-white film photograph is printed

Exposing the film.

Film is a thin sheet or strip of flexible plastic with a light-sensitive coating called an emulsion. Black-and-white film has a single emulsion layer. Color film usually has three emulsion layers. The emulsion layers consist of tiny grains of silver salts held together by gelatin. Silver salts are highly sensitive to light and undergo chemical changes when exposed to it. The degree of change in the salts depends on the amount of light that reaches them. A large amount of light causes a greater change than does a small amount.

The light that reaches the film varies in intensity. Light-colored objects reflect much light, and dark colors reflect little or no light. Therefore, the silver salts react differently to different colors. The chemical changes in the silver salts produce a latent image in each emulsion layer of the film. These images cannot be seen, but they contain all the details of the photograph.

In color film, each of the three emulsion layers is sensitive to only one of the primary colors of light. Although most light looks white to the eye, it is actually a mixture of the primary colors—blue, green, and red. Blending these three colors of light can produce any color. See Color (Mixing colored lights).

When color film is exposed, the first emulsion layer reacts only to blue light, the second layer only to green light, and the third only to red light. Light strikes the first layer and forms an image of the blue areas of the scene. The light then passes through the second layer, forming an image of the green areas. Finally, it goes through the third layer and records an image of the red areas. Three latent images are thus recorded on the film.

Film characteristics.

Film varies in a number of characteristics that affect the overall appearance and quality of photographs. These characteristics include (1) format, (2) speed, (3) graininess, (4) color sensitivity, and (5) color balance.

Format

determines the size and shape of the image recorded on the film. The larger the film format, the more detailed the images will be. The most widely used format is 35-millimeter film, which has an image area that is 24 by 36 millimeters in size. Medium-format and large-format films capture images that are larger than a 35-millimeter image.

Speed

is the amount of time required for film to react to light. A film’s speed determines how much exposure is needed. A fast film reacts quickly to light and needs little exposure. It is useful for scenes that have dim light or involve fast action. A medium-speed film is suitable for average conditions of light and movement. A slow film needs much exposure and should be used for stationary subjects or brightly lit scenes.

The principal system of measuring film speed is the ISO system. ISO stands for the International Organization for Standardization. The higher the ISO number, the faster the film. Films that have numbers of 200 or higher are generally considered fast. Medium-speed films have numbers ranging from 80 to 125, and slow films are numbered lower than 80.

Graininess

is the speckled or streaked appearance of some photos. It is caused by clumps of silver grains on the film. The degree of graininess depends on the film speed. A fast film is more sensitive to light because its emulsion contains larger grains of silver salts. The fastest films produce the grainiest pictures. Slow films produce little or no graininess in standard-sized prints, but some graininess may appear in enlargements.

Color sensitivity

refers to a black-and-white film’s ability to record color differences. On the basis of color sensitivity, black-and-white films are classified into several types. Panchromatic film, the most widely used type, is sensitive to all visible colors.

Color balance

applies only to color film. Such film is sensitive to all colors, including those of different kinds of light. The human eye sees light from most sources as white. But color film records light from light bulbs as reddish, light from fluorescent bulbs as blue-green, and daylight as slightly blue. Variations in the emulsions of different types of color film make the film more or less sensitive to certain colors. These variations balance the color of light recorded on the film so colors in the photograph appear natural. Most color film is balanced either for daylight or for specific types of artificial light.

Developing the film.

After the film has been exposed, it can be removed from the camera. However, it must be kept away from light, because further exposure would destroy the latent image or images. The film is taken to a darkroom or a photographic laboratory. There, it is treated with a chemical called a developer that converts the exposed silver salts in the emulsion into metallic silver. The latent image in each emulsion layer then becomes visible.

During development, the silver salts that received much light form a thick deposit of silver and appear dark. The salts that received little or no light form a thin metallic layer or no layer at all. They appear light or clear. Thus, the light colors and dark colors of the subjects are reversed on the image in each emulsion layer. For example, a piece of coal would appear clear, and a snowball would look dark. The reversed silver images in each emulsion layer are called negative images.

In black-and-white film, the emulsion layer is sensitive to all colors, and so all are represented in the negative image. In color film, the negative image in each emulsion layer represents only the color of light—blue, green, or red—that exposed the layer.

For all types of film, a similar procedure is used to turn each emulsion layer’s latent image into a negative image. But later steps of the development process differ depending on the type of film.

Black-and-white negative film

is the simplest type of film to develop. After the developer turns the exposed silver salts into metallic silver, the action of the developer is stopped either by water or by a chemical solution called a stop bath or short stop. Next, a chemical called a fixer, or hypo, dissolves the unexposed silver salts so that they can be washed away. The film is then washed to remove the dissolved salts and the remaining chemicals. Finally, the film is dried. The developed film is called a negative and has a visible, permanent, reversed image.

How a black-and-white film photograph is printed

Color reversal film

requires two different developers. The first developer changes the exposed silver salts on the film into metallic silver. The action of this developer is then stopped. The film is then reexposed by treatment with a chemical agent so that the remaining silver salts can be developed. The second developer converts the remaining silver salts into metallic silver and also activates a substance called a coupler in each emulsion layer. Couplers unite with chemicals in the developer to produce colored dyes. The colored dyes form around the silver produced by the second developer in each emulsion layer. In the process, a positive image is created in each emulsion layer.

After the second developer is neutralized, the film is bleached to convert all the silver into a form that the fixer can dissolve. Once the film is fixed, washed, and dried, all the silver is gone, and only the dyes remain. The developed film is called a transparency or a positive. A strip of transparencies can be cut into separate pictures and mounted as slides. A slide projector can project the pictures onto a wall or screen.

The colors of the dyes are the complements (opposites) of the primary colors. Yellow is the complement of blue, magenta (purplish-red) is the complement of green, and cyan (bluish-green) is the complement of red. Complementary colors are used as dyes because they reproduce the original colors of the subject in the final photograph.
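
The complement relationship can be shown with simple arithmetic on red, green, and blue (RGB) values: removing one primary from white light leaves its complement. A minimal illustration:

```python
# In additive RGB, white = (255, 255, 255). Removing one primary leaves its complement.
primaries = {"blue": (0, 0, 255), "green": (0, 255, 0), "red": (255, 0, 0)}
for name, (r, g, b) in primaries.items():
    complement = (255 - r, 255 - g, 255 - b)
    print(f"complement of {name}: {complement}")
# blue -> (255, 255, 0) yellow;  green -> (255, 0, 255) magenta;  red -> (0, 255, 255) cyan
```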

On a slide, each area of the subject is transparent in one of the emulsion layers. In each of the other two layers, the area has a complementary color different from that of its original color. For example, suppose a slide includes an image of a blue sky. The sky image is transparent in the first emulsion layer. The image is magenta in the second layer, and cyan in the third layer. When light passes through the slide, each dye acts as a filter on a primary color. The magenta layer holds back green light, and the cyan layer holds back red light. As a result, only blue light passes through the transparent area of the slide, and the sky appears blue.

Color negative film

is treated with only one developer. The developer converts the exposed silver salts into metallic silver and activates dye couplers at the same time. The dyes form around the silver in each emulsion layer. The developer is then neutralized, and the film is bleached, fixed, washed, and dried. At the end of the process, all the silver is gone, and only the dyes remain and form the negative image.

On a color negative, each area of the scene is recorded on an emulsion layer in a color complementary to the original color. A blue area appears as a yellow image on the first layer, a green area appears as a magenta image on the second layer, and a red area appears as a cyan image on the third layer. The images are reversed to their original colors during the printing process.

Making prints

from negative film is similar to exposing and developing the film. Like film, printing paper is coated with an emulsion containing silver salts. Black-and-white printing paper has a single emulsion layer. Color printing paper usually has three emulsion layers, each of which is sensitive to one primary color of light. During exposure, light is passed through a negative onto the paper. The light forms a latent image in each emulsion layer. After development and chemical treatment, the image on the printing paper, called a positive, is visible and permanent.

When light passes through a black-and-white negative, the dark areas of the negative hold back much light. The light and clear areas of the negative let a large amount of light pass through to the paper. Once the paper is developed, the dark areas of the negative will appear as light areas on the print, and vice versa. In this way, the black, gray, and white tones of the print reproduce the tones of the subjects photographed.

When light passes through a color negative, the yellow, magenta, and cyan dyes on the negative hold back light of their complementary colors. In other words, each dye filters out one of the primary colors. Thus, the colors of light that expose the printing paper are the opposite of those that exposed the film. When the paper is developed, couplers in the emulsion layers form dyes that reproduce the colors of the subjects photographed.

Most of the time, people want the image in a final print to be larger than its negative. Larger images are produced using an enlarger. It projects the image from the negative onto printing paper in much the same way as a slide projector throws an image onto a screen. The size of the projected image depends on the distance between the negative and the paper. The greater the distance, the larger the image.

Enlarger

At the same time an image is enlarged, undesirable areas along its edges can be cropped. Specific areas can be darkened or lightened by increasing or blocking the exposure of those areas in isolation. In addition, the color balance of color images can be adjusted by using color filters to alter the light from the enlarger.

Like film, printing papers vary in several characteristics that affect the appearance of prints. These characteristics include size, surface, thickness, speed, tone, and grade. The surface of printing paper ranges from matte (dull) to glossy. The surface may be smooth or have texture. The grade of printing paper refers to the degree of contrast produced in the prints.

Digital photography

In the late 1900’s and early 2000’s, digital technology brought about a revolution in photography. The first commercial digital cameras were expensive and produced images that were inferior to film images. Today, however, digital cameras can make images that equal or exceed the quality of film images. Almost all professional photographers today use digital cameras. Digital cameras began outselling film cameras in the early 2000’s.

Most digital cameras are easy to use. They allow you to view your photographs on a screen immediately after you take them. With a computer, it is easy to organize, store, revise, and transmit your images electronically.

A digital camera records the properties of light, such as intensity and color, as electronic information. This information is in binary code—that is, code consisting of strings of two digits, 0 and 1. Each digit is called a bit. Bits are the smallest units into which the camera is able to digitally divide the visual information. Binary code is the language of computers. See Digital technology.

Capturing digital images.

Like film cameras, digital cameras use a lens to gather light rays from a subject and focus them into an image. When the shutter opens, an image sensor in the camera captures the focused image. In most cases, the sensor is a charge-coupled device (CCD). A CCD is a wafer about the size of a postage stamp that converts light into electric current. Some sensors use complementary metal-oxide semiconductor (CMOS) technology instead of CCD technology. A CMOS sensor performs the same function as a CCD but in a different manner.

Charge-coupled device (CCD) image sensor

An image sensor consists of millions of tiny silicon photodiodes (SPD’s), also called photosites, arranged in rows and columns. Each SPD accumulates an electrical charge according to the amount of light that strikes it. The sensor reads these charges, and then an analog-to-digital converter changes them into binary code. In this manner, the image is broken down into pixels (picture elements). Pixels are the smallest units of brightness and color in a digital image. Each pixel corresponds to an SPD on the sensor. The SPD’s themselves are often called pixels.

The SPD’s measure only the amount of light. For the camera to also record color, the light must be filtered into the three primary colors—red, green, and blue. In most cameras with a single sensor, a color filter array performs this function. A color filter array is a grid pattern of tiny color filters placed over the sensor so that individual filters match up with individual SPD’s.

The most common color filter array is the Bayer filter mosaic or Bayer pattern. A Bayer filter mosaic is a grid of alternating red, green, and blue filters. Each filter allows only red light, green light, or blue light to pass through to its SPD. There are twice as many green filters as there are red or blue filters, because the human eye is more sensitive to green light. Thus, the raw image captured by the sensor is a pattern of red, green, and blue pixels of varying intensity. Each raw pixel contains a single primary color and is missing the other two colors. The camera “guesses” the missing color information in each pixel by examining nearby pixels. This process is called interpolation. Through interpolation, the camera is able to closely re-create the true colors of the image.

Bayer filter pattern
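
The interpolation step, often called demosaicing, can be sketched in code. The version below is deliberately crude; it assumes an RGGB Bayer layout and simply averages whatever nearby samples carry the missing color, whereas real cameras use far more sophisticated methods:

```python
import numpy as np

def naive_demosaic(raw):
    """Very rough demosaic of an RGGB Bayer mosaic: average nearby samples of each color."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                          # red sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # green sites (twice as many)
    masks[1::2, 1::2, 2] = True                          # blue sites
    for c in range(3):
        known = np.where(masks[:, :, c], raw, 0.0)
        count = masks[:, :, c].astype(float)
        neighborhood_sum = sum(np.roll(np.roll(known, dy, 0), dx, 1)
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        neighborhood_cnt = sum(np.roll(np.roll(count, dy, 0), dx, 1)
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[:, :, c] = neighborhood_sum / np.maximum(neighborhood_cnt, 1)
    return rgb

raw = np.random.randint(0, 256, (4, 4)).astype(float)    # stand-in for sensor data
print(naive_demosaic(raw).shape)                          # (4, 4, 3): full color per pixel
```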

A digital camera is categorized based on how many pixels its sensor is capable of capturing. The more pixels an image contains, the sharper the image and the higher its resolution. When an image is made up of millions of pixels, the human eye perceives the image in continuous tones and colors. Camera resolution is usually described in megapixels. A megapixel is 1 million pixels. Budget cameras may record 5 megapixels or less, while large-format professional cameras may capture 20 megapixels or more. Low-resolution images are appropriate for display on a computer screen. A higher resolution is necessary for photographic-quality prints.
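
The arithmetic of resolution is simple. The sketch below relates pixel dimensions to megapixels and to the largest print that holds roughly 300 pixels per inch, a common (though not universal) benchmark for photographic-quality prints:

```python
def describe_sensor(width_px, height_px, print_ppi=300):
    megapixels = width_px * height_px / 1_000_000
    print(f"{width_px} x {height_px} = {megapixels:.1f} megapixels, "
          f"about {width_px / print_ppi:.0f} x {height_px / print_ppi:.0f} inch print "
          f"at {print_ppi} ppi")

describe_sensor(2560, 1920)   # a roughly 5-megapixel budget camera
describe_sensor(5472, 3648)   # a roughly 20-megapixel camera
```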

The ISO system is used to measure a digital camera’s sensitivity to light. Digital ISO settings are intended to roughly resemble the ISO numbers for various speeds of film. The higher the digital ISO setting, the less light needed for the exposure. Also, just as a faster film speed will produce more graininess in a film image, a higher digital ISO setting will produce more noise (random, unwanted pixels) in a digital image. Noise is caused by electrical activity that occurs as an unwanted by-product of the digital imaging process.

Storing digital images.

After the image is captured and broken down into pixels, the image data are transferred to the camera’s storage device. Usually the storage device is a small, removable flash memory card, often called simply a memory card, flash card, or storage card. Flash memory does not lose data when disconnected from a power source, and it allows data to be erased and recorded in blocks. The memory can be reused thousands of times.

In the storage device, the image is saved as a single computer file. The most common file format is JPEG. This name comes from the initials of the Joint Photographic Experts Group, an international committee of digital imaging experts. Most photographs displayed on the Web or sent over the internet are JPEG files.

The JPEG format compresses (reduces) the size of the file. Compression saves storage space and makes the files easier to transmit. JPEG compression is lossy—that is, it permanently eliminates color and detail data from the image. As a result, some image quality is sacrificed to achieve the smaller file size. But much of what is eliminated is irrelevant detail that most people would not notice. Cameras often have JPEG quality settings that allow you to choose how much the file should be compressed.
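As a simple illustration of a quality setting in practice, the following Python snippet uses the widely available Pillow imaging library to save the same picture at two compression levels. The file names and quality numbers are placeholders chosen for the example.

```python
# Saving one image at two JPEG quality settings (file names are hypothetical).
from PIL import Image

img = Image.open("vacation.tif").convert("RGB")    # open an uncompressed source image
img.save("vacation_q85.jpg", "JPEG", quality=85)   # mild compression, larger file
img.save("vacation_q30.jpg", "JPEG", quality=30)   # heavy compression, smaller file, more artifacts
```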

Another file format is TIFF (tagged image file format). In a digital camera, a TIFF file is uncompressed, or compressed in a lossless way, and thus it is significantly larger than a JPEG file. In lossless compression, no information is permanently eliminated from the image. TIFF images are suitable for high-quality printing or for other times when a high degree of detail is desired.

In-camera software often makes automatic adjustments to an image so that it looks more natural. For example, the overall brightness level of the image is adjusted to deliver a normal contrast range. Also, the colors of the light are balanced so they match the colors seen by the eye.
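One simple way to balance colors automatically is the "gray world" rule, which assumes that the average color of a typical scene should come out neutral. The Python sketch below shows that rule using the NumPy library; actual cameras rely on more elaborate, manufacturer-specific methods, so this is only an illustration of the principle.

```python
# "Gray world" automatic color balance: scale each channel so the scene averages to gray.
import numpy as np

def gray_world_balance(rgb):
    """rgb: array of shape (height, width, 3) with values from 0 to 255."""
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means    # boost channels that read too dark
    return np.clip(rgb * gains, 0, 255).astype(rgb.dtype)
```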

Some cameras allow images to be saved in raw format. Raw simply means unprocessed. A raw file consists of the image data exactly as the sensor records them, with no color interpolation, compression, or any other alteration by in-camera software. All image processing is done later with imaging software on a computer.
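On a computer, a raw file can be opened with imaging software or with a programming library. The Python sketch below shows one common open-source route, the rawpy package; the file name is a placeholder, and the exact options vary by camera model and library version.

```python
# Rough sketch: converting a raw file to a viewable image with the rawpy package.
import rawpy
from PIL import Image

with rawpy.imread("IMG_0001.CR2") as raw:       # hypothetical raw file from a camera
    rgb = raw.postprocess()                     # demosaicing, white balance, gamma correction
Image.fromarray(rgb).save("IMG_0001_processed.jpg")
```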

Processing digital images.

There are different ways to transfer digital images to a computer. The flash memory card can be inserted into a card reader attached to the computer. The card can also be inserted directly into some desktop printers or into commercial picture-printing machines. Images can also be transferred using a cable that links the camera to the computer. Some cameras can transfer images using wireless technology.

Film photographs, negatives, and transparencies can be turned into digital images with a scanner. Most people use either film scanners or flatbed scanners. In these machines, a lamp shines light on the film image, and a CCD records the light that reflects from or passes through the image. The CCD consists of SPD’s arranged in a single row. Usually, the CCD and its related equipment move down the image and record one line at a time. The information gathered by the CCD is then converted into a digital image file.

You can use editing apps and software to make a variety of changes to an image. You can remove blemishes, sharpen the apparent detail, crop the image, or change its size, colors, or contrast. The changes can range from simple retouching to drastic alteration. You can add text or other graphic elements to the image. You can also use the software to combine two or more images, or even create a completely new image that does not represent actual reality.
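Many of these edits take only a line or two in an imaging library. The Python sketch below uses the Pillow library to crop, resize, and adjust the contrast of a picture; the file names and numbers are placeholders chosen for illustration.

```python
# A few common edits with the Pillow library (file names and values are illustrative).
from PIL import Image, ImageEnhance

img = Image.open("portrait.jpg")
img = img.crop((200, 100, 1200, 900))            # crop to the box (left, top, right, bottom)
img = img.resize((500, 400))                     # change the image size in pixels
img = ImageEnhance.Contrast(img).enhance(1.2)    # raise contrast by 20 percent
img.save("portrait_edited.jpg", quality=90)
```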

Digital photography has changed many people’s ideas about photographic truth. Before the digital revolution, people believed that a camera was a reliable and automatic witness. They had confidence that photos were accurate portrayals of reality. Today, because digital images can be so easily modified, people are often skeptical about the accuracy of a photograph they see in a news source or in an advertisement. Many businesses that rely on photography have ethical guidelines that set limits on how much images can be modified.

Printing digital images.

Most digital images are never printed and are viewed only on an electronic device or computer screen. However, a paper print can be made from a digital file. Images are often printed with either an inkjet printer or a laser printer. They also can be submitted to a commercial printing service for high-quality printing.

Inkjet printers use tiny nozzles to spray quick-drying ink of various colors onto paper. Laser printers use a tiny laser to transfer digital images to a light-sensitive drum and then to paper. For high-quality photographic prints, an inkjet printer is usually the better choice. Laser printers are useful when prints are needed quickly and quality is not the main concern.

History

Early developments.

Since ancient times, people have sought to create likenesses of people, objects, or scenes that they thought were worth remembering. This urge eventually led to the search for a way to capture an image formed directly by light.

In the 400’s B.C., the Chinese philosopher Mo Di observed that light reflecting from an object and passing through a pinhole forms an inverted image of the object. The Greek philosopher Aristotle made a similar observation around 330 B.C. During a solar eclipse, Aristotle saw a crescent-shaped image of the sun projected onto the ground through a small opening in the leaves of a tree. Around A.D. 1000, the Arab physicist Alhazen demonstrated that pinhole images become sharper when the hole is made smaller.

These observations about light were first used to construct a camera in about 1500, in Italy. The first crude camera was called a camera obscura—a Latin term meaning dark chamber. It was a box large enough for a person to enter. It had a tiny opening in one side that let light in. On the opposite side, the light formed an inverted image of the scene outside. Artists used the camera obscura as a sketching aid. They traced the outline of the image formed inside the box. See Camera obscura.

A camera obscura could only project images onto a screen or a piece of paper. Scientists sought a way to make the images permanent. In 1727, a German physicist named Johann H. Schulze discovered that silver salts turn dark when exposed to light. About 50 years later, Carl Scheele, a Swedish chemist, showed that the changes caused in the salts by light could be made permanent by chemical treatment. However, these discoveries were not applied to photography until the 1830’s.

The invention of photography.

A French inventor named Joseph Nicephore Niepce found a way to produce a permanent image inside a camera obscura. In 1826, he coated a metal plate with a light-sensitive chemical and then exposed the plate in the camera for about eight hours. The resulting picture, showing the view from Niepce’s window, is the world’s earliest surviving photograph.

Earliest surviving photograph

The French artist Louis Daguerre perfected Niepce’s method during the 1830’s. Daguerre exposed a sheet of silver-coated copper, developed the image with heated mercury fumes, and then “fixed” it with table salt. His pictures, called daguerreotypes, were sharp, detailed images. Initially, these pictures required exposure times of 5 to 40 minutes. But later inventors improved the process and reduced the exposure time to less than a minute. See Daguerreotype.

In 1839, the same year Daguerre made his process public, a British inventor named William H. Fox Talbot announced his invention of light-sensitive paper. This paper produced a negative from which positive prints could be made. Fox Talbot’s friend, the astronomer Sir John Herschel, called the invention photography. Herschel suggested the use of sodium thiosulfate (hypo) to make the image permanent. Both Daguerre and Fox Talbot began using this chemical to fix their images.

British scientist William Henry Fox Talbot

Fox Talbot’s paper prints, called talbotypes or calotypes, were not as sharp as daguerreotypes. Nevertheless, his negative-to-positive method for making photographic prints became the foundation of photography. It succeeded because numerous prints could be made from one exposure, and the prints could be pasted into books and other printed materials. See Talbotype.

In addition to the new developing and printing processes, photography was greatly improved during the 1840’s by the introduction of specialized lenses. A Hungarian mathematician named Josef M. Petzval designed two types of lenses, one for making portraits and the other for landscape pictures. The landscape lens produced sharper pictures of large areas than previously had been possible. The portrait lens admitted much more light than previous lenses had and thus reduced exposure time to a few minutes.

Technical improvements.

In 1851, a British photographer named Frederick S. Archer introduced what became known as the wet-plate or wet-collodion process. It greatly reduced exposure time and improved the quality of prints. Archer coated a glass plate with a mixture of silver salts and an emulsion made of a wet, sticky substance called collodion. After exposing the plate for a few seconds, he developed the plate into a negative and then treated it with a fixing agent. The collodion had to remain moist during exposure and developing. Therefore, wet-plate photographers needed immediate access to a darkroom so they could process the plate as soon as the exposure was made. Some photographers traveled in wagons that served as mobile darkrooms and developing laboratories.

The invention of the dry-plate process overcame the inconvenience of the collodion method. In 1871, Richard L. Maddox, a British physician, used an emulsion of gelatin to coat photographic plates. Unlike collodion, gelatin dried on a plate without harming the silver salts. With dry plates, photographers no longer needed immediate access to a darkroom.

The use of gelatin also eliminated the necessity of keeping a camera motionless on a tripod during exposure. By the 1880’s, improvements in the gelatin emulsion had reduced exposure time to 1/25 of a second or even less. Photographers could now take pictures while holding the camera in their hands.

In addition, the gelatin emulsion revolutionized the design of cameras. With earlier types of printing paper, a negative had to be as large as the intended print. But now, photographs could be made by projection printing on paper coated with gelatin. Photographers could enlarge the pictures during the printing process, and so the size of negatives could be reduced. Smaller negatives meant smaller cameras.

Meanwhile, the halftone process, perfected in the 1880’s, made it possible for photographs to be printed alongside text in newspapers. The halftone process uses a pattern of tiny ink dots of various sizes or spacing to represent the tones of a photograph.

In 1888, George Eastman, an American dry-plate manufacturer, introduced the Kodak box camera. The Kodak was the first handheld camera designed specifically for mass production and amateur use. It was lightweight, inexpensive, and easy to operate.

The marketing of the Kodak camera and other inexpensive box cameras led to a dramatic rise in the number of amateur photographers. The Kodak system was an immediate success because it eliminated the need for photographers to process their own pictures. Instead of a single dry plate, the Kodak used a flexible roll of gelatin-coated film that could record 100 circular photographs. After a roll had been exposed, a person sent the camera with the film inside to one of Eastman’s processing plants. The plant developed the film, made prints, and then returned the camera loaded with a new roll of film. The Kodak slogan declared: “You Press the Button, We Do the Rest.”

Artistic advances.

During the 1850’s and 1860’s, people began to experiment with the artistic possibilities of photography. One of the first to use a camera creatively was Gaspard Felix Tournachon, a French photographer who went by the name of Nadar. Nadar added a new element to portrait photography by using light to emphasize the pose and gestures characteristic of his subjects. He also made the first aerial photograph, a view of Paris taken from a balloon.

In England, Oscar Rejlander and Henry Peach Robinson pioneered the process of combination printing, also called composite photography, in which an image is constructed from a number of individual negatives. In this procedure, elements from two or more scenes or subjects are combined into one picture. Combination printing allowed a photographer to make complex compositions based on the artistic standards of painting.

Another pioneer in portrait photography was the British photographer Julia Margaret Cameron. She often required her subjects—who included many famous people of the day—to sit for long exposures under limited natural light in her studio. Cameron’s innovative approach featured intentionally out-of-focus portraits and photographs slightly blurred by the long exposures.

The Sad White Roses by Julia Margaret Cameron

Landscapes and architecture also were popular subjects for early art photographers. During the 1850’s and 1860’s, a number of countries commissioned photographers to make visual records of important buildings and natural landscapes. Photos were taken of historical sites in Europe and the Middle East, the scenery of the American West, and many other landmarks. Some of these pictures are remarkable not only for their technical excellence but also for the effort involved in taking them. In 1861, for example, two French photographers named Auguste and Louis Bisson withstood intense cold and avalanches to take pictures from the top of Mont Blanc in France. The brothers needed so much equipment that they took 25 porters up the mountain with them.

Some of the most stirring photographs of the mid-1800’s portray battlefield scenes. The earliest surviving pictures of this type were taken by Roger Fenton, an English journalist covering the Crimean War (1853-1856). Photography teams organized by Americans Mathew Brady and Alexander Gardner made many images of the American Civil War (1861-1865). The Brady and Gardner collections rank among the most comprehensive and widely seen collections of war pictures.

Photography of Mathew Brady

After the American Civil War, the United States government sponsored a series of geological explorations and surveys of the American West. Photographers, including Timothy O’Sullivan and William Henry Jackson, took part. They aimed to create a visual record of wondrous geographical discoveries that would play a role in the economic development of the West. Jackson’s photos of the Yellowstone area helped influence the U.S. Congress to establish the world’s first national park there.

By the late 1800’s, photography was moving in two directions. One faction used photography to document social issues and thus bring about cultural change. The American photographers Jacob A. Riis and Lewis W. Hine were pioneers of this type of photography. In 1888, Riis’s photos of the slums of New York City shocked the public and helped bring about the elimination of one of the city’s worst districts. At the beginning of the 1900’s, Hine, who was trained as a sociologist, documented the arrival of immigrants at Ellis Island and the miserable working conditions of the poor. His pictures of children working in coal mines and dimly lighted textile factories helped bring about the passage of child labor laws.

Another faction wanted to explore photography’s creative and expressive potential in the tradition of drawing and painting. This movement was known as pictorialism. The aim of these photographers was to “make” pictures with artistic intent, rather than simply “take” photographs from nature. They paid close attention to lighting and composition, and they used soft-focus effects and special printing techniques. In 1902, Alfred Stieglitz, Edward Steichen, Gertrude Käsebier, and other American photographers formed a group to promote photography as an independent art form. This group, called the Photo-Secession, organized photo exhibitions in the United States and loaned collections of photos to exhibitors in other countries.

The Steerage by Alfred Stieglitz

Dramatic changes.

In 1904, the French scientists Auguste and Louis Lumière introduced the autochrome plate, the first widely used means of making color photographs. Commercial production of autochrome plates began in 1907. In the 1930’s and 1940’s, such color films as Kodachrome, Agfacolor Neu, and Ektachrome gradually replaced autochrome plates.

During the 1920’s and early 1930’s, two major developments caused dramatic changes in photography. First, the handheld 35-millimeter camera and artificial lighting revolutionized photographic equipment. The Leica camera, introduced in 1925 in Germany, was small enough to fit in a pocket, but it delivered sharp, detailed photographs. The Leica allowed such photographers as Andre Kertesz, a Hungarian-born American, to take candid pictures, in which people did not know they were being photographed. The development of the electric flashbulb and electronic flash in the 1920’s and 1930’s greatly expanded the range of photographic subjects. The American scientist Harold Edgerton pioneered in using electronic flash to photograph high-speed events, such as bullets passing through objects.

The second development involved experimentation with new ways of composing pictures and viewing subjects. In the 1920’s, Laszlo Moholy-Nagy, a Hungarian-born American, and Man Ray, an American, experimented with unusual angles, photograms, and image reversals. Photograms are created without using a camera, often by placing objects on a piece of printing paper and exposing it to light. Other photographers created abstract compositions with X-ray photos and multiple exposures. During the 1930’s, the German artist John Heartfield pioneered the use of the photomontage as a political tool to attack the Nazis. A photomontage is formed by combining many photos or parts of photos into a single picture.

Migrant Mother by Dorothea Lange

From the 1930’s through the 1960’s, the French photographer Henri Cartier-Bresson refined a style of street photography—that is, photography made in public places. He sought to capture the “decisive moment”—that is, the moment when he perceived the perfect composition among changing elements in the scene.

An approach called documentary photography developed in the 1930’s. During the Great Depression, a worldwide economic slump, the U.S. Farm Security Administration hired such photographers as Walker Evans, Dorothea Lange, and Arthur Rothstein to survey conditions in rural areas of the United States. Their photos portray the dignity and suffering of poverty-stricken farm families. At the same time, the appearance of illustrated news magazines—such as Life in the United States and Picture Post in the United Kingdom—created a demand for news photographs. Such photojournalists as Margaret Bourke-White of the United States and the Hungarian-born Robert Capa vividly recorded important people and dramatic events of the period.

Shoeshine stand, Southeastern U.S. by Walker Evans

By about 1930, some photographers had come to believe that using manipulative lighting, printing, or other techniques to create artistic photographs was unnecessary or undesirable. Instead, they said that minimally manipulated photos had unmatched beauty and power. This ideal evolved into the principles of straight photography. The American photographers Alfred Stieglitz, Paul Strand, and Edward Weston are often associated with the development of straight photography. It features sharply focused, highly detailed images without any signs of pictorial handwork. The American photographer Ansel Adams applied this approach in his images of landscapes of the American West.

Blind Woman, New York by Paul Strand
Flock in Owens Valley by Ansel Adams

The late 1940’s and the 1950’s saw a number of technical developments in photography. One of the most notable was the Polaroid instant photo process, developed by the American inventor Edwin H. Land. In 1948, Land’s Polaroid Corporation introduced the first instant camera to the public. It was capable of producing film photographs in less than a minute.

A diversity of styles.

During the 1950’s and 1960’s, photographic styles became increasingly diverse, particularly among American photographers. Minor White developed a highly personal visual language to express ideas of spirituality. Aaron Siskind brought the concepts of abstract painting to photography. Robert Frank, William Klein, and Garry Winogrand developed a technique that became known as the snapshot aesthetic to comment on people and society. Their photos, like snapshots, do not follow traditional guidelines of composition. But they do reflect an informed approach designed to focus on people and ordinary scenes, using blurring and off-kilter compositions. Diane Arbus used the informal artistic qualities of the snapshot in her portraits of society’s outsiders to challenge the definition of what it means to be “normal.”

Other artists extended the expressive possibilities of photography with innovative imagemaking techniques, such as collage, lithography, photocopies, and photographic sculpture. For example, Robert Heinecken, an American, produced imaginative images not by using a camera but by making contact prints directly from the illustrated pages of magazines. Many of his images are commentaries about gender roles and violence.

The artistic possibilities of color photography began to be fully explored in the 1960’s. Ernst Haas, an Austrian-born American, and Marie Cosindas, an American, were among the first professionals to concentrate on color photography. In 1962, at the Museum of Modern Art (MoMA) in New York City, Haas had the first-ever exhibition of color photography by a single photographer. Cosindas produced many portraits and still lifes using Polaroid instant color film. Her work was shown at MoMA in 1966.

Red Rose by Ernst Haas

These exhibitions preceded the detailed and structured large-format color photos of the American photographer Stephen Shore. He examined familiar landscapes, empty of people, that had been altered by human activities. His images are representative of a 1970’s approach called new topographics, which examined the clash between the natural and human-made worlds.

In 1976, MoMA displayed a collection of the American photographer William Eggleston’s color images of ordinary scenes. Many historians believe this influential exhibition ushered in the acceptance of color photography as a fine art.

During the last half of the 1900’s, Lee Friedlander of the United States and Martin Parr of the United Kingdom were among the most prominent Realist photographers. Their work documents the social conditions of their countries using the casual approach of the snapshot aesthetic. Another photographer of that period, Harry Callahan of the United States, used simple forms to produce decidedly formalized compositions.

New Brighton by Martin Parr

At the same time, Expressionist photographers, such as Jerry Uelsmann, an American, examined the internal landscape of the mind. Through combination printing, Uelsmann made fantastic dreamlike images using a process called post-visualization. The principle behind post-visualization is that the moment of exposure is just a starting point for picture making, and the act of creation continues in the darkroom or digital studio.

Meanwhile, there was a growing movement among women and minority photographers to portray their own identities. The photographs of Cindy Sherman, an American, question female stereotypes. The work of Carrie Mae Weems explores issues of humanity from an African American perspective.

Untitled Film Still #21 by Cindy Sherman

The digital revolution.

Digital imaging dates back to the 1950’s and 1960’s, when it was primarily used in science. The first filmless still camera was invented in 1972. But this electronic camera did not convert light data into digital signals. In 1975, Steven Sasson, an electrical engineer working for the Eastman Kodak Company, invented the first digital camera. In 1981, Sony Corporation introduced a filmless camera that used magnetic disks to record images. It was not a digital camera, but it was the first electronic camera to be marketed commercially.

By the 1990’s, home computers had become widespread, and graphics software allowed virtually anyone to work with digital images. As technology improved and costs dropped, sales of digital cameras and digital imaging equipment rose dramatically. In the mid-1990’s, news photographers began to rely almost completely on digital cameras instead of film cameras. Commercial photography studios began to follow suit in the late 1990’s. By the early 2000’s, many manufacturers either had stopped producing film cameras or had drastically reduced their production. In the 2010’s, the resolution of smartphone camera images increased, allowing people around the world to easily take high-quality photographs.

Digital imaging created new artistic possibilities. Photographers could now create images that did not represent physical reality at all. For example, the American artist Nancy Burson used digital morphing technology to create images of people who never existed.

Careers

Photography offers a wide variety of career opportunities. Successful photographers are typically imaginative and visually curious. They have a good sense of design and are willing to acquire technical knowledge to improve their craft. High school students who are interested in a photography career can prepare by taking courses in the arts, computer science, and journalism. They also can get involved in creative activities, such as the school newspaper or yearbook, or they can enroll in photography or art workshops outside of school.

Many colleges and universities offer photography courses, and some have degree programs in the subject. A number of art schools and technical schools also offer instruction and practical training in photography.

Photographers work in many different industries. A large number of them are free-lance photographers—that is, they are self-employed. They may own their own businesses, or they may work for limited periods with a number of different employers.

Commercial photographers take pictures for advertisements or for illustrations that appear in books, magazines, websites, or other publications. These photographers work with subjects as varied as architecture, food, equipment, and fashion.

Fine art photographers produce photographs that are sold as fine art. These photographers exhibit their work in museums and sell their pictures through commercial art galleries and at art festivals. Fine art photographers rarely can support themselves solely through their art, and so they often take other jobs.

Portrait photographers take pictures of people and of special events in people’s lives. Some photographers in this field specialize in one type of portraiture, such as children, families, or weddings. Portrait photographers must like working with people and know how to pose their subjects to create pleasing effects.

Portrait photography
Photojournalism

Photojournalists take pictures of events, people, and locations for newspapers and other news outlets. They must be skilled in seeking out and recording dramatic action in such fields as politics and sports. A photojournalist must be prepared to travel and work quickly under the pressure of a deadline.

Scientific photographers work in a number of specialized areas. Major fields of scientific photography include medical photography and engineering photography. Medical photographers may work with such medical equipment as microscopes, X-ray machines, and infrared scanning systems to help diagnose and treat disease. Engineering photographers help improve the design of equipment and structural materials. These photographers sometimes use high-speed cameras to “stop” the action of machines and to make visible the flaws in metal, plastic, and other materials.

Other fields.

Many young people start their careers by working as photographic studio assistants. They may help collect props, prepare sets, arrange products to be photographed, and process images. Galleries and museums employ people to manage their photography collections and organize exhibitions. Many photographically minded people work in website design and desktop publishing. Careers also are open to people who can teach or write about photography. Other areas include photo processing and printing, photo editing, image library maintenance, equipment manufacturing, sales of equipment and supplies, and equipment maintenance and repair. Technically minded people can follow paths in research and development of new photographic equipment and products.