Commons:Photography terms
While looking at pages about photography and images on Commons, users often encounter terms that can be difficult to understand. This is particularly true if the user's knowledge of photography is very basic, or if they are not a native speaker of English. This page aims to explain several of these terms in basic English, with the help of images. If you come across a term that you do not understand, please add it at the bottom of this page under Suggestions for terms to add to the list. If you know what one of these terms means, please explain it in as simple a language as you can.
Please provide:
- A definition allowing one to understand what is being talked about (a link to Wikipedia is welcome)
- An explanation of why/when this is important in photography, and possibly how to get a good result on this aspect.
- Please try and write in basic English: most users are not native English speakers, nor are they photographers.
Abbreviations of photography terms
- AE/L: Auto exposure lock
- AF: Autofocus
- AF/L: Autofocus lock
- BW, B/W or B&W: Black and white
- CA: Chromatic aberration
- CCW: Counterclockwise
- compo: Composition
- CW: Clockwise
- DOF or DoF: Depth of Field
- DS: Dust spots
- EV: Exposure value or educational value
- EXIF: Exchangeable image file format
- FOV: Field of view
- GIF: File format
- HDR: High-dynamic-range
- HEIF: File format
- IS: Image stabilization
- ISO: Film speed or digital amplification
- JPEG: File format
- MP or MPx: Megapixels
- NR: Noise reduction
- OE: Overexposed
- OOF: Out of focus
- PF: Purple fringing
- PNG: File format
- post: Post processing
- Raw: File format
- SVG: File format
- TC: Teleconverter
- TIFF: File format
- UE: Underexposed
- WB: White balance
List of photography terms
Index
- 500 rule
- Aerial photography
- Adobe RGB
- Aperture
- Aperture priority
- Aspect ratio
- Astrophotography
- Autofocus
- Back button focus
- Banding
- Back lighting
- Bit depth
- Black and white
- Blown-out
- Blue hour
- Blur
- Bokeh
- Bridge camera
- Broad lighting
- Bulb
- Burned out
- Camera
- Camera shake
- Catch light
- Chromatic aberration
- Chromatic noise
- Cloudy
- Colour balance
- Colour cast
- Colour correction filter
- Colour model
- Colour profile
- Colour space
- Colour temperature
- Compact camera
- Composition
- Compression
- Compression artefacts
- Contrast
- Continuous shooting mode
- Contre-jour
- Crop
- Crushed Blacks
- Dawn
- DCI-P3
- Dead pixel
- Depth of Field
- Depth of field preview
- Diffraction
- Direction of light
- DSLR camera
- Downsampling
- Drone photo
- Dusk
- Dust spots
- Electronic first curtain shutter (EFCS)
- Electronic image stabilization
- Electronic shutter
- EV
- EXIF
- Exposure
- Exposure bracketing
- Exposure compensation
- Exposure lock
- Exposure triangle
- Exposure value
- Extension rings/tubes
- f-number
- Fast lens
- Field of view
- File format
- Fill light
- Fill flash
- Filter
- Flash-sync speed
- Focal length
- Focus
- Focus lock
- Focus hold button
- Focus point
- Focus-recompose
- Focus stacking
- Foggy
- Format
- Format shape
- Front lighting
- Full frame
- Gamut
- Geotag
- Golden hour
- Graduated neutral density filter
- Hard light
- HEIF (HEIC)
- High-dynamic-range images
- High Speed Sync (HSS)
- Highlights
- Histogram
- Icy
- Image stabilisation
- In-body stabilization
- In-lens stabilization
- Intervalometer
- ISO
- Key light
- Lateral CA
- Lens
- Lens flare
- Light
- Looney 11 rule
- Longitudinal CA
- Luminance noise
- Macro
- Manual
- Midday
- Mirrorless interchangeable-lens camera
- Mode dial
- Moiré
- Moonlight
- Multiple exposures
- Neutral-density filter
- Noise
- Megapixels
- Noise reduction
- Overcast
- Overexposed
- Overprocessed
- Oversaturated
- Panning
- Panorama
- PASM
- Perspective correction
- Photoshopping
- Pixel peeping
- Polarising filter
- Post processing
- Posterisation
- Prime lens
- ProPhoto RGB
- Program
- Protective filter
- Quality of light
- Purple fringing
- Raking light
- Rain
- Raw
- Rec. 2020
- Reciprocal rule
- Rembrandt lighting
- Reproduction ratio of 1:1
- Resolution
- Rim light
- Ring light
- Rule of doubles
- Rule of thirds
- Saturation
- Shadows
- Sharpness
- Short lighting
- Shutter
- Shutter button
- Shutter priority
- Side lighting
- Silhouette
- Single shot mode
- Snow
- Softbox
- Soft light
- sRGB
- Star trails
- Starburst effect
- Stitching, Stitching error
- Stop
- Stopping down
- Strip photography
- Studio shot
- Sunny
- Sunny 16 rule
- Teleconverter
- Telephoto lens
- Texture photography
- Thunderstorm
- Tilt
- Time of day
- Tripod
- Twilight
- Underexposed
- Upsampling
- Vignetting
- Weather
- White balance
- Wide open
- Windy
- UV filter
- Zoom lens
A–E
Aerial photos are photos taken from any kind of aircraft. It is a common way to photograph large landscapes or cityscapes, but also single objects such as buildings or bridges. Aerial photography can be the only way to get useful photos of some subjects.
Examples of aircraft that can be used for taking aerial photos:
- Helicopter: Often used for aerial photography, including commercial photos. However, helicopters are not very stable in the air and can cause camera shake.
- Airplane: It is possible to take useful photos from an airplane, but in most cases the altitude is much too high and the angle of view very limited.
- Hot air balloon: Good angle of view and stability, but the precise route depends on the wind direction, so it is not always possible to photograph a specific subject. Commercial balloon flights are also very expensive.
- Drone or Unmanned aerial vehicle (UAV): Multicopters with built-in cameras, commonly known as camera drones, are currently the easiest and cheapest way to take aerial photos and/or videos. A drone can hover very stably, which is important for taking pictures from a short distance. The built-in camera usually has technical properties similar to a good point-and-shoot camera and can produce both JPG and raw format. It can also take videos of excellent quality. Please note that the use of unmanned aerial vehicles, including camera drones, is restricted by your country's laws. Even though commonly available camera drones are not heavier than 2 kg (4.4 lb), registration is required in many countries. No-fly zones for manned aircraft apply to UAVs as well.
The aperture is the hole in the "iris" behind the glass lens that lets the light through. (The whole part of the camera with glass lens, iris, etc. is called a lens.) In almost all smartphone cameras the aperture size is fixed, but in most other cameras the aperture may be adjusted. With some lenses, this is done by turning a ring around the lens, but in many others the aperture is controlled by a dial on the camera. The units of aperture setting are called stops and given an f-number. Making the aperture smaller is called stopping down. The f-number is the ratio of the lens focal length to the diameter of the aperture. A 50mm lens with an aperture diameter of 25mm has an aperture of f/2. An aperture of f/1.4 is twice as bright and an aperture of f/2.8 is only half as bright.
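For readers who like numbers, here is a minimal sketch of the arithmetic above (the function name is just for illustration):

```python
# f-number = focal length / aperture diameter
def f_number(focal_length_mm, aperture_diameter_mm):
    return focal_length_mm / aperture_diameter_mm

print(f_number(50, 25))  # 2.0 -> a 50mm lens with a 25mm opening is at f/2

# The amount of light is proportional to the square of the aperture diameter,
# so it falls with the square of the f-number: f/2.8 passes about half the
# light of f/2, and f/1.4 about twice as much.
print(round((2.0 / 2.8) ** 2, 2))  # ~0.51, i.e. roughly half
print(round((2.0 / 1.4) ** 2, 2))  # ~2.04, i.e. roughly double
```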
Changing the aperture not only affects the exposure but also the depth of field (the amount in front and behind the focus distance that is acceptably sharp). The maximum aperture of a lens determines its usefulness in low-light situations. A lens with a large maximum aperture is called a fast lens (because it makes short exposure times possible). The maximum aperture of many zoom lenses varies as the focal length is adjusted.
When a lens is set to the maximum aperture it is called wide open. The lens tends to be generally less sharp, with a little vignetting and more longitudinal (axial) chromatic aberration at this setting. Stopping down a bit can significantly improve image quality. At the other end of the scale, a very small aperture can cause a soft image due to diffraction.
The aperture of DSLR camera lenses is held wide open and only closes briefly when the camera takes the photo. This gives a bright image in the viewfinder and the shallow depth of field helps the photographer adjust the focus. A depth of field preview button (usually situated next to the lens) closes down the aperture, making it possible for the photographer to see the effect the aperture has on the depth of field.
The aspect ratio of an image describes the proportional relationship between its width and its height. It is sometimes loosely called "image size" or "image format", even though those terms usually mean something else. Some of the most common ratios used in photography are:
- 1:1
- 5:4
- 4:3 (most common in the Micro Four Thirds system and in many point-and-shoot cameras)
- 3:2 (most common among APS-C and full-frame sensor cameras)
- 5:3
- 16:9 (most common for computer screens and TVs)
- 3:1
All aspect ratios are welcome on the Wiki projects. Select the format that works best for your subject and composition.
See also: Format shape
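A minimal sketch of how pixel dimensions reduce to one of the ratios above (the helper name is just for illustration):

```python
from math import gcd

# Reduce an image's pixel dimensions to its simplest aspect ratio.
def aspect_ratio(width_px, height_px):
    d = gcd(width_px, height_px)
    return width_px // d, height_px // d

print(aspect_ratio(6000, 4000))  # (3, 2)  - typical APS-C or full-frame camera
print(aspect_ratio(1920, 1080))  # (16, 9) - typical computer screen or TV
```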
Astrophotography is any kind of photography of anything not manmade up in the sky, such as astronomical objects, celestial events, or areas of the night sky. Satellites, rockets or space stations can be part of an astronomical photo, but they are usually not counted in the category. Astrophotography can be as simple as taking a photo of the moon and some stars with a phone camera, or as complicated as multi-wavelength composite images taken by a telescope out in space.
There is a whole range of special equipment more or less designed for astrophotography: binoculars, lenses and telescopes that you can combine with a camera, plus phone apps and other kinds of software to help you find and capture subjects in the sky. More advanced astrophotographers also use motorized camera mounts or heads (star trackers) on tripods to get sharp photos of celestial objects. Because of Earth's rotation and the need for long exposures, these computer-aided mounts move the camera to avoid star trails.
A comprehensive guide to astrophotography can be found in this PetaPixel article.
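The "500 rule" listed in the index is a rough guideline for the longest exposure that avoids visible star trails without a tracking mount. A minimal sketch, assuming the commonly quoted form of the rule (500 divided by the full-frame-equivalent focal length):

```python
# Rough "500 rule": longest exposure in seconds before stars visibly trail.
def max_untracked_exposure_s(focal_length_mm, crop_factor=1.0):
    return 500 / (focal_length_mm * crop_factor)

print(round(max_untracked_exposure_s(24), 1))       # ~20.8 s on a full-frame camera
print(round(max_untracked_exposure_s(24, 1.5), 1))  # ~13.9 s on an APS-C camera
```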
Autofocus is the ability of the camera to automatically adjust the lens to focus on a point or subject. The point where the camera is currently set to autofocus is indicated with a tiny square in the viewfinder or rear screen, called the focus point, which turns green when the point comes into focus. Modern cameras can have hundreds of focus points over most of the frame, though some still have relatively few bunched in the middle third. In some focus modes, several focus points may light up green. If the camera is set to autofocus on faces, then the focus point may appear as a large square framing the subject's face.
Without autofocus, the lens must be focused manually by the photographer, which is usually done by rotating a wide ring on the lens. It is then up to the photographer to determine when the image is in focus. It can be hard to manually focus a lens using the viewfinder in a DSLR, but the rear screen can be made to show a magnified portion of the frame, which helps.
Modern autofocus is done by tiny motors in the lens and is very quick and quiet. Older systems used a motor in the camera body or had noisy motors that became a problem when cameras were also used to shoot video. Most advanced cameras offer a choice of autofocus modes.
One choice is between single and continuous autofocus. In single autofocus mode (AF-S), when the shutter button is half pressed, the camera will start autofocussing and stop when it achieves focus. In continuous autofocus mode (AF-C), the camera continues to autofocus as long as the shutter button is depressed.
The photographer may also control where in the frame the camera focuses. At its simplest, this may be restricted to a point in the centre of the frame. Alternatively, the photographer may choose a point by moving it with a joystick, buttons or touch sensitive screen. The photographer may also leave the choice up to the camera, which may assume the nearest object is the subject. Some cameras have face detection or eye detection, which increases the chance that the camera will automatically pick a good point to focus on. When continuous autofocus is enabled, the camera may attempt to track a subject if it moves across the frame.
When autofocussing, many cameras will wait until focus is achieved before taking the photo, and refuse to take it if they cannot focus. Cameras find it hard to focus on areas without detail, shiny objects or if the light levels are low, and may "hunt" forwards and backwards in such situations. With simple autofocus, the focus will hop to and lock on to the nearest detailed area instead. Some DSLRs and flash guns have a "focus assist lamp" that shines an infra-red grid pattern onto the subject, or that briefly fires the flash.
Some telephoto and macro lenses have a focus limiter – a switch to restrict the range of distances the camera will autofocus over. For example, if shooting with a telephoto lens through a window at a distant subject, you don't want the camera to focus on dirt on the window. Alternatively, if shooting an insect with a macro lens, you don't want the camera to try to focus on the sky.
Banding is a problem of inaccurate colour presentation in photos where not every shade can be shown because there are not enough bits to represent them. Instead of a smooth gradient (a light-to-dark nuance of a colour), you see abrupt changes between shades of the same colour. See also: Posterisation
The bit depth is the number of bits of data used to represent a single primary colour in an image. Most images use the RGB colour model which has red, green and blue primary colours. So a bit depth of 8 will use 24 bits per pixel. Confusingly, sometimes 'bits per pixel' is given instead, though such values will be three times larger than the 'bits per colour component'. A JPG has a bit depth of 8, so it can only record 256 shades of grey. A 16-bit TIFF can record 65536 shades of grey. A 32-bit TIFF uses floating-point arithmetic to permit an almost infinite dynamic range with high precision. The low bit-depth of a JPG makes it unsuitable for representing high-dynamic range images or using a colour space with a large gamut -- if you try to do so, it would result in too big a jump between the tones and make the image posterised. The latest ultra high definition video standard uses 10 bits per colour.
Most computer screens have a bit depth of 8 bits. Some professional monitors have 10 bits, but they require very expensive professional graphics cards to use all 10 bits (e.g. Quadro or Fire Pro). Modern ultra high definition television screens have 10 bits and support a high dynamic range.
A camera raw file has a bit depth of between 10 and 14 bits, depending on how modern and expensive the camera is. Conveniently, the power-of-two nature of binary arithmetic corresponds to the doubling of light achieved with each stop of aperture. Therefore a modern camera can be said to capture approximately 12 stops of light.
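As a small illustration of the arithmetic (a sketch only; the exact bit depth of a raw file depends on the camera):

```python
# Each extra bit doubles the number of tonal levels per colour channel,
# which corresponds roughly to one extra stop of recordable dynamic range.
for bits in (8, 12, 14, 16):
    print(f"{bits} bits -> {2 ** bits} levels per channel")

# 8 bits  ->   256 levels  (JPG)
# 12 bits ->  4096 levels  (entry-level camera raw file)
# 14 bits -> 16384 levels  (high-end camera raw file)
# 16 bits -> 65536 levels  (16-bit TIFF)
```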
Black and white (BW, B/W or B&W) is the artistic quality of an image which only uses shades of gray. In many cases it is achieved digitally by changing each pixel in a color photo to the mean of the red, green and blue values, uniform across all three channels.
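A minimal sketch of the simple averaging method described above, using NumPy (weighted methods that favour green are also common, but are not shown here):

```python
import numpy as np

# rgb is a (height, width, 3) array of red, green and blue values.
rgb = np.random.randint(0, 256, size=(4, 6, 3), dtype=np.uint8)

# Mean of R, G and B for every pixel...
gray = rgb.mean(axis=-1).astype(np.uint8)

# ...written back uniformly into all three channels.
bw = np.stack([gray, gray, gray], axis=-1)
print(bw.shape)  # (4, 6, 3)
```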
Areas in a photo that are so overexposed or so bright that no details can be seen are often described as blown-out or blown. The areas are usually just pure white, but blown-out areas can also appear in colored parts of a photo when at least one channel (red, green or blue) is overexposed. This leads to color shifts, for example from blue to cyan in a blown blue sky.
When an area in a digital photograph is blown-out, no structure or details can be retrieved in it by any editing program since the information simply isn't there. However, the camera's raw files generally have a higher dynamic range than generated photos such as JPG or TIFF, and some degree of highlight recovery can often be achieved.
Blur
[edit]See also: Focus
An image is "blurred" when it does not appear sharp and clear. To avoid blur and get clear images, it is generally necessary to set the focus distance precisely to the correct value, which can usually be done automatically via autofocus.
The term bokeh is used to describe the aesthetic appearance of areas in a photo that are out of focus sufficiently to make them not only slightly blurry but create a more abstract, heavily blurred look.
Bokeh is often considered to be pleasant when it does not distract from the parts of the image that are in focus, makes the out-of-focus elements as blurry and unrecognizable as possible, and is devoid of any structure within the bokeh dots. As such, lenses with a wide maximum aperture (when shot at or close to that aperture), rounded aperture blades, a long focal length and well-corrected spherical aberration tend to produce the smoothest and strongest bokeh, although many aspects of the lens design can have an impact on the appearance.
Burned out
When an object, usually branches and leaves, is photographed against a very bright, overexposed sky, the strong light can obscure the edges of the branches, leaving only the center visible. The branches are said to be "burned out" since they in fact look like they have been partially consumed by fire. If there is some information left in the photo, the branches can be "filled out" to normal thickness in post processing with the help of an editing program, but most of the time this damage is irreparable.
Camera
- Digital SLR (Single Lens Reflex). This is like a film camera, but instead of a film, there is an electronic sensor. It has a viewfinder. When you look through the viewfinder, a prism and mirror lets you see directly through the lens. When you take a photograph, the mirror moves out of the way, letting light through to the sensor. You can buy extra lenses to fit onto a DSLR. They can be very expensive. These lenses let you take photographs in very poor light, and very difficult situations, such as taking photographs of fast cars or sports.
- A mirrorless interchangeable-lens camera (MILC), sometimes simply called a mirrorless camera, has an electronic viewfinder that shows the image directly from the sensor. This has the benefit that the camera can show information about this image, such as a live histogram, overexposed areas, a preview of the current exposure or areas that are in focus directly in the viewfinder, before the image is taken. As the name suggests, MILCs offer the ability to change the lens, making them as flexible as DSLRs and usually slightly more compact and more suitable for recording video. The most notable drawbacks compared to DSLRs are a focusing system that is often not quite as fast as that of a modern DSLR, as well as a smaller range of available lenses for most cameras, as most MILC systems are still quite young.
- A bridge camera is a camera with a focus on good ergonomics and the ability to control most of the settings of the camera easily with dedicated buttons and dials, just like on a DSLR. Unlike with mirrorless ILCs, the lens of these cameras cannot be changed. The viewfinder is electronic, i.e. a screen which shows the image from the sensor. Sensor size and image quality are usually in-between compact cameras and DSLR/MILC cameras, sometimes just as good as the latter. Modern examples offer enormous flexibility in terms of zoom range, although image quality can drop drastically at the telephoto end.
- A compact camera is a "simple" camera. It is fully automatic and does all of the settings for you. It is easy to use and to carry. These cameras sometimes have extra controls to meet more specific needs, such as poor lighting conditions, but otherwise they work best in good light.
Camera shake
Camera shake occurs when the camera moves during an exposure. This is usually caused by the photographer moving slightly while holding the camera. You can avoid camera shake by placing the camera on a firm surface or using a tripod. It might also occur if the ground is unstable, like a floor where people are walking by, or on a bridge with traffic.
It can also be caused by the camera itself: either the impact of the camera shutter opening rapidly ("shutter shock"), or the mirror in a DSLR flipping up out of the way. The former can be eliminated in some cameras by using "electronic first curtain shutter". The latter by using "mirror lock-up", though that can be impractical in many situations. One advantage of mirrorless cameras is that they do not suffer from shake due to the mirror moving.
The longer the exposure, the more likely it is that camera movement will affect the image. Better holding technique or bracing yourself can permit longer shutter speeds.
See also Image stabilisation and Reciprocal rule
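The reciprocal rule mentioned above is a rough guideline for hand-held shooting: use a shutter speed no longer than one divided by the (full-frame-equivalent) focal length. A minimal sketch, ignoring image stabilisation:

```python
# Reciprocal rule: slowest shutter speed (in seconds) that is usually safe hand-held.
def slowest_handheld_shutter_s(focal_length_mm, crop_factor=1.0):
    return 1 / (focal_length_mm * crop_factor)

print(slowest_handheld_shutter_s(200))                 # 0.005 s, i.e. about 1/200 s
print(round(slowest_handheld_shutter_s(50, 1.5), 4))   # ~0.0133 s, i.e. about 1/75 s
```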
A catch light is a bright highlight reflection of the light source in the subject's eyes. Without a catch light, the eye can appear dead. Larger catch lights are more appealing than tiny catch lights, which can appear aggressive. A big catch light requires a big light source, such as the sky, a window or softbox positioned close to the subject. The catch light's shape will mirror the light source shape. A ring flash produces a bright ring in each pupil. Most portrait lighting aims to position the catch light at 10 o'clock or 2 o'clock positions in the eye.
Chromatic aberration (CA) is an effect resulting from dispersion of light in which the lens can not focus all colours to the same point. It happens because lenses "bend" (refract) different wavelengths of light to different degrees. There are two kinds of CA: longitudinal (axial) and lateral (transverse).
Longitudinal CA occurs when the lens focuses some colours in front of the sensor and some behind. The effect is seen in a picture in blurred areas that are just in front of the plane of focus (e.g., purple tint) and just behind (e.g., green tint) but not those that are sharply focused or areas that are well out-of-focus. For example, it may appear to give a purple fringe to the outline of the dark hair on someone's head. It is worst when the lens aperture is wide-open and improves as the lens aperture is closed. Since the effect is to defocus some colours, it is very hard to correct this in software, other than to desaturate the unwanted colours.
Lateral CA occurs when the lens focuses some colours to a point further from the middle of the sensor and some colours to a point nearer the middle of the sensor. The effect is worst at the edges of the picture, in high contrast areas such as dark branches against a white sky, and appears to give a coloured fringe to dark edges. The fringing will be one colour on edges furthest from the middle of the picture and another colour on edges nearer the middle of the picture. Image processing software can correct this to some degree, though extreme CA of several pixels width will likely cause the software to merely desaturate the fringing colours to grey.
Some modern lenses include details of their CA problems in the data supplied electronically to the camera. The camera can then automatically remove just the right amount of CA when creating a JPG, or instruct the raw developing software on your PC to do this when you examine your raw files. A lens designer will try to reduce CA by combining many elements in a photographic lens, each one having a different shape or types of glass with different optical qualities.
Removing chromatic aberration
Most of the advanced post processing tools have automatic, semi-automatic or manual tools for removing CA. These should be used if available as they preserve more detail than most manual methods and are far quicker and easier to apply.
There are, however, also ways to get rid of the unwanted green, red, purple or cyan "shadows" in a photo by using the regular tools in any image editing program.
Getting rid of CA in places that are almost black/grey/white is rather easy. Most programs have a tool to desaturate or remove color, connected to a brush tool of some kind. Set it to a size that just covers the CA you want to remove and move it over that area. The green or purple will turn grey and blend in with the rest of the picture. For areas that are next to something with color, select a brush tool for replacing color: choose the color of the background close to the CA and move the replace brush over the CA. For CA close to a patch of sky, you can use the clone tool and clone a bit of the sky right next to the CA over it to get the same shade of sky. To make it easier to work on the right area, first enlarge the picture to 100% to see where the CA is, and then work on it at 200%. This will make it easier to hit the right spot.
Chromatic aberration removal mishap
Post-processing programs often have an automatic tool (defringe) for removing lateral CA, but with the wrong settings, angular areas in gray will appear around highlights and along high-contrast lines. This usually occurs when colors of objects in the photo match the colors of the CA, such as red/pink/magenta or green/cyan. The automatic tool can't make the distinction between the real color and the CA and it will over-compensate. One fix is to only apply the "defringe" tool to certain areas using a brush. Another fix is to apply the defringe globally to the whole picture, then use a negative amount of "defringe" to paint over the areas that the tool has wrongly desaturated.
The colour balance of a scene is how much of certain colors, in relation to each other, is used to represent a scene accurately. A scene may consist of direct light (which may be filtered) or indirect light (which is reflected off an object). Most scenes are lit by relatively neutral "white" light, and judging the qualities of this light (white balance) makes sure that we reproduce a scene correctly. Sometimes a standardized card with colors (a ColorChecker) can be photographed with the object in the photo to make sure the colors are correctly balanced.
In photo processing programs, the colors usually balanced against each other are:
- Cyan ← . → Red
- Magenta ← . → Green
- Yellow ← . → Blue
A colour cast is an undesirable tint in a photograph. This may be due to the wrong choice of white balance for the lighting in the scene, or to strongly coloured direct or reflected light. For example, a scene lit by incandescent light but with a white balance chosen for daylight will have a strong yellow cast. The opposite situation results in a blue cast. Some fluorescent lights have a slight green tint. Since our eyes adjust to the scene, we may not spot a colour cast when taking a photo. For example, a nearby green wall may produce an unhealthy colour cast in the skin of anyone standing nearby.
A mathematical colour model is a way of representing a colour in an image by combining three or four "colour primaries". For example, the RGB colour model (used for computer screens) adds red, green and blue light together. A value of 0 for red, green and blue makes black, and the maximum value for all three makes white. The CMYK colour model (used for printing) instead combines cyan, magenta, yellow and black inks on a white page, each ink subtracting some of the light reflected from the page.
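As an illustration of the difference, here is a naive sketch of converting RGB values to CMYK ink amounts (real printing workflows use colour-managed conversions, not this simple arithmetic):

```python
# Naive RGB (0-255) -> CMYK (0-1) conversion, for illustration only.
def rgb_to_cmyk(r, g, b):
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)          # amount of black ink
    if k == 1:
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(0, 0, 0))        # (0.0, 0.0, 0.0, 1.0) -> only black ink
print(rgb_to_cmyk(255, 255, 255))  # (0.0, 0.0, 0.0, 0.0) -> no ink, white page
```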
A colour profile is a standard method of accurately defining the colour space used by a photograph. Typically a profile format created by the International Color Consortium (ICC) is embedded within an image file such as a JPG or TIFF. The profile ensures the image appears accurately on a computer display.
Another use for a profile is to record the characteristics of a computer display. This can be measured using a colourimeter along with calibration software. By knowing the profile for a given display, the operating system can then display the colours of an image with precision. A computer display may be supplied along with a profile that generally applies to all such models made by that manufacturer. For better accuracy, the user can profile their own monitor, and repeat this at regular intervals. Without this profile, the operating system just assumes the monitor is close to the sRGB standard.
The colour profile in a JPG can be discovered using Jeffrey Friedl's Image Metadata Viewer.
A colour space combines a colour model with a set of parameters to define the gamut of colours that may possibly be displayed. Examples of colour spaces include sRGB, Adobe RGB and ProPhotoRGB. The sRGB colour space is the most common and the default on the internet. The colour space used in an image file is typically defined by the use of EXIF tags in the file and an embedded colour profile.
- sRGB is the standard colour space used for images on the internet. It can represent a useful range of everyday colours and is supported by a wide range of monitors and screens. It cannot accurately represent some very saturated colours (e.g., a London Bus) or all the colours available to print publishers.
- The Adobe RGB colour space has a slightly larger gamut of colours than sRGB. It can better represent some of the colours available to print publishers, so is often a good choice when preparing a photograph to be published on paper. However it is not widely supported by computer screens and so only a tiny minority of internet viewers will actually be able to see the extra colours. Therefore it is recommended to use sRGB for photographs published on the internet.
- The DCI-P3 (Display P3) colour space has a large gamut of colours (about 25% larger than sRGB). It was designed for digital movies in cinemas. Some recent computer screens and mobile phones can display most of this colour space, and it is used in the HEIF image files produced by some Apple phones.
- The Rec. 2020 standard defines a colour space with a very large gamut of colours. It is the standard for the latest ultra high definition video. It greatly exceeds the ability of current display technology, with the current state-of-the-art being the smaller DCI-P3 colour space.
- The ProPhoto RGB colour space has an extremely large gamut of colours, including some that are beyond the limit of human vision. It is designed to be used only within advanced photo editing software like Photoshop on file formats with sufficient bit depth to accurately represent these tones without posterisation (e.g. 16-bit TIFF). It is not intended to be used in any file that is shared, displayed or published. If a JPG is found to be using this colour space, then that is a mistake.
The colour temperature is a measure of the quality of white light used to illuminate a scene. It relates to the temperature of an "ideal black body radiator" and is measured in Kelvin. It ranges from the orange flame of a candle, the yellow of an incandescent light bulb, the hard white of mid-day or an electronic flash, to the blue white of overcast sky. Strangely enough, we tend to describe the feel of the white light in the opposite terms, with yellow-orange light described as warm and pure blueish light described as cold.
A composition is a deliberate arrangement of the elements of a picture in relation to each other and with the picture frame. A good composition is visually appealing whereas a bad composition appears random, accidental or awkward. There are many aspects to composition and although there are "rules" or approaches that are commonly successful or typically pleasing, many successful photographs deviate from these. Unlike a painting, the arrangement of elements in a photo cannot be a fiction and there may be limitations on how much the elements and the photographer themselves can be moved or positioned in reality.
- The elements in a photograph include a single point; multiple points; horizontal, vertical or diagonal lines; curves; eye-lines; circles, triangles, and rectangles. These are distinguished or contrasted by properties such as shape, form, size/length, quantity, texture, colour and tone. If these properties are not clearly distinct then the subject may be lost. Sometimes the contrasting properties are themselves the subject or what catches the eye. Too many different properties or elements make the composition busy. One common approach for a strong composition is to keep it simple, by having just a few different properties or elements in the frame.
- Most photographs have a subject that is the focus of attention and purpose. The simplest and often best way to present the subject is to fill the frame with it, which also ensures the subject is identifiable when displayed as a small thumbnail. This approach may be taken to the extreme where the subject itself is partly cropped (such as a very close headshot).
- If the subject is smaller than the frame, there will be more background to consider and usually also a foreground, which both help provide context. The photographer must then decide where to position the subject: a central position may work well for a symmetrical composition but often an off-centre position (such as if following the rule of thirds) is better. Avoid placing a subject so they look out of the frame. The foreground may include features that help lead the eye towards the subject. The background should not distract from the subject and is often kept simple or out of focus. A relatively featureless area between and around key elements in a photo is known as negative space and helps emphasise their importance.
- The lens focal length alters the magnification of the subject in the frame, while the distance-to-subject alters the proportionate size of foreground and background elements.
- The camera viewpoint affects the composition dramatically. The viewpoint height and camera angle changes the proportion of foreground and background within the frame. A scene shot while standing with the camera at eye-level looking straight ahead may produce a very normal viewpoint, but also one that isn't very interesting or novel. People and animal subjects are often best shot from a viewpoint at their eye-level, rather than looking up or down at them. The camera is usually held level with the ground, and any deviations from this should be pronounced so they appear deliberate rather than accidental.
- Architectural perspective is a formal and popular style of composition for buildings. The camera is held level to ensure vertical lines do not converge, and is pointed perpendicular to the facing wall to ensure horizontals on that wall are level. To ensure a modest angle-of-view, minimising distortion, the photo should be taken from a distance at least as far as the building is tall or wide.
- A photograph is framed by the camera and may later be cropped when developed or processed, though this distinction is not apparent to the viewer, who may refer to the "framing" or "crop" interchangeably. The framing is usually a landscape rectangle, portrait rectangle or, less commonly, square. The typical aspect ratio of most cameras is 3:2 but there is nothing artistically special about this ratio, and landscapes are often wider than 3:2, while portraits are often shorter than 2:3. The frame may interact with elements, aligning with sides or intersecting with corners. Some images feature a frame within a frame.
- Elements may be arranged to achieve balance and harmony the eye normally seeks, or less commonly to provoke dynamic tension and discomfort in the viewer. The eye readily spots a pattern in repetition, which may be cropped so it appears to continue beyond the frame. A powerful technique is to juxtapose two elements: to arrange them specifically for contrasting effect. A common form of balance is symmetry which may be a feature of an element itself, the arrangement of elements or a reflection. Symmetry may be vertical, horizontal or radial.
- The 2D shape of an element and the horizontal and vertical relationship between elements is directly conveyed in a 2D photograph. However, the 3D form of an element and the depth relationship can be harder to convey. By overlapping elements, the viewer sees that the nearer object partly obscures the distant one. We tend to expect objects at the top of the frame to be distant and at the bottom of the frame to be nearby. Depth can be indicated by perspective: linear perspective (lines converging with distance), diminishing perspective (objects appears smaller with distance) and aerial perspective (increasing haze with distance). A wide-angle lens creates a stronger depth perspective and a telephoto creates weaker depth perspective. Unique to photography, focus (sharpness) provides an important clue to depth: elements in front or behind the subject are out-of-focus and increasingly blurred with distance. The lens aperture and distance-to-subject control the depth-of-field, with a shallower depth-of-field creating a stronger depth perspective. An element's 3D form and texture is indicated by tonal change due to light falling on and reflecting off the surface at an angle.
Compression is a method (algorithm) used to reduce the space taken by an image file. Some file formats like BMP and some camera raw formats are not compressed at all, with every pixel defined by a fixed number of bits of data. This makes them fast and simple to generate but bulky to store or download. Ideally we want to compress the image without losing any quality: lossless compression. The PNG and TIFF formats use lossless compression, similar to a ZIP file, but the level of compression is limited to about 50%. With the JPG file format we can easily get about 80% compression and even 95% compression is almost indistinguishable from the original.
Because the image quality is reduced a little each time a JPG file is saved and re-opened, it is not a good format for making a sequence of edits, such as when retouching or restoring a photo or scanned work. The compression artefacts produced by a lossy compression algorithm make it less suited to graphic and text works and better for photographs.
Compression artefacts are defects in the image caused by the lossy compression algorithm (e.g., JPG or MPEG). The algorithm aims to reduce the size of the file by approximating the image in a way that is hard to detect by eye. With increased compression, the approximation becomes so imprecise that the defects become highly visible.
The algorithm used for JPG cannot handle the abrupt transition in a high contrast edge or in finely detailed elements, such as when reproducing graphic design and text. The fine detail is smudged and an edge may appear soft or contain small adjacent spots (like gnats flying around). These can be visible under magnification even for JPGs with low compression. This is why PNG is preferred over JPG for computer-generated art or when reproducing a poster or printed page. Photographs generally do not contain such hard edges.
All JPGs are composed of 8x8 pixel tiles. At high compression, the square shape of these tiles becomes visible in the image.
In a video stream, the compression algorithm also approximates the changes from one frame to the next, and only transmits a totally new frame intermittently. This can mean that a glitch in the transmission affects not only the frame containing the error but also persists for a short while afterwards. It also means that fast moving scenes are harder to compress without generating artefacts. For example, while the grass in a football pitch may be rendered with detail in a still image, it is reduced to a green block when the camera pans to follow the ball.
Other artefacts can be due to a reduction in bit-depth leading to a loss of dynamic range or in the smoothness of tonal changes. This posterisation is particularly noticeable in the sky or on skin. The GIF file format, with only 256 colours, is particularly prone to this issue.
The contrast is the ratio between the darkest tone in an image and the brightest tone in an image. This is also known as global contrast. A high contrast scene may include bright highlights and dark shadows. A low contrast scene might occur in fog or a gloomy interior. High contrast is generally appealing to the eye but does not always suit the mood or appear realistic. Often when the contrast in an image is increased by software, the saturation of colours also tends to increase.
Local contrast considers only a small region of the image. Increasing local contrast can give a photo more "punch" without affecting the global contrast. The Clarity control in Adobe Lightroom or Adobe Camera Raw adjusts the local contrast.
The fine contrast at an edge or between pixels (also known as acutance) affects the sharpness or detail recorded. An unsharp image can be sharpened somewhat by software, though there are limits to what can be recovered. At an edge between dark and light, the sharpening algorithm increases the fine contrast by darkening the edge of the dark area and lightening the edge of the light area. Applying too much sharpening can make this effect too obvious, leading to a visible halo.
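As a small illustration of sharpening in software, here is a sketch using the Pillow library (the file names are hypothetical; too high a "percent" value produces the halos described above):

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")  # hypothetical input file

# Unsharp masking increases the fine contrast at edges.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
sharpened.save("photo_sharpened.jpg", quality=95)
```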
Poor quality lenses have low contrast, affecting the impact and sharpness of photos taken with them. Lens flare can also reduce the contrast, making the image uniformly too bright.
To crop an image is to remove unwanted outer parts of it. This may be done to give the photo different proportions (a different aspect ratio), to get rid of undesirable things in the picture, or to change the balance of the composition. As a noun, it can also mean the result of cropping. The term can also mean "framing", as in how the photographer has selected the composition within the camera's frame, even if no part of the photo has been cut away.
When making a crop using image editing software, there is often a choice as to whether to permit any proportions or a predefined aspect ratio. A grid may also overlay the selected area, such as lines for the rule of thirds.
Crushed blacks occur when the detail in the darkest part of the image is lost and it all looks uniformly black. This may be a result of under-exposure; post-processing for effect; or limitations of, or incorrect calibration of, a display device. The term comes from the idea that the variety of black tones have all been crushed down to 0, or so close to it as to be indistinguishable. Some display technologies are incapable of displaying true black, so any tone darker than a certain limit is simply rendered very dark grey. Adjusting the black point in a tone curve or using the "Blacks" exposure slider is the easiest way to crush blacks in post processing. Certain photographic films and film processing techniques resulted in crushed blacks, and so reproducing that effect in the digital age can be an attempt to give an image a "vintage" look. See also: blown-out.
Dead pixels or hot pixels are small dots in bright colors that can appear in a photo due to some technical error or camera shortcoming. The dead pixels occur in the original photo and are not the result of post-processing. Dead pixels are more common in long exposure photos taken in weak light than in daylight photos. The spots are usually only visible when the photo is viewed at 100%. Click on the example photo to see them better.
Depth of Field (DOF or DoF) is decided by the given lens opening (aperture) or f-stop. It is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in a photo. A small aperture (large f-number: f/16, f/22, etc.) will give "large DOF"; the image will be sharp/in focus from the foreground to infinity. A large aperture (small f-number: f/1.8, f/2.8, etc.) will give "shallow DOF". Depending on the type of lens used, DOF can be as shallow as a few millimeters, for example when using macro lenses or extension tubes. Good photographers know how to make use of DOF and/or use selective DOF to make their images more interesting. Some types of special lenses, known as "Lens Baby", can generate a very specific type of selective DOF, reminiscent of the imperfections of lenses from around the turn of the 19th century, where the image gets blurred and/or distorted towards the edges. With a "Lens Baby", however, this selective focus blur/distortion can be controlled.
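For those who want numbers, here is a simplified sketch of a depth-of-field estimate using the standard thin-lens approximations (real DOF calculators and tables differ slightly; the circle of confusion value is an assumption for a full-frame sensor):

```python
# Rough near and far limits of acceptable sharpness, all distances in millimetres.
def depth_of_field_mm(focal_length, f_number, subject_distance, coc=0.03):
    hyperfocal = focal_length ** 2 / (f_number * coc) + focal_length
    near = hyperfocal * subject_distance / (hyperfocal + (subject_distance - focal_length))
    if subject_distance >= hyperfocal:
        return near, float("inf")  # everything from 'near' to infinity is sharp
    far = hyperfocal * subject_distance / (hyperfocal - (subject_distance - focal_length))
    return near, far

# 50 mm lens focused at 3 m (3000 mm):
print(depth_of_field_mm(50, 2.0, 3000))  # roughly 2800-3230 mm -> shallow DOF wide open
print(depth_of_field_mm(50, 16, 3000))   # roughly 1920-6830 mm -> much deeper DOF stopped down
```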
Diffraction
In photography, diffraction usually refers to aperture diffraction, where very small apertures (large f-numbers, like f/22) cause an image to appear somewhat blurry or unsharp. This is because light passing through a very small opening spreads out and becomes less focused; see Airy disk for more information. If you need a wide Depth of Field, but diffraction is making your images blurry, you may wish to consider Focus stacking.
The direction of light has a big impact on how the subject or scene appears. It is most apparent when there is a single hard light source and least apparent when there are multiple and/or soft light sources. Common light sources are the sun, flash or studio strobe lights, indoor lighting, windows, candles, and reflected light. The dominant or main light source in a scene is known as the key light.
- Front lighting has the light source in front of the subject and behind the photographer. It evenly lights the subject with low contrast and few shadows. The subject may then appear flat and featureless. A ring flash creates front lighting that can be used for beauty fashion photography, as it hides skin imperfections and does not cause any shadow on the model's face.
- Side lighting has the light source to the side of the subject and photographer. It strongly lights one side of the subject and leaves the other side dark. This emphasises form and texture, creates shadows and increases the contrast. Side lighting is called raking light when used to bring out texture, detail and technique in a painting or other rough surface. In a portrait where the subject's face is angled slightly in relation to the camera, the side lit with the key light is important. Short lighting has the key light shining on the side of the face turned away from the camera (the short side). Broad lighting has the key light shining on the side of the face turned towards the camera (the broad side). Short lighting is usually more flattering, making the face appear slimmer, with more shadow and contouring.
- Back lighting has the light source behind the subject and in front of the photographer. As with front lighting, the subject has low contrast and few shadows, though the scene as a whole may have high contrast and shadows. The subject is darker than the rest of the scene, which can make it difficult to expose the photograph correctly. Spot metering helps the photographer expose for the subject rather than the background. Back lighting is also known as contre-jour, French for "against the day". A semi-transparent or light-bending subject (e.g., water drops, crystal glass, leaves, a white veil or hair) can behave in the opposite way when backlit. Instead, it appears brightly illuminated against a darker background.
- Silhouette takes back lighting to the extreme where the background is bright and often evenly lit, and the subject is very dark or even black. Only the outline of the subject may be visible.
The height of the light source is also important. Light from directly above or directly below the subject is rarely ideal. A high midday sun effectively front-lights the ground, creating much smaller shadows and less interesting surface detail. In a portrait, a very high light position can cause a dark shadow below the nose and under the chin. A portrait lit from below is rarely flattering, producing a sinister look.
When relying on the sun, the photographer may have to choose the time of day or reposition themselves and the subject to achieve a particular direction of key light. A photographer can position flash guns or studio strobes anywhere. If the key light produces shadows on the subject that are too strong or dark, or if the subject is backlit, it may be brightened with a fill light. This may be a reflector or a flash gun set to fill flash mode. The photographer may also choose to add a rim light behind the subject to highlight hair or help separate the subject from a dark background.
With back lighting, unless the light source is out of the frame or hidden by the subject or a lens hood, it can shine directly into the lens, which causes lens flare. Lens flare may appear as hexagonal shapes in the frame, which may or may not be a pleasing effect. It can also cause a loss of contrast and too much brightness. Lens coatings help reduce or eliminate lens flare.
Downsampling
This term, in general, refers to the reduction of the sampling rate of a signal. In photography, it is used to describe the process of reducing the pixel resolution of an image, usually relative to the actual original resolution of the camera sensor with which the image was taken. This is often done to reduce the file size, to make the image look sharper or to reduce the noise you see in it when viewed at 100% magnification. Example: if your camera can take photos that are 4928 x 3264 pixels and you make such a photo smaller, ending up with 3000 x 1987 pixels, you have downsampled the photo. See also: Upsampling
In downsampling, some information in the image is irretrievably lost, which is why it is generally discouraged on Wikimedia Commons.
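A minimal sketch of downsampling with the Pillow library, matching the example above (file names are hypothetical):

```python
from PIL import Image

img = Image.open("original.jpg")  # e.g. 4928 x 3264 pixels straight from the camera
print(img.size)

# Reduce the pixel resolution; Lanczos resampling is a common, high-quality choice.
smaller = img.resize((3000, 1987), Image.Resampling.LANCZOS)
smaller.save("downsampled.jpg", quality=95)
```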
Dust spots
Dust spots (DS) are small, often dark, spots that mysteriously show up in a photo. Most of the time they are the result of dust or small raindrops on the lens or on a filter (if one is used). Cleaning the lens will get rid of them in the future, and with an image processing program they can be removed from digital photos. They may also appear due to dust on the sensor. If the spots are sensor dust, they usually show up in almost the same place on every photo. The sensor can be dusted, but it might be best to let a professional camera workshop do it.
(to be written)
EV
In photography, EV is short for exposure value. Each unit of exposure value, 1 EV, is also known as a "stop". On Commons, EV may also refer to the "educational value" of an image. The most obvious type of educational value is when a photograph illustrates a physical subject well. However, the educational value may also come from the emotions generated by an image or other abstract concepts. On Wikipedia, EV may refer to the "encyclopaedic value" of an image, which is different from "educational value" as it only considers the usefulness of an image to Wikipedia articles.
EXIF (short for Exchangeable image file format) is a standard for embedding metadata tags into a media file such as JPG, TIFF and some raw file formats. This supplies additional information about the image alongside the data for the image itself. Some tags are designed to be read by people and some to be processed by image software. EXIF data may be recorded by the camera (exposure settings, camera and lens used, date and time), added by the post-processing software (adjustments made to the image), or added by a person (title of the photo, creator, copyright).
Commons displays a short list of EXIF tags in a table at the bottom of the file description page (see Commons:Exif). All the EXIF tags in a JPG can be viewed using Jeffrey Friedl's Image Metadata Viewer. The best tool for examining and manipulating EXIF tags is Phil Harvey's EXIFTOOL.
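A minimal sketch of reading a few EXIF tags with the Pillow library (the file name is hypothetical; exiftool gives far more complete output):

```python
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("photo.jpg").getexif()
for tag_id, value in exif.items():
    # Translate numeric tag ids into readable names such as 'Model' or 'DateTime'.
    print(TAGS.get(tag_id, tag_id), value)
```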
The exposure is how much light, per unit area, reaches the camera sensor when the photograph is taken. It is affected by the brightness of the scene, the size of the lens aperture (hole) and the shutter speed. If too much light reaches the sensor the photo may become too bright or overexposed; too little light and the photo may become too dark or underexposed. The sensor has a limited useful exposure range (dynamic range) and so the exposure settings are adjusted to try to fit the range of brightnesses in the scene into that range. Unlike film, the sensitivity to light of a digital sensor is fixed, but the amount of gain (amplification) applied can be controlled. This is commonly referred to as the ISO setting, as it emulates the various ISO film speeds.
All the aspects of exposure, as well as ISO, can be controlled manually by the photographer, or partly or fully automatically by the camera. Often the photographer will fix some aspects to achieve a desired effect (e.g., fast shutter speed to freeze action) but allow the camera to alter the others (e.g., aperture). A flash gun with through-the-lens (TTL) metering can also be set to automatically control the amount of light added to the scene.
A particular combination of shutter speed and aperture has a given exposure value (EV). If one takes the aperture down a stop from f/2.8 to f/4, this halves the amount of light reaching the sensor, so the exposure duration (shutter speed) must be doubled to compensate. A shot at 1/500 s & f/2.8 has the same exposure value as 1/250 s & f/4. Each halving or doubling of light is 1 EV, which is also sometimes called a "stop".
When one considers the final image, the ISO setting can be added to this balance of camera settings, which some people refer to as the exposure triangle. To achieve a similar image brightness, one could double the ISO from 100 to 200 rather than double the exposure duration.
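A small sketch of the arithmetic, using the usual definition of exposure value, EV = log2(N²/t), where N is the f-number and t the shutter speed in seconds:

```python
from math import log2

def exposure_value(f_number, shutter_seconds):
    return log2(f_number ** 2 / shutter_seconds)

print(round(exposure_value(2.8, 1 / 500), 1))  # ~11.9
print(round(exposure_value(4.0, 1 / 250), 1))  # ~12.0 -> practically the same exposure
```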
The automatic exposure system in a camera is flexible and often chooses good settings. The photographer can use the exposure compensation setting on the camera to force it to make an image brighter or darker than it would otherwise choose.
A single shutter cycle of a camera is also referred to as an "exposure". Some cameras allow the photographer to take multiple exposures and combine them into one image. A similar effect can also be achieved by taking several photographs and combining them on a computer afterwards.
Exposure bracketing is when a series of several shots are taken of the same scene but with varied shutter speeds above and below a given setting. Many cameras allow this to be done automatically, rapidly taking three or five exposures. Each exposure may be as little as one third of a stop or as much as two stops apart. This may be done because the photographer is uncertain which exposure to use, or to make a high-dynamic-range (HDR) photograph.
Exposure lock
Exposure lock is where the photographer uses the camera to automatically set the exposure and then locks that exposure until the photograph is taken later. A common need for this is when the frame contains a large variation in brightness, such as a backlit subject, and the subject is not in the centre of the frame. The photographer points the centre of the frame at the subject, uses the centre spot metering to set an accurate exposure for the subject, locks the exposure, and then recomposes to move the subject off to the side before taking the photograph. The camera may have an AE-L button, which is sometimes labelled with a * instead, to lock the automatic exposure.
The technique of using the central spot to meter and then recomposing, is similar to the focus-recompose technique. Some modern cameras can be made to expose for the same subject they have chosen to focus on, removing the need for this technique.
Extension rings or tubes are used on both digital and film SLRs as well as medium format cameras (6×6, 4.5×6, 6×9 cm) to achieve close focus in close-up and macro photography. To achieve close focus on large format (4×5″, 8×10″) technical cameras, bellows are used.
F–J
Field of view (FOV), or angle of view, is the extent of the scene that can be captured on the camera sensor or by the eye. Typically the horizontal angle of view in degrees is used, but the vertical and diagonal angle of view can also be relevant. For a given distance, the field of view can be measured as an area, in metres for example. The human eye has a field of view around 135° vertically and 180° horizontally, though binocular vision only extends to around 120° horizontally. Much of this is blurred, however, so the area we can see well is limited to about 55°. This corresponds to the diagonal field of view of a 43mm lens on a full frame camera (i.e., a 50mm "standard" lens). An ultra wide angle lens of 12mm has a horizontal field of view around 120° whereas a telephoto lens of 300mm would only see around 8°.
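A minimal sketch of the usual angle-of-view calculation (assuming the subject is far away, so focus breathing can be ignored):

```python
from math import atan, degrees

# Angle of view = 2 * arctan(sensor dimension / (2 * focal length)).
def angle_of_view_deg(sensor_dimension_mm, focal_length_mm):
    return degrees(2 * atan(sensor_dimension_mm / (2 * focal_length_mm)))

# Full-frame sensor: 36 mm wide, 24 mm tall, about 43.3 mm diagonal.
print(round(angle_of_view_deg(36, 50), 1))     # ~39.6 deg horizontal for a 50 mm lens
print(round(angle_of_view_deg(43.3, 300), 1))  # ~8.3 deg diagonal for a 300 mm lens
```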
A filter is an accessory that modifies all the light that falls on the sensor or film. One kind is a square of glass that is slotted by the photographer into a special frame screwed onto the front of a lens. More commonly, the filter is a disc of glass held in metal frame that screws directly onto a lens. The size of camera lens varies so this can require owning filters in different sizes, or using adjustment rings so the filter can be attached to a smaller lens. The effects of many (but not all) filters can be simulated by software in the camera or computer. Filters vary in price and quality filters require multiple anti-reflection coatings just as lens elements do. Some filters also have coatings to repel fingerprints or encourage water to run off. Occasionally, lenses cannot have a filter attached to them, because the front element bulges outward or is very large, and these may have a slot in the lens body where special filters can be inserted.
- Protective filter, is not strictly a filter at all, but a clear circular piece of glass that is kept on the front of a lens to protect it and that can be replaced if damaged. The use of a protective filter is controversial. A cheap protective filter will reduce the sharpness and contrast of the image and can be prone to flare.
- UV filter, filters out ultra-violet light. Film is sensitive to ultra-violet light but digital camera sensors are not. A UV filter will reduce blue haze on photos shot with film but is often simply used as a protective filter with digital cameras.
- Neutral-density filter or ND filter, darkens the image without affecting the colour. This allows the photographer to use a longer exposure than would otherwise be possible, or to maintain a desired shutter speed for video. A variable ND filter allows the photographer to change the light reduction by rotating the filter, though such filters are usually optically poor. An extreme ND filter (such as a 10-stop reduction) appears almost black. It allows for very long exposures, making the sea milky smooth and causing clouds to streak across the sky. When using an extreme ND filter and a DSLR, it may be necessary to cover up the eyepiece, as any light leaking in can spoil the image. A cheap extreme ND filter can have a colour cast and may only be useful for black and white photography.
- Graduated neutral-density filter, is a square filter that varies in darkness from top to bottom. The most common use is to darken a bright sky ("hold back the sky") to enable a single exposure to capture the sky and land. The filter may be "soft" with a gentle graduation or "hard" with a rapid graduation. A difficulty with such filters is that the sky is often brightest nearest the horizon, and the horizon may be interrupted by land features that are affected by the darkening graduation. An alternative is to take multiple exposures and blend them in software afterwards. As the dynamic range of digital sensors has improved, a single exposure without a filter may also be sufficient in many cases, and the filter applied in software should the photographer wish for a darker sky. Darkening the sky a lot can look dramatic but can also look fake or clichéd ("over-gradding").
- Polarizing filter, has two effects. It is a neutral-density filter with about 1.5 or 2 stops of light reduction. It also removes light polarised in one direction. The filter can be rotated to maximise the effect. It removes reflections from glass or water, and enhances foliage. It darkens the blue sky in an area of 15–30 degrees at 90 degrees to the sun (the filter ring may have a white dot that should be rotated to point at the sun). Polarising filters are discouraged when using a wide-angle lens since only some of the sky will be darkened. This filter cannot be simulated in software.
- Colour correction filter, was common when using film. The film loaded in a camera would be optimised for daylight only or for artificial light, so if the photographer found themselves in a different lighting situation, a blue or yellow filter would compensate. When shooting black and white film, different colour filters can be used to lighten or darken specific colours; this can also be simulated in software.
To focus
[edit]To focus means to adjust the lens so that one part of your photograph is sharp. This is usually done automatically; some photographers choose to do it manually, which requires a camera or lens that offers manual focusing.
To focus on
[edit]To focus on something means to choose a part of the photograph in which you are interested, and thus want to be sharp.
In focus
[edit]When a subject or element is in focus, it means that this element is within the depth of field range around the focal plane. As a result, the element will be sharp in the resulting photo.
Out of focus
[edit]When a photograph is taken out of focus, it means the whole photograph is blurry.
The focal length of a lens is defined as the distance between the centre of the lens and the focused image. However, photographic lenses do not consist of a single thin element, so the physics becomes more complicated. The longer the focal length, the narrower the field of view seen by the camera sensor. For a full frame camera, a focal length around 50mm is considered "standard". A "wide angle" lens has a much shorter focal length, and a "telephoto" lens typically has a much longer focal length. A lens with a fixed focal length is called a "prime" lens, and a lens that can vary its focal length is called a "zoom" lens.
Focus lock
[edit]Focus lock is where the photographer focuses on a subject and then locks that focus until the photograph is taken later.
A common use for this is the focus-recompose technique. This is where the photographer points the centre of the frame at the subject, uses the centre focus point to auto focus, and then recomposes to move the subject off to the side before taking the photograph. The technique was common when cameras only had one central focus point (or optical focusing aid for manual focus cameras), or the focus points were bunched in the centre of the frame. Many modern cameras allow the photographer to set a focus point anywhere they want the subject to be in the frame, and this is much more accurate than using focus-recompose.
Another use for focus lock is where the photographer is taking many frames for a stitched image, which all have to have identical focus. Similarly, in macro photography focus stacks made with a focusing rail require the focus set on the lens to remain fixed.
On autofocus cameras, focus lock is achieved in many ways. If the camera is set to single-shot autofocus mode, then half pressing the shutter button will autofocus and lock. The photographer can then recompose before fully pressing the shutter button to take the shot. This does not work in continuous autofocus mode, as the camera would keep changing focus. The camera may have a focus lock button (AF-L) or there may be a focus hold button on the lens. This locks the focus when pressed. Another option is to disable the autofocus when the shutter button is pressed, and assign that feature to another button on the back of the camera. This is known as back button focus. Having two separate buttons gives the photographer freedom to choose when to focus and when to take the shot. Lastly, the photographer may also use autofocus initially but then switch the camera to manual focus, which will retain the current focus position. That is more useful when taking many shots with the same focus.
Some macro and ultra-wide-angle lenses intended for long-exposure astrophotography offer a switch to lock the focus in place, making it less likely that the focus setting is accidentally changed when touching the lens. This is called either focus lock or focus clamp, depending on the manufacturer.
Focus stacking involves taking several photos with a gradual change in the focal plane relative to the subject and later combining these photos using image editing software. The focal plane can be adjusted either by changing the focus of the lens or by changing the distance between the camera and the subject.
Effectively, this makes it possible to extend the depth of field at a given aperture size. A tripod is almost always necessary for focus stacking to guarantee that the position of the camera does not change between the shots. The software can then easily align the photos and pick the most detailed area from each of the shots, combining them into one single image.
Focus stacking is especially useful in macro photography, where even relatively small aperture settings produce shallow depth of field, and an even smaller aperture would reduce the sharpness of the resulting image due to diffraction. It is also commonly used in landscape photography, when combining elements on the ground close to the camera with a distant landscape.
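As a rough illustration of the principle only (not the algorithm used by any particular stacking program), the following Python sketch with OpenCV and NumPy assumes the frames are already aligned and simply keeps, for every pixel, the frame in which that pixel shows the most local detail. The file names are hypothetical.

```python
import cv2
import numpy as np

def naive_focus_stack(paths):
    """Very naive focus stack: per pixel, keep the frame with the most local detail."""
    images = [cv2.imread(p) for p in paths]                     # assumed already aligned
    grays = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in images]
    # The absolute Laplacian responds strongly to in-focus edges and texture.
    sharpness = np.stack([np.abs(cv2.Laplacian(g, cv2.CV_64F)) for g in grays])
    best = np.argmax(sharpness, axis=0)                         # sharpest frame per pixel
    stack = np.stack(images)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Hypothetical file names for a short stack, nearest focus first.
cv2.imwrite("stacked.jpg", naive_focus_stack(["stack_1.jpg", "stack_2.jpg", "stack_3.jpg"]))
```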
Format
[edit]Format shape
[edit]Landscape, portrait, square, or panorama are terms used to describe the shape of a photo, regardless of what is really photographed. The formats have been named after the sort of photos that are usually taken in that format.
- A landscape format is a photo that is wider than it is high:
- A portrait format is higher than it is wide:
- A square format is exactly as high as it is wide:
- A panorama format is often very wide in relation to height:
Format is sometimes known as orientation since it depends on which way your camera is oriented, horizontal or vertical. Randall Munroe also mentions the 'diagonal'.
A file format is a standard way that information is encoded for storage in a computer file. There are some common formats used for photographs or graphics.
- GIF is a bitmap image format. A GIF image can only contain up to 256 colors, so it is not good for normal photos and is mostly used for small web images, animations or images where you need a transparent background.
- HEIF is a file format intended to replace JPG. See also: HEIF
- JPEG (or JPG) is a common format with irreversible compression. It is used for digital photography. The compression can cause visible artifacts (undesirable small distortions).
- PNG is a format with lossless data compression (the compression can be reversed) which also supports transparency. It is used for graphics or animations.
- Raw file is made in the camera and contains the original, unprocessed information collected by the sensor. It holds a lot of information and is very large. See also: Raw
- SVG is an XML-based format for vector images. Good for line drawings, but a terrible choice for photographs.
- TIFF is a universal format for graphic or photographic images. TIFF is a flexible container that can store image data with lossless or, less commonly, lossy compression.
A digital camera sensor that has the same physical size as a single frame on standard 135 film (popularly referred to as "35 mm film"), which is 36 mm × 24 mm. This can typically be found on high-end DSLR and MILC cameras; most other cameras have smaller sensors. Despite the name, Full Frame is much smaller than both Medium and Large format.
The colour gamut is the total range of colours possible in an image file and is limited by the way colours are represented in the file (the colour space).
A display monitor, or screen, which can display the sRGB colour space (which is the standard for the internet) is described as a "standard gamut" display. Some screens aimed at professional use can display the slightly larger Adobe RGB or significantly larger DCI-P3 colour spaces and are described as "wide gamut" monitors.
(to be written, meanwhile see Commons:Geocoding)
HEIF is a file format intended to replace JPG. It has a better compression algorithm and a larger allowed bit depth, and is often used with the DCI-P3 (Display P3) colour space, which has a larger colour gamut than the sRGB colour space typically used for JPGs on the internet. It has been made popular by some recent Apple devices, though it isn't as widely supported as JPG and is covered by patents.
High-dynamic-range or HDR is a technique that improves the range of color and contrast in a digital image. An HDR photo is an image created by taking several photos of the same scene with different exposures and merging them into a single image. Most modern digital cameras offer an HDR option that captures multiple photos with different exposures with one press of the button. Instead of taking a single photo, the camera captures three or more photos in quick sequence using different exposure values. For example, a digital camera might take three photos: one with a -1 exposure setting, one with the default (automatic) exposure, and one with a +1 exposure setting. The camera (or smartphone) can store the images separately for further post-editing and/or combine the photos automatically to create a single image.
HDR image processing is useful when taking photos of scenes with high contrast. For example, if you take a photo of a person standing in front of a window on a sunny day, the backlight might cause the automatic exposure to be too low, making the person's face appear dark. If you adjust the exposure to brighten the face, the window will become too bright — possibly completely white. By combining multiple exposures, these levels can be averaged out, allowing the face to be visible in the photo while not overexposing the light behind the person.
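A minimal sketch of the merging step, using the exposure-fusion (Mertens) method available in OpenCV's Python bindings. The file names are hypothetical and the frames are assumed to be already aligned; this is only one of several ways to combine bracketed exposures.

```python
import cv2
import numpy as np

# Three bracketed frames of the same scene, e.g. -1 EV, 0 EV and +1 EV.
frames = [cv2.imread(p) for p in ("ev_minus1.jpg", "ev_0.jpg", "ev_plus1.jpg")]

# Exposure fusion blends the frames without needing to know the exposure values used.
merge_mertens = cv2.createMergeMertens()
fused = merge_mertens.process(frames)            # float image, roughly in the 0..1 range

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```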
The highlights represent an upper portion of the histogram, above the mid-tones and below white. A common post processing step is to "recover the highlights" to uncover more detail for the viewer. If taken to an extreme, this may result in some false colours or halos, and generate an unnatural scene where very bright elements (such as light sources) are rendered merely off-white. The use of a graduated neutral-density filter in front of the camera lens is another method to lower the brightness of the sky and retain detail in the highlights.
A type of chart that shows the brightness of an image by displaying the number of pixels in each shade from complete black to complete white. The left side of the chart represents dark areas of an image (shadows), and the right side shows bright areas (highlights). The histogram is useful when taking digital photos and editing them, because it helps ensure a proper Exposure. If all or almost all of the pixels are somewhere close to the middle of the chart, you will usually have a good exposure. Photographers often discuss "clipping" the histogram; this means that significant parts of the image are so dark or so bright that they have no detail. If many pixels are leaning against or "clipping" the left side of the histogram, that means parts of the image are too dark, and you will not be able to brighten them later. If there are many pixels close to the right edge of the chart, parts of the image are too bright, and will appear simply as pure white. In this instance, your image may be described as blown-out.
You can access a photo's histogram on your digital camera display and again in your editing software. It is most useful when you have trouble seeing your image, as in bright sunshine. If an image seems too dark or too bright, but the histogram is not "clipped", you will likely be able to save the photo.
It should be noted that, even when taking only raw photos, histograms shown by the camera are based on the processed JPEG image in all currently available digital cameras. For this reason, a clipped histogram on the back screen of the camera does not necessarily mean that the information is actually lost and cannot be recovered later. This problem can be mitigated to some extent by choosing a relatively flat color profile when shooting raw.
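For readers who like to experiment, a brightness histogram and a crude clipping check are easy to compute yourself. This small Python sketch (the file name is hypothetical) counts how many pixels sit hard against either end of the brightness scale:

```python
import numpy as np
from PIL import Image

# Convert to 8-bit greyscale so brightness runs from 0 (pure black) to 255 (pure white).
pixels = np.asarray(Image.open("photo.jpg").convert("L"))

histogram, _ = np.histogram(pixels, bins=256, range=(0, 256))

# Share of pixels pressed against either edge of the histogram ("clipping").
print(f"pure black: {histogram[0] / pixels.size:.2%}")
print(f"pure white: {histogram[255] / pixels.size:.2%}")
```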
Image stabilisation or IS is a camera feature that compensates for camera shake and enables a sharp image to be taken hand-held at slow shutter speeds. There are three kinds, and some cameras can combine them together.
- Electronic image stabilisation takes the image from a slightly smaller region than the entire image sensor, and moves this region about to compensate for the camera moving in space. It is particularly effective when shooting video on full-frame cameras where the video is taken from an APS-C cropped region.
- In-body stabilisation shifts the camera sensor around. The sensor can be moved in the x-y axis, tilted vertically or horizontally, and also rotated. This cannot generally be done for long periods of time, making it less suitable for video.
- In-lens stabilisation uses a floating lens element, which is moved about. It cannot correct for rotation but is considered superior to in-body stabilisation for long focal lengths.
The stabilisation software can get confused if the camera is mounted on a tripod, leading to slightly softer images. So it is often recommended to turn it off when using a tripod, though some lenses can do this automatically. A switch on some lenses is used for panning shots.
The best stabilisation can extend the shutter speed for an acceptably sharp image by up to 4 stops. However, it isn't guaranteed, so taking several shots helps ensure that at least one of them is sharp. Stabilisation enables the use of lower ISO settings, meaning less noise, or a smaller aperture, meaning more depth of field, and it also helps when a given lens doesn't have a very large maximum aperture.
The heavier cameras used in professional movie making require heavy duty tripods or a system such as Steadicam where the whole camera is stabilised.
ISO
[edit]The International Organization for Standardization (ISO) rates the sensitivity (speed) of photographic film. A standard ISO 100 film is half as sensitive as an ISO 200 film and requires twice the exposure to achieve a similar image. Although the sensor of a digital camera has a fixed sensitivity, digital camera manufacturers have adopted this ISO concept to refer to the gain (amplification) applied to the exposure by the sensor. Just as a high ISO film tends to be grainier than a low ISO film, a high ISO setting on a digital camera produces an image with more noise than a low ISO setting. While a grainy black and white film photograph has some aesthetic appeal, most people find digital noise unpleasant. The maximum ISO settings on a DSLR can greatly exceed the highest speed films ever produced. The minimum ISO settings, however, are similar to film, with ISO 64 and ISO 100 being common, though some cameras do not go below ISO 200.
Many beginners are taught that ISO is one of the three variables in the "exposure triangle" along with aperture and shutter speed. In fact the third exposure variable is the brightness of the scene. ISO does have an effect on the brightness of the image recorded by the camera, so all four variables are important to consider when making an exposure.
K–O
[edit]See also: Lenses for SLR and DSLR cameras
In photography, the term lens typically refers to a series of optical lenses (called lens elements) at the front of the camera. The lens gathers incoming light and directs it onto the camera's sensor or film to form an image. In optics (and many languages other than English), the lens would be called an objective. For some types of camera, the lens is permanently attached to the camera body, while for others different lenses can be attached for different shooting situations. A good-quality lens produces a clear image rather than a blurred one.
See:
- Back lighting
- Broad lighting
- Catch light
- Direction of light
- Fill light
- Front lighting
- Key light
- Quality of light
- Rembrandt lighting
- Rim light
- Ring light
- Short lighting
- Side lighting
- Weather
Technically, a macro image is an image taken with 1:1 or greater magnification at the sensor/film plane. This means that if the object were 30mm across, it would measure 30mm across on the film negative. Greater than 1:1 magnification is possible with true macro lenses; for example, a 4:1 macro setting would enable a 5mm object to be 20mm across on the negative/sensor. The term has become misappropriated and used to refer to 'close up' photography in general, particularly by manufacturers of point and shoot cameras, whose lenses are rarely, if ever, capable of anything close to true 1:1 macro magnification. Modern digital cameras have a macro mode. This is a mode which enables close focusing, allowing the maximum possible magnification from the camera/lens.
(to be written)
The mode dial on a camera is a knob that allows the photographer to choose which exposure settings on the camera are made manually or automatically. Some of these choices may appear on an additional dial or menu option rather than on the mode dial. For example, Auto ISO is usually an option on a menu or separate dial. The following choices are common:
- Auto. The camera decides all exposure variables: aperture, shutter speed, ISO and pop-up flash. Sometimes it also detects what kind of scene is being shot, and whether to apply advanced features like HDR or multi-frame-noise reduction.
- P (Program). The camera will choose the aperture and shutter speed.
- A or Av (Aperture priority). The photographer chooses the aperture and the camera chooses the shutter speed.
- S or Tv (Shutter priority). The photographer chooses the shutter speed and the camera chooses the aperture.
- M (Manual). The photographer chooses the shutter speed and aperture.
Other options can include a choice of scene (Action, Portrait, Night portrait, Landscape, Macro, etc) and to choose between still photography and video. The acronym PASM refers to the choices on this dial for some camera brands.
Megapixels or MP or MPx is the number of pixels in a photograph, camera sensor or display, in millions. If a photo is 4928 × 3264 pixels, it is 16.08 megapixels. MP is not to be confused with megabytes, MB, another way of measuring the size of a file. Since the number is a product of width × height, it grows much faster than linear resolution (which more closely corresponds with increased detail in an image). Conversely, even modest cropping of an image can cause a dramatic reduction in megapixels. Assuming a 3:2 aspect photo, a 24MP image has exactly twice the linear resolution of a 6MP image, and a 51MP image has roughly twice the linear resolution of a 12MP image.
The figure can be used as a rule-of-thumb for whether an image will fill a display or print sharply at a given size. A 16:9 HD television requires about 2MP whereas an Ultra HD television requires 8MP. A high-quality A4 print (at 300 ppi) requires 8MP and a double-page spread requires 16MP. A poster print may not require more MP than an A4 page, as it is generally viewed at a distance.
Images with a very high number of megapixels (e.g., over 50) may appear excessive, but provide opportunities for cropping, for downsizing (to increase sharpness), or for interactively zooming to examine the detail in a portion. An 8K UHD television displays about 33MP, so it benefits from cameras with 40MP or larger sensors.
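The arithmetic behind these figures is simple, as this small Python sketch shows (the print-size estimate is only a rule of thumb):

```python
width, height = 4928, 3264
print(width * height / 1_000_000)                 # 16.08... megapixels

# Halving the linear resolution keeps only a quarter of the pixels.
print((width // 2) * (height // 2) / 1_000_000)   # about 4.0 MP

# Rough check for an A4 print at 300 ppi (A4 is about 11.7 x 8.3 inches).
print((11.7 * 300) * (8.3 * 300) / 1_000_000)     # about 8.7 MP
```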
Noise is a random variation in the brightness or colour of individual pixels. At high levels it can obscure details (such as a fine texture) or give the illusion of detail (such as on a smooth surface) and make the image grainy or blotchy.
Brightness noise, also called luminance noise, is closer to the effect of grain in a film. At low ISO settings and in a well lit scene, this noise may not be apparent and no noise reduction may be required.
Colour noise, also called chromatic (or chroma) noise, causes splotches of red, green or blue. It is very objectionable and so all cameras and raw converter software apply some degree of colour noise reduction as standard.
Noise increases when the ISO setting is increased. It also increases if the image is brightened in post-processing, particularly in the shadows. Although some noise is due to the electronics in a camera, a large cause of noise is simply that light is made of individual photons, which hit the sensor in a random fashion. Increasing the exposure increases the amount of light recorded, and so lowers the amount of noise. The larger sensor in a DSLR is capable of capturing more light, so is less noisy than the tiny sensor in a smartphone.
Noise reduction or NR or denoising is... (to be written)
With photo editing programs, it is easy to get carried away and push the controls in the program far more than is necessary; that will result in an overprocessed photo. Depending on where your photos are supposed to be used, tastes for what can be seen as overprocessing can be very different. On social media, there are always fads and fashions for how a photo should look. Artists might use overprocessing to give photos for a special exhibition a certain look, for example very dark, with extreme contrasts or perhaps posterisation. Filters and special effects are fun for Christmas cards and birthday invitations.
Most professional photographers (and Commons) try to get the photo as close to how they saw, or experienced, the scene they shot. But even for such photos, it can be hard to know where to stop. Too much sharpness and you might get halos in the image, too much noise reduction and the photo looks like a watercolor painting, too much saturation makes the photo look like an old badly printed postcard, etc. Treat your photo gently and it will be timeless with the potential to be used in many different ways.
Oversaturated
[edit]See Saturation
P–T
[edit]Panning means rotating or moving the camera in order to follow a moving subject. The result is a clear, sharp picture of the subject, with the surroundings blurred by movement.
For the effect to be noticeable, it is usually necessary to have a relatively slow shutter speed. This can be achieved by shooting with a small aperture, by using an ND filter, or by shooting in relatively low light. Panning will often become necessary with fast moving subjects in low light in order to keep the ISO low and achieve good image quality.
Keep in mind that, for extreme panning, the subject may change its orientation and size throughout the exposure. As a consequence, even a perfect pan may leave parts of the subject with motion blur. For example, panning with the cockpit of an aircraft can leave the wingtips and the tail slightly blurred.
(to be written)
- For tips about How to make a panorama see Panorama on image guidelines
Vertical perspective correction adjusts an image where the camera was tilted upwards, and makes it appear as though the camera was held level with the ground. Converging verticals are made parallel, though parts of the image are stretched and cropped as a result. Similarly, horizontal perspective correction adjusts an image where the camera was rotated sideways at an angle to a wall. When adjusting verticals in a photograph, make sure the lines you fix truly are vertical: many street lamps and poles are tilted and taper towards the top, and old buildings may not be true.
We are more comfortable with seeing converging horizontals, which occur in linear perspective whenever we stand at an angle to a wall. We are less familiar with converging verticals because we generally look straight ahead rather than up, and because horizontal lines in our built world are much longer than any vertical line. If you stand at the bottom of a skyscraper and look up, the verticals do converge. Therefore this "correction" does not fix any fundamental fault or distortion but helps the image conform to expectations. Artistically, or in images where the subject is not a building, these concerns may matter less than being faithful to the viewpoint of the camera, which really is looking up or looking down at something.
Such software adjustments can produce images with a larger angle-of-view than any ultra-wide-angle lens. In that case, one is just trading one set of issues for another. An angle-of-view of 90° (or 45° above and below the centre) is the most one can generally capture without too many problems, though the edges of the frame may still appear unnaturally stretched. A rule of thumb for capturing a building with minimal distortion is to stand at least as far back from it as the building is tall or wide. Finding an elevated position, rather than shooting from ground level, will also help minimise the angle-of-view required.
The verb to photoshop originally simply referred to the process of editing a photograph using Adobe Photoshop (one of the most popular image editing applications). The meaning of the term has shifted a lot, though, so that today it usually refers to image manipulation that goes far beyond what would be considered regular post-processing and retouching, distorting reality in some way or another (regardless of which software has been used). While it can be used for some kinds of digital art (see example), "photoshopping" has over time gained such a negative connotation that for many people it now means that an image has been deliberately edited for deceptive purposes. As a consequence, most photographers and digital artists shun the term; instead, 'surreal photography' is often used to describe artistically manipulated, compiled or composed photos.
Pixel peeping
[edit]Pixel peeping is a term describing the analysis of an image at a pixel level, i.e. at a magnification of 100% (one pixel in the file corresponds to one pixel on the screen) or higher. The term is often used in a derogatory manner, criticising a focus on small technical defects of an image, implying a lack of regard for other aspects such as the composition and subject of the photo. For more info, see this essay.
Post processing or image editing is everything that is done with a photo after it has been taken. The alterations and improvements of the photo are done with editing programs. Some cameras have very simple editing programs installed in the camera that allow you to adjust the photo, but most post processing is done on a computer with a more powerful program. Some of the free programs are described at Commons:Software.
Keep in mind that there is no such thing as an "unprocessed" digital image. For RAW files, processing is essential in order to develop the raw sensor data into a visible image. For JPEG files produced by a camera, at least some basic processing is always applied by the camera firmware. Smartphone cameras sometimes even go a step further and automatically apply retouching algorithms in order to smoothen skin etc.
See also: Overprocessed
Posterisation occurs when the intended smooth graduation of tones in an image instead jump from one tone to another in obvious steps. At an extreme level, it can be a deliberate effect, like a poster printed with a limited set of solid colours. It is particularly noticeable in areas without much detail where the eye expects to see smooth graduations, such as in the sky or on skin. See also: Banding
A photo in JPG format produced by a camera (with normal settings) will typically not show any posterisation in the midtones, but it can appear when the image is adjusted by photo editing software. Thus posterisation can be an indication the photo has been over-processed.
Posterisation can also occur when the tones hit the extremes of black or white, or when the red, blue or green colour channels of the photo clip. The image is unable to display a brighter white or a redder red, and so a solid block of white, red, magenta, etc, results.
The GIF file format is prone to posterisation as there are only 256 colours available in any file.
A lens with a fixed focal length (compared with a "zoom" lens, which can vary its focal length). Advantages compared to a zoom are that it is cheaper to design a high quality prime lens, and easier to make one with a larger aperture. It will typically be smaller and lighter than a zoom covering the same focal length. The drawback to prime lenses is that one cannot vary the extent of the scene captured other than by moving closer to, or further away from, the subject, or by cropping the image afterwards. This downside can to some degree be alleviated through the use of secondary lenses like teleconverters or wide angle adapters.
Purple fringing (PF) or fringing is the term for an out-of-focus purple or magenta "ghost" image in a photograph. The fringing is generally most visible along the edge of dark areas adjacent to bright areas such as daylight or various types of gas discharge lamps.
The effect is usually a result of longitudinal (axial) chromatic aberration of the lens, as well as other ways in which lens design is not optimized for very short visible light wavelengths.
The quality of light refers to how the light falling on a subject affects the highlights and shadows and the transition between them. A hard light source creates a sharp, clearly defined edge between the lit areas and those in shadow. A soft light creates a smooth, gradual transition from highlight to shade. The difference is due to the apparent size of the light source from the point-of-view of the subject. For example, a flash gun is a small light source that creates a hard light when pointed directly at the subject. But if the flash gun is pointed up at the ceiling or a side wall, the light source becomes the large ceiling or wall and creates a soft light. The sun on a clear day creates hard light because, although it is huge, it is extremely far away, and appears as a small light source from our point-of-view.
A soft light may be so large as to light the subject from many angles, leading to few or faint shadows. A hard light can be positioned to create strong high-contrast shadows.
As well as reflection off a large bright surface, another way to soften light is to diffuse it. Sunlight through clouds on an overcast day creates a huge soft light in the sky. In the studio, a softbox is a large lightweight empty box of fabric that is placed on a light stand. The softbox has a hole for the flash gun or studio strobe in one side and a white mesh opposite this, facing the subject. Reflective material inside the softbox also helps spread the light evenly.
A raw file is generated by a camera and directly represents the information collected by the image sensor. It is not directly viewable without further processing (though it often embeds a small JPG which can be used to show a thumbnail or preview-quality image). The format is generally proprietary to the camera manufacturer and changes as new cameras are released, though the DNG format created by Adobe is occasionally used. Raw files are very large because they do not use lossy compression such as found in JPGs. Because of the large size, proprietary format and complex processing involved in working with raw files, they are not used for displaying images on the internet. Many advanced photo editing programs can read raw files or have a "raw converter" component within them (such as the Adobe Camera Raw companion to Photoshop).
Raw files can represent a larger dynamic range of brightness and gamut of colours than is possible in image formats like JPG. They allow much more adjustment to be made to an image. An analogy often made is that they are like "digital negatives" whereas JPG is like a print. Another analogy is that the raw file is literally the raw ingredients for a cake and the JPG is baked.
Most advanced photographers appreciate the extra flexibility when working with raw files. But the extra post-processing step makes them less attractive for photographers who must get their images published rapidly, such as sports press photography or when uploading photos to social media.
Reciprocal rule
[edit]The Reciprocal rule is a guideline for handheld photography, which states that to achieve an acceptably sharp image one should use a shutter speed of at least 1/f where f is the focal length of your lens. This rule dates from 35mm film and the focal length must be multiplied by 1.5 for APS-C crop sensor cameras and by 2 for Micro Four Thirds cameras. For example, on a full-frame camera, a 50mm lens requires a shutter speed of at least 1/50s. When combined with good camera holding technique, the guideline should achieve images that appear sharp when printed. If the subject can move, then a higher shutter speed may be required. If pixel peeping then this rule may prove inadequate. See also Rule of doubles and Image stabilisation.
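A minimal Python sketch of the rule (the function name is just for illustration; the crop factors are those mentioned above):

```python
def slowest_handheld_shutter(focal_length_mm, crop_factor=1.0):
    """Reciprocal rule: slowest shutter speed (seconds) likely to give a sharp handheld shot."""
    return 1.0 / (focal_length_mm * crop_factor)

print(slowest_handheld_shutter(50))         # 0.02  -> 1/50 s on full frame
print(slowest_handheld_shutter(50, 1.5))    # ~1/75 s on APS-C
print(slowest_handheld_shutter(50, 2.0))    # 1/100 s on Micro Four Thirds
```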
Reproduction ratio of 1:1
[edit]A 1:1 reproduction ratio means that the image formed on the film or sensor is the same size as the original object.
Lenses that can produce such a magnification are called macro lenses. They need to be designed to focus on subjects very close to the front element of the lens (the exact distance depends on the focal length). In most other lenses, the maximum reproduction ratio that can be achieved is between 1:3.5 and 1:9, making them unsuitable for true macro photography.
Rembrandt lighting features in studio portrait photography and is named after the style of portraiture used by the Dutch painter Rembrandt. The key light is positioned at 45° to the side, a little higher than eye-level, and pointing down at the subject. A fill light at half-power, or a reflector, is positioned at 45° on the other side. The arrangement produces a downward pointing triangle of light below the eye on the side of the face that is in shadow. The angle and height of the key light produce a catch light in both eyes.
Image resolution is the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail. The resolution of an optical system describes how close objects in real life can be while still being discernible as separate in the resulting photograph. Because acceptable levels of blurriness vary, resolution is not easy to measure definitively.
The Rule of thirds is a disputed way of composing a photo by dividing it into nine equal parts by two equally spaced horizontal lines and two equally spaced vertical lines. It was invented in 1797 by John Thomas Smith, misinterpreting artist Sir Joshua Reynolds, who stated that if there are two distinct areas of different brightness in a picture, one should dominate and they should not be equal in size. It is taught to beginner photographers to discourage them from simply sticking the subject or horizon in the centre of the frame all the time. Some have considered it as an approximation to the Golden ratio and the Golden spiral. Camera viewfinders and photo editing software often offer such framing and cropping guides. In practice, it is not used as a "rule" by any notable photographers, who have learned instinctively how to balance elements in the frame. Some have mockingly noticed that the human brain enjoys finding patterns and "rules" in nature, and overlaying any arbitrary divisions on a selection of famous images will appear to indicate alignment with an invented "rule".
- Comments about Rule of Thirds in media
The book "The Photographer's Eye: Composition and Design for Better Digital Photographs" by Michael Freeman, widely considered as one of the best contemporary guides to photographic composition, selling over a million copies [1], does not contain this rule. Indeed Freeman is intolerant of any "recommended" proportions and regards the Rule of Thirds as "nonsense" and a "rather silly instruction" "followed with mediocrity by artists and photographers who lack imagination", concluding that "it’s probably the worst piece of compositional advice I can imagine." [2]
Landscape photographer Joe Cornish calls it a "lamentable formula", a concept "absurd as it is ineffective". He notes that "dividing any four-sided figure into nine equal parts, giving four intersections, has a reasonable likelihood that something significant in the image will appear to coincide with third divisions and their intersections", which "may be taken by proponents of RoT as 'evidence' of its vital significance". [3]
Another notable landscape photographer, Ansel Adams, says "The so-called rules of photographic composition are, in my opinion, invalid, irrelevant and immaterial".
Some comments on Rule of Thirds and when to avoid it can be found in a PetaPixel article.
Rule of doubles
[edit]The Rule of doubles is a modern version of the reciprocal rule, invented by Tony Northrup, that takes advantage of image stabilisation and the fact that digital camera frames cost nothing. In low-light situations, the reciprocal rule forces the photographer to use a wide aperture, with consequential shallow depth of field, and/or very high ISO, with consequential high levels of noise. The rule of doubles aims to take sharp hand-held photos of a low-light static scene with the lowest possible ISO and useful depth of field.
The rule of doubles starts with the reciprocal rule, where the shutter speed is chosen at 1/f, where f is the focal length of the lens. The aperture is chosen and fixed to suit the image being created and limitations of the lens used. The camera is set to Auto ISO and continuous shooting (i.e., it keeps taking shots as long as the shutter button is held down). The photographer takes one shot. Then the shutter speed is doubled in length, which will cause the camera to halve the ISO. The photographer takes two shots. The shutter speed is again doubled in length and the ISO will again be halved by the camera. The photographer takes four shots. This repeats as long as the photographer and subject have patience. The photographer can then review their photos at 100% on the camera, looking backwards through the session, and stop at the first sharp shot. All other photos in the session can be deleted as this one will have the lowest ISO and so lowest noise.
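The progression of shutter speeds, ISO values and shot counts can be written down explicitly. The Python sketch below only illustrates the sequence described above; the starting ISO, the number of steps and the ISO 100 floor are assumptions chosen for the example, not part of the rule.

```python
def rule_of_doubles(focal_length_mm, start_iso=3200, steps=4, min_iso=100):
    """Yield (shutter speed in seconds, ISO, number of shots) for each step of the session."""
    shutter = 1.0 / focal_length_mm            # start at the reciprocal rule
    iso, shots = start_iso, 1
    for _ in range(steps):
        yield shutter, iso, shots
        shutter *= 2                            # double the exposure time ...
        iso = max(iso // 2, min_iso)            # ... so Auto ISO roughly halves the gain
        shots *= 2                              # and take twice as many frames

for shutter, iso, shots in rule_of_doubles(50):
    print(f"{shots} shot(s) at 1/{1 / shutter:.0f} s, ISO {iso}")
```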
Saturation describes the intensity of colors in an image. In most image editing programs you can modify the saturation. For most photos, it is best not to modify the saturation too much. Too much increase in saturation makes the picture unrealistic and will also enhance color fringes or chromatic aberration. Too much decrease in saturation results in a black and white picture. However, most photo processing programs have a special Black & White function for this purpose, giving better results and higher contrast B&W.
Advanced editing programmes such as Photoshop, GIMP, etc, also have select color functions, which allow a single color to be adjusted on its own. Example: for a red rose in a field, the red can be increased while the green parts remain natural.
Shadows
[edit]The shadows represent a lower portion of the histogram, below the mid-tones and above black. A common post processing step is to "lift the shadows" to uncover more detail for the viewer. This may result in increased noise appearing in those areas, since the raw file records much less information in that part of the tone curve compared with the highlights.
Sharpness describes how accentuated/contrasted the edges of objects are. If an image is not sharp enough, lines and edges that have clear boundaries in reality appear blurry in the photo. Oversharpening in software can cause halos/glowing margins around edges. The term is often misused when someone means to describe resolution.
The shutter in a camera opens and closes rapidly to expose the sensor to light for a chosen amount of time. A fast shutter speed exposes the sensor for a short duration of time, freezing motion and minimising the chance of camera shake. A slow shutter speed can blur motion or increase the effect of camera shake. Doubling or halving the shutter speed changes the exposure by a full stop (1 EV). A 1/250s shutter speed lets in twice the amount of light as a 1/500s shutter speed. Many cameras allow the speed to be changed in thirds of a stop (1/3 EV).
DSLR cameras usually have a "focal-plane shutter", which consists of two curtains directly in front of the sensor. The first curtain drops down to expose the sensor to light and then the second curtain drops down to hide the sensor. For shutter speeds faster than around 1/160s or 1/250s, it is necessary for the second curtain to start coming down before the first curtain has fully opened — at least some of the sensor is covered at any given moment. The consequence is that above this speed, called the flash-sync speed, a normal camera flash cannot be used since part of the image would appear dark. When the scene is mostly lit by the flash, it may not be necessary to use a faster shutter than this, as the exposure duration will actually be determined by the length of the flash pulse. But if flash is combined with ambient light, such as fill-flash in daylight, then this is a limitation. Some flash guns have a mode called high speed sync (HSS) where many short pulses of flash are used rather than one powerful pulse. This ensures the image is evenly lit above the flash sync speed.
An alternative to a physical shutter is an electronic shutter where the sensor is turned on and off. It is currently possible to rapidly turn the entire sensor on but not to rapidly turn the entire sensor off. Many cameras offer an electronic first curtain shutter mode, which uses the physical shutter to end the exposure. This can prevent shutter shake blurring the exposure and also extends the lifespan of the shutter. When recording video, the camera uses a fully electronic rolling shutter. This scans the sensor from top to bottom with each video frame, so the top of the frame is exposed at a slightly different time to the bottom. For fast moving subjects (such as propeller blades) or when panning the camera, this can cause a distorted image. A video camera with an electronic "global shutter" avoids this problem.
The physical shutter in most cameras can automatically take exposures from 30s to 1/8000s. For exposures longer than this, the Bulb (B) setting takes an exposure for as long as the shutter button is held down. For exposures shorter than this, flash photography is useful, as the duration of a flash can be extremely short.
The shutter button (or shutter release) is a button on the camera that makes the camera take the photo when it is pressed, usually with the right index finger. When the shutter duration is set on the camera (automatically or manually), the shutter will open and close even if the finger is held down on the button. If the shutter speed is set to bulb (B) then the shutter remains open while the shutter button is depressed, and closes when released.
If the camera is set to single shot mode, then no more photographs are taken until the button is released and pressed again. If the camera is set to continuous shooting mode, then the camera will keep taking consecutive photographs while the button is held down (though there is usually a limit). These are called drive modes, which dates from when a motor drive was needed to keep advancing the film for continuous shooting.
Pressing the shutter button can make the camera shake a little bit even when the camera is mounted on a tripod. For this reason, there are various ways to fire the shutter without touching the camera. The most basic is to set a short delay (2s and 10s are common) between pressing the shutter button and the photo being taken. A 2s delay is usually enough to prevent most camera shake. If the floor or ground the camera is standing on is wobbly, you will need more time for the camera to become steady. A 10s delay is long enough for the photographer to run around and join in a group photograph. The 2s delay can also be good to use in situations where the camera is handheld, such as in cold weather when you wear gloves and can't feel the buttons very well, or in strong wind when it's hard to hold the camera steady.
A simple device to prevent camera shake is a shutter release cable. Originally this was a flexible wire inside a plastic sleeve that was screwed into a hole in the shutter button. When the wire was pushed by a plunger, it triggered the shutter button mechanism. Modern digital cameras use an electrical cable instead.
Alternatively the shutter may be fired without any cable, using infra-red or wireless control. An infra-red remote control can be bought very cheaply. A more advanced form of remote control is an intervalometer, which allows multiple shots to be fired automatically. This small electronic device lets the photographer set the initial delay, the gap between shots, and the duration of each shot. This last variable is useful for long exposures, but the camera's own shutter setting can be used for short exposures. Some cameras have an intervalometer built into them.
There are many advanced release controls for enthusiasts. These may react to light (e.g., to photograph lightning), movement (e.g., to photograph a water drop) or a warm body (e.g., to photograph a wild animal). Some cameras have a mode where the shutter is fired when the subject smiles.
Stars in the night sky look like bright points to the eye, but in long-exposure photography star trails appear because the earth rotates.
The 500 rule is an approximate guide to the longest shutter speed to use when photographing the night sky to avoid star trails. If you want stars to appear as bright dots rather than dashes then ensure the shutter speed is no longer than 500 divided by the focal length of the lens. As with similar rules, you have to adjust the focal length if not using a full-frame sensor camera (× 1.5 for APS-C and × 2 for Micro Four Thirds). For example, if using a 24mm lens on a full-frame camera, then the longest shutter speed should be 20s. This rule may be inadequate if pixel-peeping and does not take into account the declination of the stars in the frame. Phone apps like Photo Pills can calculate the exposure more accurately.
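A minimal Python sketch of the calculation (the function name is just for illustration; the crop factors are those given above):

```python
def longest_astro_shutter(focal_length_mm, crop_factor=1.0):
    """500 rule: longest shutter speed (seconds) before stars start to visibly trail."""
    return 500 / (focal_length_mm * crop_factor)

print(longest_astro_shutter(24))        # ~20.8 s on a full-frame camera
print(longest_astro_shutter(24, 1.5))   # ~13.9 s on APS-C
print(longest_astro_shutter(14))        # ~35.7 s with an ultra wide 14 mm lens
```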
Alternatively, a photographer may want to create a photo with long star trails. Often the best effect is achieved by making sure a celestial pole is in the frame, and there is an interesting feature on the ground that is sufficiently well lit (either by moonlight or light painting). With film photography, this was done by taking extremely long exposures. However, in digital photography, the sensor heats up over time and becomes noisy. Therefore it is more common to take lots of consecutive long exposures (e.g., 30s) and combine the frames in a program such as Photoshop. If doing this, the "long exposure noise reduction" feature of the camera should be turned off, otherwise gaps will appear between the shots. An intervalometer can be used to automatically fire these consecutive photos.
The starburst effect makes very bright (point-) light sources appear to emit rays of light, like a star. This is considered a defect in astronomy, but is generally accepted as normal or even desired in some areas of night photography. The optical reason for the starburst effect is diffraction, and in the scientific world the "rays" are called "diffraction spikes". Because of that, the aperture of the camera lens plays a major role in the formation of starbursts. As a rule of thumb:
- Smaller apertures (higher f-numbers) lead to more diffraction and thus to more pronounced starbursts.
- Straight aperture blades tend to produce more clearly defined starbursts than rounded ones.
- Lenses with an odd number of aperture blades produce more spikes: a lens with an even number of aperture blades produces one spike per aperture blade, while a lens with an odd number of aperture blades produces two spikes per blade (see the small sketch after this list).
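The blade-count rule from the list above can be stated in a few lines of Python (purely illustrative):

```python
def diffraction_spikes(aperture_blades):
    """Spikes from opposite blades overlap when the blade count is even; odd counts do not."""
    return aperture_blades if aperture_blades % 2 == 0 else aperture_blades * 2

print(diffraction_spikes(8))   # 8 spikes
print(diffraction_spikes(9))   # 18 spikes
```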
Optical filters that provoke this effect were popular in the film era (and are still available new today), but they tend to have an adverse effect on overall image quality. Some digital cameras have a star filter as one of the in-camera effects. Pixel-level image-manipulation applications like Gimp or Photoshop can also be used to simulate the effect. There's even a smartphone app that does nothing but make shiny things sparkle even more for maximum Instagram bling-bling [4]. Just like artificial lens flare, all of these artificial methods of creating starbursts would probably be considered to be "too much" for most purposes on Commons, though.
Stitching is the process of combining multiple photographs showing different parts of a scene, thereby creating one large, seamless image. This has two main benefits:
- It allows a photographer to take an image of a higher total resolution than with a single photograph, and
- it allows for a wider field of view than the widest one possible with the current lens. For example, if the widest focal length of the lens is 18 mm and you want to take an image at 14 mm focal length, the only way to achieve that with this lens is by taking four images at 18 mm, changing the orientation between those 4 photos to cover the entire desired range, and stitching those photographs together using software later.
A stitching error is a point in the image where the seam between two images is still noticeable in the final result. This is usually the result of
- differences in exposure
- a slight shift in the camera position between shots (more precisely a shift of the point of no parallax of the lens),
- a failure of the stitching software to correctly align the photographs because it could not recognize good overlapping features, or
- a failure of the stitching software to correct for lens distortion, vignetting or other aberrations.
To avoid the most common errors, you should shoot stitched images with a locked manual exposure whenever possible to achieve matching brightness between frames, and use a tripod to make sure the camera does not move between frames. Manual focus can also help guarantee that the focus does not shift between shots. Changes in the position of the point of no parallax are more critical the closer you are to the closest object in the photograph. Therefore, it is sometimes (especially for indoor scenes) necessary to use specialized panoramic rigs to rotate the camera around precisely the right point, which is usually somewhere around the front element of the lens.
Stitched images are sometimes also referred to as panoramas, though technically, that term only refers to the wide aspect ratio common among images that have been stitched together horizontally. Consequently, a panorama can also be achieved by cropping a regular wide angle image (see Panorama).
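For simple cases, stitching can also be tried with free software libraries as well as dedicated programs. The sketch below uses the high-level stitcher in OpenCV's Python bindings; the file names are hypothetical, and the frames are assumed to overlap generously and share the same exposure.

```python
import cv2

# Overlapping frames, shot left to right with locked exposure and focus.
frames = [cv2.imread(p) for p in ("pano_1.jpg", "pano_2.jpg", "pano_3.jpg", "pano_4.jpg")]

stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed, most likely because of too little overlap between frames")
```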
Making a stitched image - example
The photographer's own comment about making a stitched image:
"Having planned this for a while, I finally had an evening with suitable light and weather conditions. The light conditions changed very quickly so it wasn't easy to find the right settings to have a short exposure (to avoid that the first frames of the stitched mosaic were differently exposed compared to the last frames) and to get the water of the river Spree smooth (for this purpose I'd preferred a longer exposure even more but then the light situation would have change too much in-between the single exposures). For the same reason I used a 35mm lens in this case, instead of the 50mm: I wanted to have less frames to take."
"The dynamic range of the scene was very high so I had to use the HDR technique. To have a short exposure I decided not to take five exposures as usual, but only three exposures (-2 EV, 0 EV, +2 EV) for each frame. For the stitching itself I had to play around a little bit with different kinds of projections. In the end I chose a 'Vedutismo' projection because the rectilinear projection caused extreme stretching for such a wide field of view. Regarding the crop I decided to use a 16:9 ratio because 2:3 would have ended in too much empty space at the top and the bottom. Placing the buildings right in the middle of the picture follows the rule of thirds."
For more tips about making good stitched images see Panorama Howto (at the moment only in German).
Strip or slit photography. The best way to understand how strip photography works is to imagine that you are standing with your camera in a fixed position behind a door that is only open a very small crack. You rapidly take photos of what you can see through the crack from that point of view. At first nothing happens so your photos will all be a very thin vertical line that looks the same in each frame. Then something, a streetcar, passes by the crack and you get a small piece of the vehicle in each of your photos. When it has passed by the opening, the scene may go back to what it was before and once again all the photos you take of the crack will look the same.
After shooting this sequence, you take all the photos and crop away all the black around the crack in the doorway, leaving you with lots of photos of just the crack and you paste them side by side in chronological order. As you do this a photo of the streetcar as it passed the crack will start to appear. If you were taking photos at the same time-interval during the shooting session and the car moved at a constant speed, you'd end up with a photo like the one in this example.
Imagine if the car had slowed down, then you would have got more photos with it in the crack towards the end of the session and the back of it would have been sort of "drawn out". And if it had stopped in front of the crack all your photos until the end of the session would have had a bit of streetcar in them that looked the same until the end. This is done with a special camera that shoots only a slit that is two pixels wide and the photos are taken in an automatic fashion to make sure the time between shots is the same and the shots are merged into a photo with software, but the principle is the same as photographing through a crack in the doorway.
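The "crack in the door" idea translates almost directly into code. The following Python sketch (file names, frame count and slit position are all hypothetical) cuts a two-pixel-wide column out of every frame and pastes the columns side by side in chronological order:

```python
import numpy as np
from PIL import Image

# Frames shot at a fixed interval from a fixed camera position.
frames = [np.asarray(Image.open(f"frame_{i:04d}.jpg")) for i in range(500)]

slit_x = frames[0].shape[1] // 2     # the "crack": a narrow column in the middle of the frame
slit_width = 2

strip = np.hstack([f[:, slit_x:slit_x + slit_width] for f in frames])
Image.fromarray(strip).save("strip_photo.jpg")
```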
A studio shot refers to a photo taken indoors where a subject is set up and photographed under controlled conditions. A studio shot can be made under conditions as simple as photographing something on a kitchen worktop with the overhead lamp as light source, or it can be done in a dedicated room with special backdrops, studio lights, screens, reflective surfaces and other professional equipment.
The sunny 16 rule is a rule for exposure on a sunny day that enables a reasonable exposure to be taken without any light meter: On a sunny day, for a subject in direct sunlight, set the aperture to f/16 and the shutter speed to the reciprocal of the ISO setting (or film speed). So with ISO 100, an appropriate shutter speed would be 1/100s.
The looney 11 rule uses the same formula but with an aperture of f/11, and is suitable for photographs of the moon's surface.
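As a minimal illustration of the rule's arithmetic (the function name and the ISO values are just examples), the sketch below turns an ISO setting into the aperture and shutter speed suggested by the sunny 16 and looney 11 rules.

```python
def rule_exposure(iso, moon=False):
    """Return (f-number, shutter speed in seconds) suggested by the rule."""
    aperture = 11 if moon else 16      # f/16 for sunlight, f/11 for the moon
    shutter = 1.0 / iso                # reciprocal of the ISO setting
    return aperture, shutter

for iso in (100, 200, 400):
    f_number, shutter = rule_exposure(iso)
    print(f"ISO {iso}: f/{f_number} at 1/{iso} s")   # e.g. ISO 100: f/16 at 1/100 s
```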
A teleconverter (abbreviated TC, also called an extender by Canon and a rear converter by Pentax) is an accessory that is placed between the camera and the lens. It contains lens elements that increase the focal length of the system by a certain amount, usually expressed as a multiplication factor. For example, a 2× TC attached to a 200mm lens gives an overall focal length of 400mm; the same TC combined with a 50mm lens gives an overall focal length of 100mm. 1.4× and 2× TCs are the most common, but 1.7× and 3× versions also exist. Teleconverters were popular during the film era for their convenience, but are much less commonly used today, as they can have rather strong adverse effects on image quality and zoom lenses have become significantly better optically. Today, TCs are probably most commonly used in sports and wildlife photography, where they can provide extra reach to the already very long telephoto prime lenses used in these genres.
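The focal length arithmetic above can be sketched as follows. Note that a teleconverter also makes the effective maximum aperture slower by the same factor (a general optical property, not stated above); the function name and the example values are just illustrations.

```python
def with_teleconverter(focal_length_mm, max_f_number, factor):
    """Effective focal length and maximum f-number with a teleconverter attached."""
    return focal_length_mm * factor, max_f_number * factor

print(with_teleconverter(200, 2.8, 2.0))   # 200mm f/2.8 + 2x TC -> (400.0, 5.6)
print(with_teleconverter(50, 1.8, 2.0))    # 50mm f/1.8 + 2x TC  -> (100.0, 3.6)
```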
Strictly, a "telephoto" is a design of lens where it is physically shorter than its focal length, but the term is commonly used to refer to lenses with a long focal length and a narrow field of view.
Texture photography refers to photographing a surface, pattern, repetition or clutter rather than a scene or composition. The key components of such photos are colours, light and depth. The texture usually fills the whole image, giving the impression that the photo is just part of something that goes on forever outside the frame.
Tilt
Tilt, rotation or rotated refer to cases where the vertical axis of an image is not aligned with the vertical in the scene that was photographed. This results in horizons that are not level, and in buildings and trees that lean to one side. Even in the absence of a visible horizon or perfectly vertical structures, a tilted image will usually look unbalanced.
In rare cases, tilt can be intentional and make an image more interesting. This is called a Dutch angle.
When describing the amount that an image is tilted, people usually give an amount of x degrees of tilt to the left (rotated cw, short for clockwise) or right (rotated ccw, short for counterclockwise). To correct the tilt, a rotation by that amount in the opposite direction must be applied.
A level horizon is only one way of determining if an image is tilted, and this method is often not practical due to the horizon not being visible in the image — either because it is not in frame or because it is hidden behind hills, structures or objects. In these cases, a good way of determining the tilt is to find structures that are perfectly vertical (often found on buildings) or reflections of features in water. Because water will almost always be horizontal due to the force of gravity, a line through a feature above the water and its reflection in water will usually be perfectly vertical.
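As a small sketch of the correction step, assuming the Pillow library and a made-up file name and angle: an image tilted 2 degrees clockwise is corrected by rotating it 2 degrees counterclockwise (in Pillow, a positive angle rotates counterclockwise).

```python
from PIL import Image

img = Image.open("tilted.jpg")
tilt_cw_degrees = 2.0   # assumed amount of clockwise tilt found in the image

# Rotate counterclockwise by the same amount to level the image.
corrected = img.rotate(tilt_cw_degrees, resample=Image.Resampling.BICUBIC, expand=True)
# expand=True keeps the whole rotated frame; crop away the empty corners afterwards.
corrected.save("corrected.jpg")
```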
Time of day
The time of day affects the quality, colour and direction of light.
- Midday is when the sun is at its highest in the sky. In summer, close to the equator, the sun may be directly overhead; further from the equator, and in winter, it may never rise very high. Midday light is generally considered the worst for photography, unless harsh, bright light is wanted for effect.
- Daytime is when the sun is above 6° and brightly lighting the sky and ground. It isn't a special period of the day for photography, other than that there is plenty of light to work with.
- Sunrise and Sunset are the periods where the sun is above but close to the horizon. Depending on the angle of view and the degree of cloud cover the sky may glow orange or red. The sun appears to be larger in the sky than when higher up.
- Golden hour overlaps sunset/sunrise and civil twilight. The sun is approximately between 6° and -4°. The light is low, very warm and relatively soft, with long shadows, which can be appealing. When photographing in such conditions it is important not to let the camera or software remove this warmth when setting the colour temperature.
- Twilight is the period when the sun is below the horizon (between 0° and -18°) but the sky is still lit by it; this is one of the best times for photography. Because the sun itself is not in the sky, the light is very soft. There are several stages of twilight.
- Civil twilight is when the sun is between 0° and -6°. The colour of the sky changes rapidly, with red, orange and yellow as well as blue and purple. Street and car lights are on, but there is still enough sunlight to see without them. The planets are visible but few stars are, and the contrast between the moon and the sky is low enough that it is practical to expose for both the moon and the land.
- Nautical twilight is when the sun is between -6° and -12°. The sky may be dark blue with more stars, but it is no longer possible to expose for both the moon and the land. It is difficult to see without artificial light or a very bright moon.
- Astronomical twilight is when the sun is between -12° and -18°. The sky is even darker with many stars. The phase and position of the moon becomes more important than the sun.
- Dawn is the beginning of twilight in the morning.
- Dusk is the end of twilight in the evening.
- The Blue hour overlaps civil and nautical twilight. The sun is approximately between -4° and -8°. The clear sky is a deep blue, and the horizon may show a gradient of colours towards orange. Despite the name, in most populated areas the "blue hour" lasts only between 15 and 30 minutes. Unlike the black night sky, a blue-hour sky is attractively colourful, and buildings and other land features can be silhouetted against it.
- Nighttime occurs when the sun is so far below the horizon (below -18°) that the sky is black. Since the sky is so dark, the faintest stars and nebulae may be visible. In urban areas, light pollution can spoil the nighttime sky, producing a sickly yellow-orange glow and making it hard to see the stars.
A tripod is a three-legged stand that holds the camera completely still. It allows long exposures without camera shake and keeps the framing identical between shots, which helps with HDR and stitched images. The head is the part on top of the legs to which the camera is attached; it lets the camera be aimed and then locked in position.
U–Z
Upsampling
The terms upsampling or upsizing refer to the process of increasing the resolution of an image through interpolation, i.e. stretching the same image across more pixels. This can be done either from the original resolution of the camera sensor or from an image that was previously downsampled. When you look at an upsampled image at 100%, you can usually see the pixels clearly.
In almost all cases, upsampling makes the file larger but adds no additional information. For this reason, images on Wikimedia Commons should, in general, not be upsampled. In many cases, some information will actually be lost when an image is upsampled (depending on the upsampling ratio and the interpolation method that is used).
An exception is image-processing steps in which a geometric transformation is applied to the image; here, upsampling can retain more information than keeping the resolution the same. This could be applying perspective or distortion correction (ideally done based on a RAW file) or creating a stitched image. In these cases, upsampling can reduce the amount of information that is lost through the transformation.
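A minimal sketch of upsampling, assuming the Pillow library and a hypothetical file name: the image is stretched to twice its pixel dimensions with Lanczos interpolation. The file gets bigger, but no real detail is added.

```python
from PIL import Image

img = Image.open("photo.jpg")
# Double the width and height; the extra pixels are interpolated, not captured.
upsampled = img.resize((img.width * 2, img.height * 2), Image.Resampling.LANCZOS)
upsampled.save("photo_2x.jpg")
```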
Vignetting is a reduction of an image's brightness or saturation at the periphery compared to the image centre or, in plainer words, a soft darker border along the sides of the photo. Vignetting can be a result of how the light enters the camera lens, or it can be added in post processing as an effect. The vignette can be of any size, intensity and shape. It can be just darker corners, or a band around the whole edge of the photo, leaving a square or round central area.
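As a rough sketch of a post-processing vignette (assuming NumPy and Pillow, with a made-up file name and strength), the code below darkens each pixel according to its distance from the image centre.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
h, w = img.shape[:2]

# Normalised distance from the centre: 0 at the centre, 1 at the corners.
y, x = np.ogrid[:h, :w]
dist = np.sqrt(((x - w / 2) / (w / 2)) ** 2 + ((y - h / 2) / (h / 2)) ** 2) / np.sqrt(2)

strength = 0.6                     # 0 = no vignette, 1 = black corners
mask = 1.0 - strength * dist ** 2  # darkening factor per pixel
out = np.clip(img * mask[..., None], 0, 255).astype(np.uint8)
Image.fromarray(out).save("photo_vignette.jpg")
```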
Weather
Weather influences the light and mood of a scene, and can become the subject itself.
- Sunny weather produces hard light with blue skies and a generally happy mood and warm feeling. There is a strong colour temperature difference between areas lit directly by the yellow sun and areas in shade lit by the blue sky. A clear blue sky creates a simple background.
- Cloudy weather produces an interesting sky, with variations in brightness and colour. The land may change in brightness and warmth depending on which areas are under cloud. Clouds moving across the sky can make it hard to take a stitched photo, as each frame may vary in brightness and the software may get confused trying to align moving clouds. Very long exposures, using an ND filter, cause moving clouds to form streaks across the sky.
- Overcast weather is often considered unattractive, unexciting and depressing. It produces very soft, neutral, consistent light with no shadows, which may suit some photography. Generally photographers try to avoid including an overcast sky in the frame, as it may be either a boring grey or become blown white. However, a mix of dark and light clouds can be interesting, and very dark storm clouds can evoke a strong mood. Overcast weather is considered best for photographing in woodland, as sunny weather produces too much contrast through the leaves. The consistency of the light also helps when photographing interiors and stitched photography.
- Rain can be difficult to capture in a still photograph, where it can appear as a blur over the subject; a fast shutter speed such as 1/500s will capture the drops. Raindrops become more visible and attractive if backlit. Raindrops splashing, or retained on foliage, can make attractive subjects. Rain produces still puddles for reflection photography, and the chance of a rainbow.
- Snow falling can be as hard to capture as rain. The main attraction of snow is the way it transforms the landscape and covers everything. Snow looks best when pristine, without random footprints or dirt from traffic. The camera may expose snow as grey, so exposure compensation may be required.
- Foggy weather produces a strong loss of contrast and colour with increasing distance, which tends towards grey white. This can provide an isolating effect on a close subject. A temperature inversion produces a layer of fog in a valley, which can look spectacular if photographed from above (such as a hill or mountain top).
- Windy weather requires the photographer to find a subject that demonstrates movement. A slow shutter speed will blur branches swaying in a breeze. A fast shutter will freeze a large wave crashing on the shore in a storm.
- Icy weather produces a variety of effects at all levels of scale. At the macro level, individual crystals can be fascinating. Coloured leaves gain a white rim of frost and can be isolated on white. Back lighting can transform ice into a natural light crystal. Melting ice combines the frozen and the moving, the sharp and the rounded.
- Thunderstorms often come with dramatic dark clouds and the opportunity for lightning or hail.
Adjusting the white balance in a photo involves judging the qualities of the illuminating light, which is assumed to be white. There are two factors to the quality of white light. The first axis ranges from yellow to blue and is called the colour temperature. The second axis ranges from green to magenta and may sometimes be referred to as "tint". Incandescent light is often quite yellow, whereas direct sun has more blue. Fluorescent lights can have a slight green tint.
A camera's "auto white balance" takes a guess at adjusting this white balance, with varying degrees of success. By photographing a neutral gray card, it is possible to determine the "correct" white balance for a scene more precisely. This is especially important when accurate colour reproduction is desired, for example when photographing an old painting in a museum. In other cases, using the technically correct white balance may hinder the attempt to capture a certain mood in a scene. For example, it may wrongly remove the yellow desired in a scene shot at the "golden hour".
Sometimes, a scene is lit by a mix of light sources with quite different white qualities, such as sunlight from a window mixing with the glow from lamps inside a room, which makes adjusting the photo difficult. For this reason, it may be better to turn off indoor lights, or to use a colour gel (filter) over the flash.
A raw file contains sensor data that has not yet been adjusted for the white balance, though it may record the settings made or chosen by the camera at the time. This allows for the white balance to be freely adjusted during post processing.
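As a simple sketch of correcting the white balance from a gray card (assuming NumPy and Pillow; the file name and the coordinates of the gray-card patch are made up), each colour channel is scaled so that the card comes out neutral.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("scene_with_gray_card.jpg").convert("RGB"), dtype=np.float32)

# Assumed coordinates of a small area lying on the gray card.
patch = img[1200:1250, 800:850]
channel_means = patch.reshape(-1, 3).mean(axis=0)   # average R, G, B on the card
gain = channel_means.mean() / channel_means         # per-channel scale factors

balanced = np.clip(img * gain, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("scene_balanced.jpg")
```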
A zoom lens is a lens that can vary its focal length (in contrast to a "prime" lens, which has a fixed focal length). Zoom lenses are popular and convenient as they allow you to easily change the extent of the scene captured without having to move or change lenses. However, they are heavier and larger than prime lenses, and it is expensive to make them of high quality. The maximum aperture of a zoom lens is usually smaller than that of a similar prime lens, and it may vary depending on the focal length the lens is set to. A zoom lens that maintains the same bright maximum aperture throughout the focal length range is known as a constant (or fixed) aperture zoom, and this is considered a desirable property. The "kit lens" commonly sold with a DSLR often has a small, variable maximum aperture, as well as other optical compromises to achieve a low cost.
See also
External links
- Glossary and help (German Wikipedia)
- Glossary of Digital Photography Terms by Allan Weitz
- Web browser color management tutorial
- To find out how to fix any of the problems mentioned above, YouTube is full of tutorials for almost any post-processing program. Just enter the thing you want to fix and the name of your program in the search box on the site.
Suggestions for terms to add to the list
- Please add: (this term)
- Please add: (this term)