Photogrammetry


Introduction

What is photogrammetry?

Photogrammetry (Greek: phot (light) + gramma (something drawn) + metrein (measure)) is the science of making measurements from photographs.

Basic example: the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image and then multiplying it by the scale number (the denominator of the image scale).
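For example, if the photo scale is 1 : 5,000 and two such points are 20 mm apart in the image, their distance in object space is 20 mm × 5,000 = 100 m.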

Typical outputs: a map, drawing or a 3d model of some real-world object or scene.

Related fields: Remote Sensing, GIS.

Main task of photogrammetry

If one wants to measure the size of an object, let’s say the length, width and height of a house, then normally one will carry this out directly at the object. However, the house may no longer exist (e.g. it was destroyed) while some historic photos remain. Then, if one can determine the scale of the photos, it should still be possible to obtain the desired data.

Of course, photos can provide information about objects, but this information comes in different kinds. For example, one may obtain qualitative data (the house seems to be old, the walls are painted light green) from photo interpretation, quantitative data as mentioned before (the house has a base size of 10 by 14 meters) from photo measurement, or information that adds to one's background knowledge (the house has elements of classic style), and so on.

Photogrammetry provides methods to obtain information of the second type: quantitative data. As the term already indicates, photogrammetry can be defined as the “science of measuring in photos”, and is traditionally a part of geodesy, belonging to the field of remote sensing (RS). If one wants to determine distances, areas or anything else, the basic task is to obtain the object (terrain) coordinates of any point in the photo, from which one can then calculate geometric data or create maps.

Obviously, from a single photo (a two-dimensional plane) one can only obtain two-dimensional coordinates. Therefore, if three-dimensional coordinates are needed, a way to recover the third dimension must be found. This is a good moment to recall the properties of human vision. Humans are able to see objects spatially and can therefore estimate the distance between an object and themselves. How does this work? At any moment the human brain receives two slightly different images, resulting from the different positions of the left and right eye and from the central perspective of each eye.

Exactly this principle, the so-called stereoscopic viewing, is used to obtain three-dimensional information in photogrammetry: if there are two (or more) photos of the same object taken from different positions, one can calculate the three-dimensional coordinates of any point that is represented in both photos, by setting up the equations of the rays that pass from the image projections of the point through the object point itself and then calculating their intersection. The main task of photogrammetry can therefore be defined as follows: for any object point represented in at least two photos, calculate its three-dimensional object coordinates. Once this task is fulfilled, it is possible to digitize points, lines and areas for map production, or to calculate distances, areas, volumes, slopes and much more.
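A minimal numerical sketch of this ray-intersection step, assuming the projection centers and unit ray directions of two oriented photos are already known (all names and numbers are illustrative):

  import numpy as np

  def intersect_rays(c1, d1, c2, d2):
      # Approximate intersection of two (possibly skew) rays as the midpoint of the
      # shortest segment between them; c1, c2 are projection centers, d1, d2 unit directions.
      A = np.array([[d1 @ d1, -d1 @ d2],
                    [d1 @ d2, -d2 @ d2]])
      b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
      t1, t2 = np.linalg.solve(A, b)           # ray parameters of the closest points
      return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

  # Example: cameras 10 m apart, both pointing at an object point roughly 50 m away
  c1, d1 = np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0])
  c2, d2 = np.array([10.0, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0])
  P = intersect_rays(c1, d1 / np.linalg.norm(d1), c2, d2 / np.linalg.norm(d2))   # about (5, 0, 50)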

When do we need photogrammetry?

The first idea that comes to mind in connection with measuring distances, areas and volumes is most likely a ruler or a foot rule. However, there are situations in which this does not work, such as the following: the object itself no longer exists (but photos of the object are preserved), or the object cannot be reached (for example, areas far away or in countries without adequate infrastructure, which can still be photographed).

Furthermore, photogrammetry is a perfect option for measuring "easily transformed" objects like liquids, sand or clouds, as it avoids physical contact.

In addition, photogrammetry makes it possible to measure fast-moving objects, for instance running or flying animals or waves. In industry, high-speed cameras with simultaneous triggering are used to capture data about deformation processes (such as car crash tests).

Comparing photogrammetry with laser scanning techniques, which are widely used today both for generating terrain models and, in the close-range case, for obtaining large amounts of 3D point data (dense point clouds), one can note the following. The advantage of laser scanning is that the object may be poorly textured, a situation in which photogrammetric matching techniques often fail. On the other hand, laser scanning cannot be used for fast-moving objects. Moreover, laser scanning is time consuming and still rather expensive compared with photogrammetric methods. Therefore, the two methods may be considered complementary to each other.

Types of photogrammetry

Photogrammetry can be classified in a number of ways, but one standard method is to split the field based on camera location during photography. On this basis we have Aerial Photogrammetry and Close-Range Photogrammetry.

In Aerial Photogrammetry the camera is mounted on an aircraft and is usually pointed vertically towards the ground. Multiple overlapping photos of the ground are taken as the aircraft flies along a flight path.

In Close-range Photogrammetry the camera is close to the subject and is typically hand held or mounted on a tripod. Usually this type of photogrammetry work is non-topographic, that is, the output is not topographic products such as terrain models or topographic maps, but rather drawings and 3D models. Everyday cameras are used to model buildings, engineering structures, vehicles, forensic and accident scenes, film sets, etc.

Short history

In fact, the development of photogrammetry reflects the general development of science and technology. Technological breakthroughs such as the inventions of photography, airplanes, computers and electronics define four major stages in the history of the science.

1. The invention of photography by L. Daguerre and N. Niepce in 1839 laid the groundwork for photogrammetry. The first phase of development (until the end of the 19th century) was a period in which pioneers explored an entirely new field and formulated its first methods and principles. The greatest achievements were made in terrestrial and balloon photogrammetry.

2. The second turning point was the invention of stereophotogrammetry (based on stereoscopic viewing, see the Main task of photogrammetry section) by C. Pulfrich (1901). During the First World War airplanes and cameras became operational, and just a few years later the main principles of aerial survey were formulated. In fact, analog rectification and stereoplotting instruments, based on mechanical theory, were already known in those days, yet the amount of computation was prohibitive for numerical solutions. It is not surprising that von Gruber called the photogrammetry of that period 'the art of avoiding computations'.

3. The third phase started with the advent of the computer. The 1950s saw the birth of analytical photogrammetry, with matrix algebra forming its basis. For the first time a serious attempt was made to apply adjustment theory to photogrammetric measurements, yet the first operational computer programs became available only several years later. Brown developed the first block adjustment program based on bundles in the late sixties. As a result, the accuracy of aerial triangulation improved by a factor of ten. Apart from aerial triangulation, the analytical plotter is another major invention of the third generation.

4. The fourth generation, digital photogrammetry, emerged with the invention of digital photography and the availability of storage devices that permit rapid access to digital imagery. With hardware support from specialized CPUs and GPUs that speed up image data processing, digital photogrammetry has taken the leading position in the field.

Image sources

Analogue and digital cameras

The development of photogrammetry is closely connected with that of aviation and photography. For more than 100 years, photos have been taken on glass plates or film material (negative or positive). Although such cameras are still in use, one has to admit that today we are living in the age of digital photography.

Unlike traditional cameras that use film to capture and store an image, digital cameras use a solid-state device called an image sensor. These fingernail-sized silicon chips contain millions of photosensitive diodes called photosites. In the brief flickering instant that the shutter is open, each photosite records the intensity or brightness of the light that falls on it by accumulating a charge; the more light, the higher the charge. The brightness recorded by each photosite is then stored as a set of numbers that can then be used to set the color and brightness of dots on the screen or ink on the printed page to reconstruct the image.

The chief advantage of digital cameras over the classical film-based cameras is the instant availability of images for further processing and analysis. This is essential in real-time applications (e.g. robotics, certain industrial applications, bio-mechanics, etc.). Another advantage is the increased spectral flexibility of digital cameras.

Digital cameras have been used for special photogrammetric applications since the early seventies. However, the vidicon-tube cameras available at that time were not very accurate because the imaging tubes were not stable. This disadvantage was eliminated with the appearance of solid-state cameras in the early eighties. The charge-coupled device provides high stability and is therefore the preferred sensing device in today’s digital cameras.

Metric and digital consumer cameras

Metric cameras In principle, special photogrammetric cameras (also simply called metric cameras) work in the same way as amateur cameras. The differences result from the high quality demands that metric cameras must fulfill, first of all regarding high-precision optics and mechanics.

Metric cameras are usually grouped into aerial cameras and terrestrial cameras. Aerial cameras are also called cartographic cameras. Panoramic cameras are examples of non-metric aerial cameras.

The lens system of aerial cameras is constructed as a unit with the camera body; no lens change or “zoom” is possible, which provides high stability and a good lens correction. The focal length is fixed, and the cameras have a central shutter. Furthermore, aerial cameras use a large film format: while a size of 24 by 36 mm is typical for amateur cameras, aerial cameras normally use a size of 230 by 230 mm. As a result, the “wide angle”, “normal” and “telephoto” focal lengths differ from the widely known values; for example, a wide-angle aerial camera has a focal length of about 153 mm, a normal-angle one a focal length of about 305 mm.

Similar to this, for close-range applications special cameras were developed with a medium or large film format and fixed lenses.

Digital consumer cameras Nowadays digital consumer cameras have reached a high technical standard and good geometric resolution. Due to this fact these cameras can be successfully used for numerous photogrammetric tasks.

The differences in construction principles between metric and consumer cameras lie, in general, in the quality and stability of the camera body and the lens. Furthermore, consumer cameras usually have a zoom (“vario”) lens with larger distortions, which are not constant but vary, for instance, with the focal length, so it is difficult to correct them by means of a calibration.

When deciding on a digital camera to be used for photogrammetry, it is useful to take the following remarks into account:

  1. General: It should be possible to set the parameters (focal length, focus, exposure time and f-number) manually, at least as an option.
  2. Resolution (Number of pixels): Decisive is the real (physical), not an interpolated, resolution. Generally, the higher the number of pixels, the better, but not at any price. Small chips with a large number of pixels necessarily have a very small pixel size and are not very light sensitive; furthermore, the signal-to-noise ratio is worse. This will be encountered especially at higher ISO values (200 and more) and in dark parts of the image.
  3. Focal length range (zoom): Decisive is the optical, not the digital (interpolated) range.
  4. Distance setting (focus): It should be possible to deactivate the auto focus. If the camera has a macro option you can use it also for small objects.
  5. Exposure time, f-number: The maximum f-number (lens opening) should not be less than 1:2.8, the exposure time should have a range of at least 1 ... 1/1000 seconds.
  6. Image formats: The digital images are stored in a customary format like JPEG or TIFF. Important: The image compression rate must be selectable or, even better, the compression can be switched off to minimize the loss of quality.
  7. Others: Sometimes a tripod thread, a remote release and an adapter for an external flash are useful.

Camera calibration

During the process of camera calibration, the interior orientation of the camera is determined. The interior orientation data describe the metric characteristics of the camera needed for photogrammetric processes.

There are several ways to calibrate the camera. After assembling the camera, the manufacturer performs the calibration under laboratory conditions. Cameras should be calibrated once in a while because stress, caused by temperature and pressure differences of an airborne camera, may change some of the interior orientation elements. Laboratory calibrations are also performed by specialized government agencies.

In in-flight calibration, a test field with targets of known positions is photographed. The photo coordinates of the targets are then precisely measured and compared with the control points. The interior orientation is found by a least-squares adjustment.

The main purpose of interior orientation is to define the position of the perspective center and the radial distortion curve. Modern aerial cameras are virtually distortion free. Thus, a good approximation for the interior orientation is to assume that the perspective center is at a certain distance c (calculated during camera calibration) from the fiducial center.

Classification of aerial photographs

Aerial photography is the basic data source for making maps by photogrammetric means. Many factors determine the quality of aerial photography, above all the design and quality of the lens system, and the weather conditions and sun angle during the photo flight.

Aerial photographs are usually classified according to the orientation of the camera axis, the focal length of the camera, and spectral sensitivity.

  • Orientation of camera axis

True vertical photograph A photograph with the camera axis perfectly vertical (identical to plumb line through exposure center). Such photographs hardly exist in reality.

Near vertical photograph A photograph with the camera axis nearly vertical. The deviation from the vertical is called tilt. Gyroscopically controlled mounts provide stability of the camera so that the tilt is usually less than two to three degrees.

Oblique photograph A photograph with the camera axis tilted between the vertical and horizontal. A high oblique photograph is tilted so much that the horizon is visible on the photograph. A low oblique does not show the horizon. The total area photographed with obliques is much larger than that of vertical photographs.

  • Angular coverage

The angular coverage is a function of focal length and format size. Standard focal lengths and associated angular coverages are summarized in Table 1.

                        super-wide   wide-angle   intermediate   normal-angle   narrow-angle
focal length [mm]           85          157           210            305            610
angular coverage [°]       119           82            64             46             24

Table 1: Summary of photography with different angular coverages (for a 9" × 9" format size).

  • Spectral Sensitivity

panchromatic black and white;

color (originally color photography was mainly used for interpretation purposes, however, recently, color is increasingly being used for mapping applications as well);

infrared black and white (since infrared is less affected by haze it is used in applications where weather conditions may not be as favorable as for mapping missions);

false color (this is particularly useful for interpretation, mainly for analyzing vegetation (e.g. crop disease) and water pollution, e.g. with a Green, Red, NIR single-sensor camera (see multispectral cameras)).

Fiducial marks

Fiducial marks are fixed points in the image plane that serve as reference positions visible in the image. They are used to establish the image coordinate system in the case of analog photography. Generally there are several fixed points on the sides of an image that define the fiducial center as the intersection of lines joining opposite fiducial marks. The fiducial center is used as the origin of the image coordinate system.

Image geometry modeling

Note: The information below relates to frame photography (photographs exposed in one instant) under the central projection assumption.

Object, camera and image spaces

To fulfill the task of geometric reconstruction it is necessary to represent points in the object coordinate system, i.e. a 3D local coordinate system related to the targeted object or a geographical coordinate system.

At the same time, the input data (points on the photos) are referenced in the image coordinate system, i.e. a 2D sensor-related coordinate system with its origin at the position of pixel (0,0) for digital frame cameras, or at the fiducial center for analog images.

Finally, the third space is determined by the camera itself. Camera coordinate system has its origin at the projection center (the center of the lens).

Thus, certain relations have to be defined between these three spaces to allow photogrammetric procedures. Camera modeling, with intrinsic and extrinsic parameters being introduced, solves the problem.

Camera modeling

As the position of the camera in space varies much more quickly than the geometry and physics of the camera, it is logical to distinguish between two sets of parameters in modeling:

1) extrinsic parameters describe the position of the camera in space. They are the six parameters of the exterior orientation: the three coordinates of the projection center and the three rotation angles around the three camera axes. The parameters of the exterior orientation may be measured directly (with GPS and IMU systems); however, they are usually also estimated during photogrammetric procedures.

2) intrinsic parameters are all parameters necessary to model the geometry and physics of the camera. They allow one to determine the direction of the projection ray to an object point, given an image point and the exterior orientation data. The intrinsic parameters describe the interior orientation of the camera, which is determined by camera calibration.

For a pin-hole camera, the projective mapping from 3D real-world coordinates (x, y, z) (object space) to 2D pixel coordinates (u, v) (image space) is described by the following linear model:

(u, v, 1)^T = A [R | T] (x, y, z, 1)^T,

where homogeneous coordinates notation is used.

      | f_x   s   u_0 |
  A = |  0   f_y  v_0 |
      |  0    0    1  |

is the intrinsic matrix containing 5 intrinsic parameters: f_x, f_y - the focal length in terms of pixels; u_0, v_0 - the principal point coordinates; s - the skew coefficient between the x and y axes.

Other intrinsic camera parameters, such as lens distortion, are also important, but they cannot be covered by this linear camera model.

R and T are the extrinsic camera parameters: the rotation matrix and translation vector, respectively, which denote the transformation from 3D object coordinates to 3D camera coordinates.
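A minimal sketch of this projection model in Python/NumPy; all parameter values are arbitrary and serve only as an illustration:

  import numpy as np

  # Intrinsic matrix A with its 5 parameters (illustrative values)
  f_x, f_y, u_0, v_0, s = 1000.0, 1000.0, 320.0, 240.0, 0.0
  A = np.array([[f_x,   s, u_0],
                [0.0, f_y, v_0],
                [0.0, 0.0, 1.0]])

  # Extrinsic parameters: rotation R (here the identity) and translation T,
  # mapping object coordinates into camera coordinates
  R = np.eye(3)
  T = np.array([[0.0], [0.0], [10.0]])         # object origin 10 units in front of the camera

  X = np.array([[1.0], [2.0], [0.0], [1.0]])   # object point in homogeneous coordinates
  x = A @ np.hstack([R, T]) @ X                # (u, v, 1)^T up to a scale factor
  u, v = (x[:2] / x[2]).ravel()                # divide by the third coordinate: u = 420, v = 440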

Orientation Angles

To denote camera orientation, two different sets of angles are used: the ω, ϕ, κ triplet and the yaw, pitch, roll triplet. Both sets define the transformation between real-world and camera coordinates. The difference comes from how the georeferenced system is defined: if the reference system is a map projection such as UTM, the orientation parameters are ω, ϕ, κ; if the local tangent plane is involved, yaw, pitch and roll are the relevant parameters. Most airborne measuring systems work with and save yaw, pitch, roll angles, while GIS systems operate with omega, phi, kappa angles.

In the case of aerial photos, the values of ϕ and ω will normally be close to zero. If they are exactly zero, the result is a so-called nadir photo. In practice this will never happen, due to wind drift and small movements of the aircraft.
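A minimal sketch of building a rotation matrix from such an angle triplet with SciPy. Several sign and order conventions exist, so the sequence used here is only one common possibility and has to be matched to the convention of the software actually in use:

  import numpy as np
  from scipy.spatial.transform import Rotation

  # omega, phi, kappa as rotations about the x, y and z axes (illustrative near-nadir values)
  omega, phi, kappa = np.radians([2.0, -1.5, 45.0])
  R_opk = Rotation.from_euler('xyz', [omega, phi, kappa]).as_matrix()

  # yaw, pitch, roll interpreted as intrinsic rotations about z, y and x
  yaw, pitch, roll = np.radians([45.0, -1.5, 2.0])
  R_ypr = Rotation.from_euler('ZYX', [yaw, pitch, roll]).as_matrix()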

Some geometric principles

Photo scale

Fig. 1: Flight height, flight altitude and aerial photo scale.

The representative fraction is used for scale expressions, in the form of a ratio, e.g. 1 : 5,000. As illustrated in Fig. 1, the scale of a near vertical photograph can be approximated by

1/m_b = c/H

where m_b is the photograph scale number, c the calibrated focal length, and H the flight height above mean ground elevation. Note that the flight height H refers to the average ground elevation. If it is given with respect to the datum, it is called the flight altitude H_A, with H_A = H + h.

The photograph scale varies from point to point. For example, the scale for a point P can easily be determined as the ratio of the image distance CP' to the object distance CP:

1/m_P = CP'/CP

Clearly, the above equation takes into account any tilt and topographic variations of the surface (relief).
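A quick numerical illustration of these relations (all values invented):

  c = 0.153      # calibrated focal length of a wide-angle aerial camera [m]
  H = 1530.0     # flight height above mean ground elevation [m]
  h = 300.0      # average ground elevation above the datum [m]

  m_b = H / c    # photograph scale number: 10000, i.e. a photo scale of 1 : 10,000
  H_A = H + h    # flight altitude above the datum: 1830 m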

Relief displacement

Fig. 2: Relief displacement.

The effect of relief does not only cause a change in the scale but can also be considered as a component of image displacement (see Fig. 2). Suppose point T is on top of a building and point B at the bottom. On a map, both points have identical X, Y coordinates; however, on the photograph they are imaged at different positions, namely in T' and B'. The distance d between the two photo points is called relief displacement because it is caused by the elevation difference dh between T and B.

The magnitude of relief displacement for a true vertical photograph can be determined by the following equation

dr = r_T dh/H = r_B dh/(H - dh)

where dh is the elevation difference of the two points on the vertical, and r_T, r_B are the radial distances of T' and B' from the nadir point. The elevation dh of a vertical object then follows as

dh = dr H/r_T.

The direction of relief displacement is radial with respect to the nadir point, independent of camera tilt.
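A quick numerical illustration of these relations, assuming the radial distance is measured to the image of the top point (all values invented):

  H = 1500.0     # flight height above the ground [m]
  r_T = 0.080    # radial distance of the building top from the nadir point in the image [m]
  dr = 0.004     # measured relief displacement between top and bottom image points [m]

  dh = dr * H / r_T    # elevation difference between top and bottom: 75 m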

How do flight height and camera focal length influence the displacement?

Let the goal be to take a photo of a house, filling the complete image area. There are several possibilities to do that: take the photo from a short distance with a wide-angle lens (like camera position 1 in the figure), or from a far distance with a small-angle lens (telephoto, like camera position 2), or from any position in between or outside. Results will differ in the following ways:

The smaller the distance camera – object and the wider the lens angle, the greater are the displacements due to the central perspective, or, vice versa:
The greater the distance camera – object and the smaller the lens angle, the smaller are the displacements.

In an extreme (theoretical) case, if the camera could be as far as possible away from the object and if the angle would be as small as possible (“super telephoto”), the projection rays would be nearly parallel, and the displacements near to zero. This is similar to the situation of images taken by a satellite orbiting some hundreds of kilometres above ground, where we have nearly parallel projection rays, yet influences come from the earth curvature.

So, at first glance it seems that if one would like to transform a single aerial image to a given map projection, it would be best to take the image from as high as possible with a small-angle camera so as to have the lowest displacements. Yet the radial-symmetric displacements are a prerequisite for viewing and measuring image pairs stereoscopically, which is why in photogrammetric practice most aerial as well as terrestrial photos are taken with a wide-angle camera, showing relatively large relief-dependent displacements.

Relative camera positions

Fig. 3: Camera positions parallel (left) and convergent (right).

To obtain three-dimensional coordinates of object points one needs at least two images of the object, taken from different positions. The point P (x, y, z) is calculated as the intersection of the two rays [P'P] and [P"P]. One can easily imagine that the accuracy of the result depends, among other factors, on the angle between the two rays: the smaller this angle, the lower the accuracy. Every measurement of the image points P' and P" will have small errors, and even very small errors will lead to a large error, especially in z, when the angle is very small. This is one more reason why wide-angle cameras are preferred in photogrammetry (see Fig. 3).

Let A be the distance between the cameras and the object and B the distance between the two cameras (or camera positions, when only a single camera is used); then the angle between the two projection rays (continuous lines) depends on the ratio A/B, in the aerial case called the height-to-base ratio. Obviously it is possible to improve the accuracy of the calculated coordinates P(x, y, z) by increasing the distance B (also called the base). If the overlap area then becomes too small, convergent camera positions may be used (“squinting”, in contrast to the parallel configuration of human vision). The disadvantage of this case is that additional perspective distortions appear in the images. Note: the parallel (aerial) case is good for human stereo viewing and automatic surface reconstruction; the convergent case often leads to higher precision, especially in the z direction.
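The influence of the height-to-base ratio can be illustrated with the standard normal-case estimate sigma_Z = (A/B) * (A/c) * sigma_p, a common rule of thumb rather than a formula from this text; all numbers below are invented:

  def depth_precision(A, B, c, sigma_p):
      # A: camera-object distance, B: base, c: focal length,
      # sigma_p: parallax measurement precision (same units as c)
      return (A / B) * (A / c) * sigma_p

  # Doubling the base roughly halves the expected depth error:
  depth_precision(A=1500.0, B=450.0, c=0.153, sigma_p=5e-6)   # about 0.16 m
  depth_precision(A=1500.0, B=900.0, c=0.153, sigma_p=5e-6)   # about 0.08 m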

Main photogrammetric procedures

Orientation of a stereo pair

The application of single photographs in photogrammetry is limited because they cannot be used to reconstruct the object space, since the depth information is lost when an image is taken. Even if the exterior orientation elements are known, it is not possible to determine ground points unless the scale factor of every bundle ray is known.

This problem is solved by exploiting stereopsis, that is, by using a second photograph of the same scene taken from a different position. If the scene is static, the same camera may be used to obtain the two images one after the other. Otherwise, the two images must be taken simultaneously, which requires synchronizing two different cameras. Two photographs taken from different camera positions that show the same area, at least in part, are called a stereo pair.

In general the images have different interior orientations and different exterior orientations. Even if corresponding points (images of the same object point) are measured on both images, their coordinates will be known in different systems, preventing determination of the 3D coordinates of the object point. Consequently, a mathematical model of the stereo pair and a uniform coordinate system for the image pair (the model coordinate system) are needed.

To define a stereo pair model, assuming that the camera(s) are calibrated and the interior orientation parameters are known, one needs to determine:

  • relative orientation of the two cameras;
  • absolute orientation of the image model.

Relative orientation

Relative orientation of the two cameras is fixed by the following parameters:

  • the rotation of the second camera relative to the first (these are three parameters - three relative orientation angles);
  • the direction of the base line connecting the two projection centers (these are two additional parameters; no constraint exists on the shift of the second camera in the direction toward or away from the first camera, so the length of the base remains undetermined).

Therefore, the relative orientation of two calibrated cameras is characterized by five independent parameters. They can be determined if 5 corresponding image points are given. An object can be reconstructed from images of calibrated cameras only up to a spatial similarity transformation. The result is a photogrammetric model.

Absolute orientation

The orientation of the photogrammetric model in space is called absolute orientation. This is essentially the task of applying a 7-parameter (spatial similarity) transformation. The transformation can only be solved if prior information about some of the parameters is introduced. This is most commonly done with control points.

A control point is an object point with known real-world coordinates. A point with all three coordinates known is called a full control point. If only X and Y are known, it is a planimetric control point. With an elevation control point, only the Z coordinate is known.

How many control points are needed? In order to calculate the 7 parameters, at least seven equations must be available. For example, 2 full control points and one elevation control point would render a solution. If more equations (that is, more control points) are available, the parameters can be determined by a least-squares adjustment. The idea is to minimize the discrepancies between the transformed and the available control points.
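Given at least three full control points, the 7-parameter transformation can be estimated in closed form. A minimal sketch of such a least-squares solution in Python/NumPy (an SVD-based similarity fit; function and variable names are illustrative, and planimetric or elevation-only control points are not handled here):

  import numpy as np

  def similarity_transform(model_pts, control_pts):
      # Estimate scale, rotation R and translation t so that control ~ scale * R @ model + t
      X, Y = np.asarray(model_pts), np.asarray(control_pts)
      mx, my = X.mean(axis=0), Y.mean(axis=0)
      Xc, Yc = X - mx, Y - my
      U, S, Vt = np.linalg.svd(Xc.T @ Yc)
      D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against a reflection
      R = Vt.T @ D @ U.T
      scale = np.trace(np.diag(S) @ D) / np.sum(Xc ** 2)
      t = my - scale * R @ mx
      return scale, R, t

  # e.g. scale, R, t = similarity_transform(model_xyz, control_xyz) with (n, 3) arrays, n >= 3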

Aerial triangulation

Aerial triangulation (AT, aerotriangulation) is a complex photogrammetric production line. The main tasks to be carried out are the identification of tie points and ground control points, the transfer of these points into homologous image segments and the measurement of their image coordinates. Finally, the image-to-object space transformation is performed by bundle block adjustment.
The transition to digital imagery led to the appearance of the term digital aerial triangulation. The task implies selection, transfer and measurement of image tie points by digital image matching. Digital aerial triangulation is generally associated with automated aerial triangulation, thanks to the potential of the digital approach to be automated.

Bundle adjustment (bundle block adjustment) is the problem of refining a visual reconstruction to produce jointly optimal 3D structure and viewing parameter (camera pose and/or calibration) estimates. Optimal means that the parameter estimates are found by minimizing some cost function that quantifies the model fitting error, and jointly that the solution is simultaneously optimal with respect to both structure and camera variations. The name refers to the ‘bundles’ of light rays leaving each 3D feature and converging on each camera centre, which are ‘adjusted’ optimally with respect to both feature and camera positions. Equivalently — unlike independent model methods, which merge partial reconstructions without updating their internal structure — all structures and camera parameters are adjusted together ‘in one bundle’.

Bundle adjustment is really just a large sparse geometric parameter estimation problem, the parameters being the combined 3D feature coordinates, camera poses and calibrations.
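A heavily simplified sketch of this idea using SciPy's general least-squares solver. The toy camera model (a single known focal length c, no distortion or additional calibration parameters) and all names are illustrative assumptions; production bundle adjustment additionally exploits the sparsity of the Jacobian and uses robust error models:

  import numpy as np
  from scipy.optimize import least_squares
  from scipy.spatial.transform import Rotation

  def reprojection_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs, c):
      # Each camera has 6 parameters (rotation vector + translation),
      # each object point has 3 coordinates; all are refined together.
      cams = params[:6 * n_cams].reshape(n_cams, 6)
      pts = params[6 * n_cams:].reshape(n_pts, 3)
      R = Rotation.from_rotvec(cams[cam_idx, :3]).as_matrix()          # one matrix per observation
      Xc = np.einsum('nij,nj->ni', R, pts[pt_idx]) + cams[cam_idx, 3:]
      proj = c * Xc[:, :2] / Xc[:, 2:3]                                # simple pinhole projection
      return (proj - obs).ravel()                                      # differences to measured image points

  # cam_idx[k] and pt_idx[k] identify the camera and object point of observation obs[k];
  # x0 stacks the initial camera parameters and object point coordinates:
  # result = least_squares(reprojection_residuals, x0,
  #                        args=(n_cams, n_pts, cam_idx, pt_idx, obs, c))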

Advantages of bundle block adjustment against other adjustment methods:

  • Flexibility: Bundle adjustment gracefully handles a very wide variety of different 3D feature and camera types (points, lines, curves, surfaces, exotic cameras), scene types (including dynamic and articulated models, scene constraints), information sources (2D features, intensities, 3D information, priors) and error models (including robust ones). It has no problems with missing data.
  • Accuracy: Bundle adjustment gives precise and easily interpreted results because it uses accurate statistical error models and supports a sound, well-developed quality control methodology.
  • Efficiency: Mature bundle algorithms are comparatively efficient even on very large problems. They use economical and rapidly convergent numerical methods and make near-optimal use of problem sparseness.

Systematic error corrections

Correction for lens distortion

Fig. 4: Barrel-shaped (left) and pincushion-shaped (right) distortions.

Lens irregularities and aberrations result in some image displacement.

A typical effect with wide-angle lenses is barrel-shaped distortion: straight lines near the image borders appear bent toward the borders. This effect is usually smaller or absent at medium focal lengths and may turn into the opposite form (pincushion-shaped) with telephoto lenses (see Fig. 4).

Besides these so-called radial-symmetric distortions, which have their maximum at the image borders, there are further systematic effects (affine, shrinking) as well as non-systematic displacements. The distortions depend, among other factors, on the focal length and the focus. To minimize the resulting geometric errors, efforts have been made to find suitable mathematical models (one of the most widely used is the Brown model).

In most cases the radial-symmetric part has the largest effect of all; consequently, it is the main object of correction. Distortion values are determined during the process of camera calibration. They are usually listed in tabular form, either as a function of the radius or of the angle at the perspective center. For aerial cameras the distortion values are very small, hence it is sufficient to interpolate the distortion linearly. Suppose one wants to determine the distortion for an image point x_p, y_p. The radius is r_p = (x_p^2 + y_p^2)^(1/2). From the table we obtain the distortion dr_i for r_i < r_p and dr_j for r_j > r_p. The distortion for r_p is interpolated as

dr_p = dr_i + (dr_j - dr_i)(r_p - r_i) / (r_j - r_i)

The corrections in the x- and y-directions are

dr_x = (x_p/r_p) dr_p
dr_y = (y_p/r_p) dr_p

Finally, the photo coordinates are corrected as follows (the corrected values are marked with a prime):

x_p' = x_p - dr_x = x_p (1 - dr_p/r_p)
y_p' = y_p - dr_y = y_p (1 - dr_p/r_p)
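A minimal sketch of this table-based correction in Python/NumPy; the distortion table values are invented for illustration:

  import numpy as np

  def correct_radial_distortion(x_p, y_p, radii, distortions):
      # radii and distortions: calibration table (same units as the photo coordinates)
      r_p = np.hypot(x_p, y_p)
      dr_p = np.interp(r_p, radii, distortions)   # linear interpolation in the table
      factor = 1.0 - dr_p / r_p
      return x_p * factor, y_p * factor

  # Illustrative table (millimetres) and an image point at x_p = 75 mm, y_p = 40 mm
  radii = np.array([10.0, 30.0, 60.0, 90.0, 120.0])
  distortions = np.array([0.0005, 0.002, 0.004, 0.003, -0.001])
  x_corr, y_corr = correct_radial_distortion(75.0, 40.0, radii, distortions)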

The radial distortion can also be represented by an odd-power polynomial of the form

dr = p_0 r + p_1 r^3 + p_2 r^5 + ...

The coefficients pi are found by fitting the polynomial curve to the distortion values. This equation is a linear observation equation. For every distortion value, an observation equation is obtained. In order to avoid numerical problems (ill-conditioned normal equation system), the degree of the polynomial should not exceed nine.
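A minimal sketch of such a fit by linear least squares (odd powers only; the tabulated values are invented for illustration):

  import numpy as np

  r = np.array([10.0, 30.0, 60.0, 90.0, 120.0])           # radii [mm]
  dr = np.array([0.0005, 0.002, 0.004, 0.003, -0.001])    # tabulated distortion values [mm]

  A = np.column_stack([r, r**3, r**5])        # design matrix for dr = p_0 r + p_1 r^3 + p_2 r^5
  p, *_ = np.linalg.lstsq(A, dr, rcond=None)  # least-squares estimate of p_0, p_1, p_2
  dr_fitted = A @ p                           # distortion reproduced by the polynomial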

Correction for refraction

Fig. 5: Correction for refraction.

Fig. 5 shows how an oblique light ray is refracted by the atmosphere. According to Snell’s law, a light ray is refracted at the interface of two different media, and the density differences in the atmosphere act as such different media. The refraction causes the image point to be displaced outward, quite similar to a positive radial distortion. The radial displacement caused by refraction can be computed by

dr = K (r + r^3/c^2)
K = [2410 H / (H^2 - 6 H + 250) - 2410 h^2 / ((h^2 - 6 h + 250) H)] 10^-6

where c is the calibrated focal length. These equations are based on a model atmosphere defined by the US Air Force. The flying height H and the ground elevation h must be in units of kilometers.
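A minimal sketch of this computation with invented values (note the unit conventions: H and h in kilometres, r and c in consistent image units):

  H, h = 3.0, 0.4    # flying height and ground elevation [km]
  c = 153.0          # calibrated focal length [mm]
  r = 100.0          # radial distance of the image point [mm]

  K = (2410.0 * H / (H**2 - 6.0 * H + 250.0)
       - 2410.0 * h**2 / ((h**2 - 6.0 * h + 250.0) * H)) * 1e-6
  dr = K * (r + r**3 / c**2)    # radial displacement due to refraction [mm]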

Correction for earth curvature

Fig. 6: Correction for earth curvature.

The mathematical derivation of the relationships between image and object space are based on the assumption that for both spaces, 3D Cartesian coordinate systems are employed. Since ground control points may not directly be available in such a system, they must first be transformed, say from a State Plane coordinate system to a Cartesian system. The X and Y coordinates of a State Plane system are Cartesian, but not the elevations. Fig. 6 shows the relationship between elevations above a datum (h) and elevations in the 3D Cartesian system. If we approximate the datum by a sphere, radius R = 6372.2 km, then the radial displacement can be computed by

dr = r^3 (H - h) / (2 c^2 R)

Strictly speaking, the correction of photo coordinates due to earth curvature is not a refinement of the mathematical model. It is much better to eliminate the influence of earth curvature by transforming the object space into a 3D Cartesian system before establishing relationships with the ground system. This is always possible, except when compiling a map. A map, generated on an analytical plotter, for example, is most likely plotted in a State Plane coordinate system. That is, the elevations refer to the datum and not to the XY plane of the Cartesian coordinate system. It would be quite awkward to produce the map in the Cartesian system and then transform it to the target system. Therefore, during map compilation, the photo coordinates are “corrected” so that conjugate bundle rays intersect in object space at positions related to the reference sphere.
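A minimal sketch of this correction with invented values (all lengths in metres):

  R = 6372.2e3           # radius of the reference sphere [m]
  H, h = 3000.0, 400.0   # flying height above the datum and ground elevation [m]
  c = 0.153              # calibrated focal length [m]
  r = 0.100              # radial distance of the image point [m]

  dr = r**3 * (H - h) / (2.0 * c**2 * R)    # radial displacement due to earth curvature [m]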

Length and angle units

Normally, metric units are used for coordinates and distances in photogrammetry, according to the international standard. But in some cases non-metric units can also be found, such as:

Foot ( ' ): Sometimes used to give the terrain height above mean sea level, for example in North American or British topographic maps, or the flying height above ground.

Inch ( " ): For instance used to define the resolution of printers and scanners (dots per inch).

1' = 12" = 30.48 cm; 1" = 2.54 cm; 1 m = 3.281'; 1 cm = 0.394"

Angles are normally given in degrees. In mathematics, radians are also common. In geodesy and photogrammetry, grads (gon) are used as well. In the military, so-called mils are used.

A full circle is:

360 degrees = 400 grads = 2π radians = 6400 mils

Glossary

--B--

Base
Distance between the projection centers of neighboring photos.
Block
All images of all strips.

--D--

Datum
A set of parameters and control points used to accurately define the three dimensional shape of the Earth. The corresponding datum is the basis for a planar coordinate system.

--F--

Flight altitude
Flight height above datum.
Flight height
Flight height above mean ground elevation.
Fiducial marks
A marker built into an aerial camera that registers its image on an aerial photograph as a fixed reference mark. There are usually four fiducial marks on a photograph; they are used to define the principal point of the photograph.

--I--

Image
The photo in digital representation – the scanned film or the photo directly taken by a digital camera.
Image coordinates / pixel coordinates
In digital image processing the expression image coordinates refers to pixel positions (row/column), while in classical photogrammetry it indicates the coordinates transformed to the nominal values of the fiducial marks. For differentiation, the expression pixel coordinates is sometimes used in the context of digital image processing.
Image refinement
The process to correct photos for systematic errors, such as radial distortion, refraction and earth curvature.

--M--

Model (stereo model, image pair)
Two neighboring images within a strip.
Model area
The area being covered by stereo images (image pair).

--O--

Overlaps
An image flight is normally carried out in such a way that the area of interest is photographed strip by strip, turning the aircraft around after every strip, so that the strips are taken in a meander-like sequence. The two images of each model have a longitudinal overlap of approximately 60 to 80% (also called end lap); neighboring strips have a lateral overlap of normally about 30% (also called side lap). This is necessary not only for stereoscopic viewing but also for connecting all images of a block within an aerial triangulation.

--P--

Photo
The original photo on sensor.
Plumb line
Vertical.

--R--

Relief
topographic variations of the surface.
Resolution
The minimum distance between two adjacent features, or the minimum size of a feature, which can be detected by photogrammetric data acquisition systems.

--S--

Skew
A transformation of coordinates in which one coordinate is displaced in one direction in proportion to its distance from a coordinate plane or axis.
Stereoplotter
An instrument that lets an operator see two photos at once in a stereo view.
Strip
All overlapping images taken one after another within one flight line.

--T--

Tilt
Deviation of the camera axis from the vertical.
