Detecting runways in complex airport scenes


ABSTRACTS OF PAPERS ACCEPTED FOR PUBLICATION

8-connected lines while retaining connectivity and endpoints. Sequential and parallel algorithms are given. A procedure to obtain line widths in the course of thinning is described. Examples are shown illustrating the reduction in iterations with increasing k, and the trade-off between the size of k and the coarseness of the result. Because the algorithm consists of highly repetitive and local operations, it maps straightforwardly into VLSI hardware, and an example of this is given.
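
As a rough illustration of the line-width idea mentioned in this fragment (and only that): the sketch below recovers a per-pixel width estimate alongside thinning, using standard skeletonization plus a Euclidean distance transform. It is an assumption-laden stand-in, not the paper's sequential or parallel k-thinning algorithm.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def thin_with_widths(binary_image):
        """Thin a binary line image and estimate the local line width
        at each skeleton pixel.

        Not the abstract's k-thinning algorithm: this uses standard
        skeletonization, and twice the distance to the background
        approximates the local stroke width along the skeleton.
        """
        binary_image = binary_image.astype(bool)
        skeleton = skeletonize(binary_image)          # 1-pixel-wide, connectivity-preserving
        dist = distance_transform_edt(binary_image)   # distance to nearest background pixel
        widths = np.where(skeleton, 2.0 * dist, 0.0)  # width estimate on skeleton pixels only
        return skeleton, widths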

Detecting Runways in Complex Airport Scenes. A. HUERTAS, W. COLE, AND R. NEVATIA. Institute for Robotics and Intelligent Systems, School of Engineering, Powell Hall of Engineering, University of Southern California, Los Angeles, California 90089-0273. Received March 29, 1988; accepted July 14, 1989.

Detection of runways in aerial images is part of a project to automatically map complex cultural areas such as a major commercial airport complex. This task is much more difficult than it appears at first. Runways are not merely homogeneous strips in the image, owing to the many markings on their surface, changes in the surface material, and the presence of other objects such as taxiways and aircraft. We use some generic sources of knowledge to help with this problem in a hypothesize-and-test paradigm. Hypotheses are formed by looking for instances of long rectangular shapes, possibly interrupted by other long rectangles. Runway markings, mandated by standards for runway construction, are used to verify our hypotheses. Our system gives good performance on a variety of complex scenes and does not rely on location-specific knowledge.
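
A minimal sketch of the hypothesize step described above, assuming an OpenCV pipeline that proposes long, thin rotated rectangles from edge contours; the thresholds and the verification stub are illustrative assumptions, not the authors' system.

    import cv2
    import numpy as np

    def hypothesize_runways(image_gray, min_length=200.0, min_aspect=5.0):
        """Propose runway hypotheses as long, thin rotated rectangles.

        The thresholds are illustrative; a real system would tune them to
        image resolution and would allow hypotheses interrupted by other
        long rectangles (taxiways, surface changes).
        """
        edges = cv2.Canny(image_gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        hypotheses = []
        for c in contours:
            (cx, cy), (w, h), angle = cv2.minAreaRect(c)
            length, width = max(w, h), min(w, h)
            if width > 0 and length >= min_length and length / width >= min_aspect:
                hypotheses.append(((cx, cy), (w, h), angle))
        return hypotheses

    def verify_runway(image_gray, rect):
        """Placeholder verification step: a real system would look for
        standard runway markings (threshold bars, centerline stripes,
        designators) inside the hypothesized rectangle."""
        raise NotImplementedError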

NOTES

A Note on the Coefficients of Compass Mask Convolutions. MORTON NADLER. The Bradley Department of Electrical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061-0111. Received July 18, 1988; accepted July 12, 1989.

A derivation is given to show that the well-known “Sobel” gradient estimator is the result of combining two independent estimates of the gradient, in the principal and oblique directions. Application of the same method of derivation to determine the coefficients of “compass masks” in the oblique directions shows that they should not be obtained by simple rotation of the masks for the orthogonal directions. A corrected set of compass masks is given. Finally, a computationally simple means is given for finding the edge directions from a set of two Sobel masks, Δx and Δy.
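
For illustration, the sketch below computes the two Sobel responses Δx and Δy with the standard principal-direction masks and derives an edge direction from them; it does not reproduce the note's corrected oblique compass masks or its particular direction-finding scheme.

    import numpy as np
    from scipy.ndimage import convolve

    # Standard Sobel masks for the principal (x, y) directions.
    SOBEL_X = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def sobel_gradient_direction(image):
        """Return gradient magnitude and direction (radians) from the two
        Sobel responses Δx, Δy, using the generic arctangent computation."""
        dx = convolve(image.astype(float), SOBEL_X)
        dy = convolve(image.astype(float), SOBEL_Y)
        magnitude = np.hypot(dx, dy)
        direction = np.arctan2(dy, dx)   # gradient angle at each pixel
        return magnitude, direction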

Boundary and Object Labelling in Three-Dimensional Images. JAYARAM K. UDUPA. Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 418 Service Drive, 4th Floor, Philadelphia, Pennsylvania 19104; VENKATRAMAN G. AJJANAGADDE. Computer and Information Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104. Received August 8, 1988; accepted July 27, 1989.

There are many imaging modalities (e.g., medical imaging scanners) that capture information about internal structures and generate three-dimensional (3D) digital images of the distribution of some physical property of the material of the structure. Such images have been found to be very useful in analyzing the form and function of the structure and in detecting and correcting deformities in the structure. Visualization of 3D structures is an essential component of such analyses. One commonly used approach to visualization consists of identifying the structure of interest, forming its surfaces, and then rendering the surfaces on a two-dimensional screen. This paper addresses the surface formation problem assuming that object identification has already been done and a 3D binary image is available that represents the structure. For the existing 3D boundary tracking algorithms, the user has to somehow specify each surface that is to be tracked. Often, the 3D image consists of many surfaces of interest. Their manual specification is very tedious and may be impossible if the structure is of complex shape. This paper describes a methodology for automatically tracking all boundary surfaces (i.e., labelling boundary surfaces) in the given 3D image. The algorithms also generate additional information from which the 3D connected components in the image are trivially obtained. Examples from medical imaging are included to illustrate the usefulness of the new methodology.
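
The abstract notes that the 3D connected components are trivially obtained from the algorithm's output. A minimal sketch of that final step alone, assuming a binary NumPy volume and standard 26-connected labelling rather than the paper's boundary-surface tracking:

    import numpy as np
    from scipy.ndimage import label, generate_binary_structure

    def label_3d_components(binary_volume):
        """Label 3D connected components in a binary volume using
        26-connectivity between foreground voxels. This stands in for the
        components the paper derives from its surface labelling; the
        surface tracking itself is not reproduced here."""
        structure = generate_binary_structure(rank=3, connectivity=3)  # 26-neighbourhood
        labels, num_components = label(binary_volume.astype(bool), structure=structure)
        return labels, num_components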