Development of a field-portable imaging system for scene classification using multispectral data fusion algorithms

Bibliographic Details
Published in:IEEE aerospace and electronic systems magazine Vol. 9; no. 9; pp. 13 - 19
Main Authors: Preston, E., Bergman, T., Gorenflo, R., Hermann, D., Kopala, E., Kuzma, T., Lazofson, L., Orkis, R.
Format: Magazine Article
Language:English
Published: IEEE 01-09-1994
Description
Summary:Battelle scientists have assembled a reconfigurable multispectral imaging and classification system which can be taken into the field to support automated real-time target/background discrimination. The system may be used for a variety of applications including environmental remote sensing, industrial inspection, and medical imaging. This paper discusses hard tactical target and runway detection applications performed with the multispectral system. The Battelle-developed system consists of a passive, multispectral imaging electro-optical (EO) sensor suite and a real-time digital data collection and data fusion image processor. The EO sensor suite, able to collect imagery in 12 distinct wavebands from the ultraviolet (UV) through the long wave infrared (LWIR), consists of five charge-coupled device (CCD) cameras and two thermal IR imagers integrated on a common portable platform. The data collection and processing system consists of video switchers, recorders, and a real-time sensor fusion/classification hardware system which combines any three input wavebands to perform real-time data fusion by applying "look-up tables", derived from tailored neural network algorithms, to classify the imaged scene pixel by pixel. The result is then visualized in a video format on a full color, 9-inch, active matrix Liquid Crystal Display (LCD). A variety of classification algorithms including artificial neural networks and data clustering techniques were successfully optimized to perform pixel-level classification of imagery in complex scenes comprising tactical targets, buildings, roads, aircraft runways, and vegetation. Algorithms implemented included unsupervised maximum likelihood, Linde-Buzo-Gray, and "fuzzy" clustering algorithms along with Multilayer Perceptron and Learning Vector Quantization (LVQ) neural networks. Supervised clustering of the data was also evaluated.
To assess classification robustness, algorithms were tested on imagery recorded over broad periods of time throughout the day. Results were excellent, indicating that scene classification is achievable despite temporal signature variations. Waveband saliency analyses were performed to determine which spectral bands contained the bulk of the discriminating information for discerning objects in the scenes. Optimized classification algorithms are then used to populate the look-up tables in the sensor fusion board for real-time use in the field.
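The look-up-table approach described above can be illustrated with a short sketch: a classifier is evaluated once over every quantized three-band pixel value, the results are stored in a table, and classification at run time reduces to one table lookup per pixel. Everything here is an assumption for illustration — the quantization depth, the toy decision rule, and the function names are hypothetical stand-ins, not the paper's tailored neural networks or the fusion board's actual LUT format.

```python
import numpy as np

BITS = 4              # assumed quantization per band; the real LUT depth is not given
LEVELS = 1 << BITS

def toy_classifier(b0, b1, b2):
    # Hypothetical stand-in decision rule (NOT the paper's trained network):
    # label 1 = "target" when band 0 dominates, else 0 = "background".
    return 1 if b0 > b1 and b0 > b2 else 0

def build_lut(classify):
    """Precompute the class label for every quantized 3-band combination."""
    lut = np.zeros((LEVELS, LEVELS, LEVELS), dtype=np.uint8)
    for b0 in range(LEVELS):
        for b1 in range(LEVELS):
            for b2 in range(LEVELS):
                lut[b0, b1, b2] = classify(b0, b1, b2)
    return lut

def classify_image(lut, bands):
    """bands: (3, H, W) uint8 stack of three wavebands -> (H, W) class map."""
    q = bands >> (8 - BITS)          # quantize 8-bit pixels to LUT indices
    return lut[q[0], q[1], q[2]]     # one table lookup per pixel

lut = build_lut(toy_classifier)
img = np.random.randint(0, 256, size=(3, 4, 4), dtype=np.uint8)
labels = classify_image(lut, img)
```

The design point the sketch captures is why a LUT enables real-time operation in hardware: the (possibly expensive) classifier runs only offline during table construction, while the per-pixel cost in the field is a single memory access.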
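The abstract's waveband saliency analysis could, in principle, be approached with a simple per-band separability score: bands whose target and background pixel distributions are far apart (relative to their spread) carry more discriminating information. This is a hedged sketch using a Fisher-ratio-style score; the paper does not specify its saliency method, so the scoring function and names here are assumptions.

```python
import numpy as np

def band_saliency(band_pixels, labels):
    """Fisher-style separability of one band: (mean gap)^2 / pooled variance.
    band_pixels: 1-D array of pixel values; labels: 1 = target, 0 = background."""
    t = band_pixels[labels == 1]
    b = band_pixels[labels == 0]
    return (t.mean() - b.mean()) ** 2 / (t.var() + b.var() + 1e-12)

def rank_bands(stack, labels):
    """stack: (n_bands, n_pixels) array -> band indices, most salient first."""
    scores = [band_saliency(stack[i], labels) for i in range(stack.shape[0])]
    return np.argsort(scores)[::-1]
```

Such a ranking would let the system pick the three most informative of the 12 available wavebands to feed the three-input fusion board.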
ISSN:0885-8985
1557-959X
DOI:10.1109/62.312974