
ENVI Tutorial #2:

Multispectral Classification

The following topics are covered in this tutorial:

Overview of This Tutorial

Examine Landsat TM Color Images

Unsupervised Classification

Supervised Classification

Post Classification Processing

Classes to Vector Layers

Classification Keys Using Annotation

Overview of This Tutorial

This tutorial leads you through a typical multispectral classification procedure using
Landsat TM data from Canon City, Colorado. Results of both unsupervised and
supervised classifications are examined, and post-classification processing, including
clump, sieve, combine classes, and accuracy assessment, is discussed. It is
assumed that you are already generally familiar with multispectral classification
techniques.

Files Used in This Tutorial

You must have the ENVI TUTORIALS & DATA CD-ROM mounted on your system to
access the files used by this tutorial, or copy the files to your disk.

The files used in this tutorial are contained in the CAN_TM subdirectory of the
ENVIDATA directory on the ENVI TUTORIALS & DATA CD-ROM.

Required Files

The files listed below are required to run this exercise.

CAN_TMR.IMG Canon City, Colorado TM Reflectance

CAN_TMR.HDR ENVI Header for Above

CAN_KM.IMG KMEANS Classification


CAN_KM.HDR ENVI Header for Above

CAN_ISO.IMG ISODATA Classification

CAN_ISO.HDR ENVI Header for Above

CLASSES.ROI Regions of Interest (ROI) for Supervised Classification

CAN_PCLS.IMG Parallelepiped Classification

CAN_PCLS.HDR ENVI Header for Above

CAN_BIN.IMG Binary Encoding Result

CAN_BIN.HDR ENVI Header for Above

CAN_SAM.IMG SAM Classification Result

CAN_SAM.HDR ENVI Header for Above

CAN_RUL.IMG Rule image for SAM classification

CAN_RUL.HDR ENVI Header for Above

CAN_SV.IMG Sieved Image

CAN_SV.HDR ENVI Header for Above

CAN_CLMP.IMG Clump of sieved image

CAN_CLMP.HDR ENVI Header for Above

CAN_COMB.IMG Combined Classes image

CAN_COMB.HDR ENVI Header for Above

CAN_OVR.IMG Classes overlain on Grayscale image

CAN_OVR.HDR ENVI Header for Above

CAN_V1.EVF Vector layer generated from class #1

CAN_V2.EVF Vector layer generated from class #2

Examine Landsat TM Color Images

This portion of the exercise will familiarize you with the spectral characteristics of
Landsat TM data of Canon City, Colorado, USA. Color composite images will be used
as the first step in locating and identifying unique areas for use as training sets in
classification.
Start ENVI

Before attempting to start the program, ensure that ENVI is properly installed as
described in the installation guide.

 To start ENVI in Unix, enter "envi" at the UNIX command line.
 To start ENVI from a Windows or Macintosh system, double-click on the
ENVI icon.

The ENVI Main Menu appears when the program has successfully loaded and
executed.

Open and Display Landsat TM Data

To open an image file:

1. Select File -> Open Image File on the ENVI Main Menu.

Note that on some platforms you must hold the left mouse button down to display
the submenus from the Main Menu.

An Enter Input Data File file selection dialog appears.

2. Navigate to the CAN_TM subdirectory of the ENVIDATA directory on the
ENVI TUTORIALS & DATA CD-ROM just as you would in any other application,
select the file CAN_TMR from the list, and click "OK".

The Available Bands List dialog will appear on your screen. This list allows you to
select spectral bands for display and processing.

Note that you have the choice of loading either a grayscale or an RGB color image.

3. Select bands 4, 3, and 2 listed at the top of the dialog by first selecting the
RGB Color toggle button in the Available Bands List, then clicking on the
bands sequentially with the left mouse button.

The bands you have chosen are displayed in the appropriate fields in the center of
the dialog.

1. Click "Load RGB" to load the image into a new display.

Review Image Colors

Use the displayed color image as a guide to classification. This image is the
equivalent of a false color infrared photograph. Even in a simple three-band image,
it's easy to see that there are areas that have similar spectral characteristics. Bright
red areas on the image represent high infrared reflectance, usually corresponding
to healthy vegetation, either under cultivation, or along rivers. Slightly darker red
areas typically represent native vegetation, in this case in slightly more rugged
terrain, primarily corresponding to coniferous trees. Several distinct geologic
classes are also readily apparent as is urbanization.
Figure 1: Landsat TM Color Infrared Composite, Bands 4, 2, 1 (RGB).

Cursor Location/Value

Use ENVI's Cursor Location/Value function to preview image values in all six spectral
bands. To bring up a dialog box that displays the location of the cursor in the Main,
Scroll, or Zoom windows:

 Select Basic Tools-> Cursor Location/Value from the ENVI Main Menu.

Alternatively, click the right mouse button in the image display to toggle the
Functions menu and choose Functions->Interactive Analysis->Cursor
Location/Value.

1. Move the cursor around the image and examine the data values for specific
locations and note the relation between image color and data value.
2. Select File->Cancel in the Cursor Location/Value dialog to dismiss it when
finished.

Examine Spectral Plots

Use ENVI's integrated spectral profiling capabilities to examine the spectral
characteristics of the data.

1. Click the right mouse button in the image display to toggle the Functions
menu and choose Functions->Profiles->Z Profile (Spectrum) to begin
extracting spectral profiles.
2. Examine the spectra for areas that you previewed above using color images
and the Cursor/Location Value function. Note the relations between image
color and spectral shape. Pay attention to the location of the image bands in
the spectral profile, marked by the red, green, and blue bars in the plot.
Figure 2: Spectral Plots

Unsupervised Classification

Start ENVI's unsupervised classification routines by choosing
Classification->Unsupervised->Method, where Method is either K-Means or IsoData, or
review the precalculated results of classifying the image in the CAN_TM directory.

K-Means

Unsupervised classification uses statistical techniques to group n-dimensional data
into their natural spectral classes. The K-Means unsupervised classifier uses a
cluster analysis approach which requires the analyst to select the number of
clusters to be located in the data, arbitrarily locates this number of cluster centers,
then iteratively repositions them until optimal spectral separability is achieved.
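
The sketch below shows the same idea in Python rather than ENVI code; it is a minimal
illustration, and the array sizes, random data, and class count are placeholders rather
than values taken from the tutorial files.

    # Minimal K-Means sketch on a 6-band image cube (rows, cols, bands).
    import numpy as np
    from sklearn.cluster import KMeans

    rows, cols, bands = 100, 100, 6
    cube = np.random.rand(rows, cols, bands)      # placeholder for the TM reflectance data
    pixels = cube.reshape(-1, bands)              # one row per pixel, one column per band

    n_classes = 5                                 # analyst-chosen number of clusters
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(pixels)
    class_image = km.labels_.reshape(rows, cols)  # unsupervised class map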

Choose K-Means as the method, use all of the default values and click on OK, or
review the results contained in CAN_KM.IMG.

1. Open the file CAN_KM.IMG, click on the grayscale radio button in the
Available Bands List, click on the band name at the top of the List, click
New, and then Load Band.
2. Click the right mouse button in the Main Image Display window to toggle the
Functions menu then select Functions->Link->Link Displays and click OK in
the dialog to link the images.
3. Compare the K-MEANS classification result to the color composite image by
clicking and dragging using the left mouse button to move the dynamic
overlay around the image.
4. When finished, select Functions->Link->Unlink Displays to remove the link
and dynamic overlay.

If desired, experiment with different numbers of classes, Change Thresholds,
Standard Deviations, and Maximum Distance Error values to determine their effect
on the classification.

Isodata

IsoData unsupervised classification calculates class means evenly distributed in the
data space and then iteratively clusters the remaining pixels using minimum
distance techniques. Each iteration recalculates means and reclassifies pixels with
respect to the new means. This process continues until the number of pixels in each
class changes by less than the selected pixel change threshold or the maximum
number of iterations is reached.
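
A rough Python sketch of that loop follows (again, not ENVI code, and without the class
splitting and merging that full ISODATA performs); the function name, thresholds, and
defaults are illustrative assumptions.

    # ISODATA-style iteration: evenly spaced initial means, minimum-distance
    # assignment, means recomputed each pass, stop on small pixel change.
    import numpy as np

    def isodata_lite(pixels, n_classes=5, change_threshold=0.02, max_iter=10):
        lo, hi = pixels.min(axis=0), pixels.max(axis=0)
        means = np.linspace(lo, hi, n_classes)            # evenly distributed starting means
        labels = np.zeros(len(pixels), dtype=int)
        for _ in range(max_iter):
            d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            new_labels = d.argmin(axis=1)                 # minimum-distance assignment
            changed = np.mean(new_labels != labels)
            labels = new_labels
            means = np.array([pixels[labels == k].mean(axis=0) if np.any(labels == k)
                              else means[k] for k in range(n_classes)])
            if changed < change_threshold:                # pixel change threshold reached
                break
        return labels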

Choose ISODATA as the method, use all of the default values and click on OK, or
review the results contained in CAN_ISO.IMG.

1. Open the file CAN_ISO.IMG, click on the grayscale radio button in the
Available Bands List, click on the band name at the top of the List, click
New, and then Load Band.
2. Click the right mouse button in the Main Image Display window to toggle the
Functions menu then select Functions->Link->Link Displays. Click OK to link
this image to the color composite image and the KMEANS result.
3. Compare the ISODATA classification result to the color composite image by
clicking and dragging using the left mouse button to move the dynamic
overlay around the image. Toggle the dynamic overlay of the third image by
holding the left mouse button down and simultaneously clicking on the
middle mouse button. Compare the ISODATA and K-MEANS classifications.
4. Click the right mouse button in each of the classified images to toggle the
Functions menu and click Cancel to dismiss the two image displays.

If desired, experiment with different numbers of classes, Change Thresholds,
Standard Deviations, Maximum Distance Error, and class pixel characteristic values
to determine their effect on the classification.

Supervised Classification

Supervised classification requires that the user select training areas for use as the
basis for classification. Various comparison methods are then used to determine if a
specific pixel qualifies as a class member. ENVI provides a broad range of different
classification methods, including Parallelepiped, Maximum Likelihood, Minimum
Distance, Mahalanobis Distance, Binary Encoding, and Spectral Angle Mapper.
Examine the processing results below, or use the default classification parameters
for each of these classification methods to generate your own classes and compare
results.

To perform your own classifications use Classification->Supervised->Method, where
Method is one of ENVI's supervised classification methods. Use one of the two
methods below for selecting training areas (Regions of Interest).

Select Training Sets Using Regions of Interest (ROI)

As described in ENVI Tutorial #1 and summarized here, ENVI lets you easily define
"Regions of Interest" (ROIs) typically used to extract statistics for classification,
masking, and other operations. For the purposes of this exercise, you can either
use predefined ROIs, or create your own.

Restore Predefined ROIs

1. To use the preselected Regions of Interest, start the Region of Interest
Controls dialog by choosing Basic Tools->Region of Interest->Define Region
of Interest, then choose File->Restore ROIs and select CLASSES.ROI as the
input file.

Create Your Own ROIs

1. Select Basic Tools->Region of Interest->Define Region of Interest from the
ENVI Main Menu. The ROI Definition dialog will appear.
2. Draw a polygon that represents the region of interest.
3. Click the left mouse button in the Main window to establish the first point of
the ROI polygon.
4. Select further border points in sequence by clicking the left button again,
and close the polygon by clicking the right mouse button. The middle mouse
button deletes the most recent point, or (if you have closed the polygon) the
entire polygon.
5. ROIs can also be defined in the Zoom and Scroll windows by choosing the
appropriate radio button at the top of the ROI Controls dialog.

When you have finished defining an ROI, it is shown in the dialog's list of Available
Regions, with the name, region color, and number of pixels enclosed, and is
available to all of ENVI's classification procedures.

1. To define a new ROI, click "New Region".


2. You can enter a name for the region and select the color and fill patterns for
the region by clicking on the "Edit" button. Define the new ROI as described
above.

Classical Supervised Multispectral Classification

The following methods are described in most remote sensing textbooks and are
commonly available in today's image processing software systems.

Parallelepiped

Parallelepiped classification uses a simple decision rule to classify multispectral
data. The decision boundaries form an n-dimensional parallelepiped in the image
data space. The dimensions of the parallelepiped are defined based upon a
standard deviation threshold from the mean of each selected class.
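
As a minimal Python illustration of that rule (not ENVI code), a pixel is assigned to a
class when every band value falls inside the box of mean plus or minus k standard
deviations; ties are resolved here by taking the first matching class, and the function
name and threshold are assumptions for the sketch.

    # Parallelepiped rule: inside the box in every band = class member.
    import numpy as np

    def parallelepiped(pixels, class_means, class_stds, k=2.0):
        # pixels: (n, bands); class_means, class_stds: (n_classes, bands)
        low = class_means - k * class_stds
        high = class_means + k * class_stds
        inside = (pixels[:, None, :] >= low) & (pixels[:, None, :] <= high)
        hits = inside.all(axis=2)                         # all bands satisfy the box
        labels = np.where(hits.any(axis=1), hits.argmax(axis=1), -1)   # -1 = unclassified
        return labels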

1. Presaved results are in the file CAN_PCLS.IMG. Examine these or perform
your own classification using the CLASSES.ROI Regions of Interest described
above. Try using the default parameters and various standard deviations
from the mean of the ROIs.
2. Use image linking and dynamic overlay to compare this classification to the
color composite image and previous unsupervised classifications.
Figure 3: Parallelepiped classification results.

Maximum Likelihood

Maximum likelihood classification assumes that the statistics for each class in each
band are normally distributed and calculates the probability that a given pixel
belongs to a specific class. Unless a probability threshold is selected, all pixels are
classified. Each pixel is assigned to the class that has the highest probability (i.e.,
the "maximum likelihood").

1. Perform your own classification using the CLASSES.ROI Regions of Interest
described above. Try using the default parameters and various probability
thresholds.
2. Use image linking and dynamic overlay to compare this classification to the
color composite image and previous unsupervised and supervised
classifications.

Minimum Distance

The minimum distance classification uses the mean vectors of each ROI and
calculates the Euclidean distance from each unknown pixel to the mean vector for
each class. All pixels are classified to the closest ROI class unless the user specifies
standard deviation or distance thresholds, in which case some pixels may be
unclassified if they do not meet the selected criteria.
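
In Python terms (a sketch, not ENVI code), that amounts to the following; the optional
maximum-distance argument is an assumption standing in for the threshold described
above.

    # Minimum distance: Euclidean distance to each class mean, nearest class wins.
    import numpy as np

    def min_distance(pixels, class_means, max_dist=None):
        d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        if max_dist is not None:
            labels[d.min(axis=1) > max_dist] = -1         # -1 = unclassified
        return labels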

1. Perform your own classification using the CLASSES.ROI Regions of Interest
described above. Try using the default parameters and various standard
deviations and maximum distance errors.
2. Use image linking and dynamic overlay to compare this classification to the
color composite image and previous unsupervised and supervised
classifications.

Mahalanobis Distance

The Mahalanobis Distance classification is a direction-sensitive distance classifier
that uses statistics for each class. It is similar to the Maximum Likelihood
classification but assumes all class covariances are equal and therefore is a faster
method. All pixels are classified to the closest ROI class unless the user specifies a
distance threshold, in which case some pixels may be unclassified if they do not
meet the threshold.
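
The sketch below (Python, not ENVI code) uses one pooled covariance for all classes,
which is the "equal covariances" assumption mentioned above; the function name and
inputs are illustrative assumptions.

    # Mahalanobis distance with a single pooled within-class covariance.
    import numpy as np

    def mahalanobis_classify(pixels, class_samples):
        means = np.array([s.mean(axis=0) for s in class_samples])
        pooled = sum((len(s) - 1) * np.cov(s, rowvar=False) for s in class_samples)
        pooled /= sum(len(s) - 1 for s in class_samples)   # shared covariance estimate
        inv = np.linalg.inv(pooled)
        diffs = pixels[:, None, :] - means[None, :, :]     # (n, n_classes, bands)
        d2 = np.einsum('nkb,bc,nkc->nk', diffs, inv, diffs)  # squared distances
        return d2.argmin(axis=1)
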
1. Perform your own classification using the CLASSES.ROI Regions of Interest
described above. Try using the default parameters and various maximum
distance errors.
2. Use image linking and dynamic overlay to compare this classification to the
color composite image and previous unsupervised and supervised
classifications.

"Spectral" Classification Methods

The following methods are described in the ENVI User's Guide. They were
developed specifically for use on hyperspectral data, but provide an alternative
method for classifying multispectral data, often with improved results that can
easily be compared to spectral properties of materials. They are typically used from
the Endmember Collection dialog using image or library spectra; however, they can
also be started from the classification menu, Classification->Supervised->Method.

The Endmember Collection Dialog

The Endmember Collection dialog is a standardized means of collecting spectra for
supervised classification from ASCII files, Regions of Interest, spectral libraries, and
statistics files. Start the dialog by selecting Spectral Tools->Endmember Collection
(it can also be started by choosing Classification->Endmember Collection). Click on
the Open Image File button at the bottom of the Classification Input File dialog,
choose the input file CAN_TMR.IMG, and click OK.

The Endmember Collection dialog appears with the Parallelepiped classification
method selected by default. The available classification and mapping methods are
listed by choosing Algorithm->Method from the dialog menu bar. Available supervised
classification methods currently include Parallelepiped, Minimum Distance,
Mahalanobis Distance, Maximum Likelihood, Binary Encoding, and the Spectral Angle
Mapper (SAM).

Binary Encoding Classification

The binary encoding classification technique encodes the data and endmember
spectra into 0s and 1s based on whether a band falls below or above the spectrum
mean. An exclusive OR function is used to compare each encoded reference
spectrum with the encoded data spectra, and a classification image is produced. All
pixels are classified to the endmember with the greatest number of bands that
match unless the user specifies a minimum match threshold, in which case some
pixels may be unclassified if they do not meet the criteria.
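
A small Python sketch of the encoding and matching steps (not ENVI code) is shown
below; the 75% minimum match default mirrors the threshold used for the presaved
result, and the function names are assumptions.

    # Binary encoding: 1 where a band is above the spectrum mean, 0 where below;
    # XOR against each reference counts mismatching bands.
    import numpy as np

    def binary_encode(spectra):
        return (spectra > spectra.mean(axis=-1, keepdims=True)).astype(np.uint8)

    def binary_encoding_classify(pixels, endmembers, min_match=0.75):
        enc_pix = binary_encode(pixels)                   # (n, bands)
        enc_end = binary_encode(endmembers)               # (n_classes, bands)
        xor = enc_pix[:, None, :] ^ enc_end[None, :, :]   # 1 where the encodings disagree
        matches = (xor == 0).sum(axis=2)                  # matching bands per class
        labels = matches.argmax(axis=1)
        frac = matches.max(axis=1) / pixels.shape[1]
        labels[frac < min_match] = -1                     # below the minimum match threshold
        return labels
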
1. Presaved Binary Encoding results are in the file CAN_BIN.IMG. These were
created using a minimum encoding threshold of 75%. Examine these or
perform your own classification using the CLASSES.ROI Regions of Interest
described above. To do your own classification, select Algorithm->Binary
Encoding from the menu bar. Use the predefined Regions of Interest in the
file CLASSES.ROI. Select Import->From ROI from input file, click on Select
All, and click OK. You can view the spectral plots for the ROIs by choosing
Options->Plot Endmembers.
2. Click on the arrow toggle button next to the text "Output Rule Images" in
the Binary Encoding Parameters dialog, and click OK at the bottom of the
dialog to start the classification.
3. Use image linking and dynamic overlays to compare this classification to the
color composite image and previous unsupervised and supervised
classifications.

Spectral Angle Mapper Classification

The Spectral Angle Mapper (SAM) is a physically-based spectral classification that
uses the n-dimensional angle to match pixels to reference spectra. The algorithm
determines the spectral similarity between two spectra by calculating the angle
between the spectra, treating them as vectors in a space with dimensionality equal
to the number of bands.
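
The angle calculation itself is only a few lines of Python (a sketch, not ENVI code);
the returned values, in radians, correspond to the rule-image values described later,
and the function name is an assumption.

    # Spectral Angle Mapper: angle between each pixel vector and each reference spectrum.
    import numpy as np

    def spectral_angle(pixels, references):
        p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
        r = references / np.linalg.norm(references, axis=1, keepdims=True)
        cos = np.clip(p @ r.T, -1.0, 1.0)                 # cosine of the angle, (n, n_classes)
        return np.arccos(cos)                             # angles in radians; smaller = closer

    # labels = spectral_angle(pixels, references).argmin(axis=1)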

1. Presaved SAM Classification results are in the file CAN_SAM.IMG. Examine
these or perform your own classification using the CLASSES.ROI Regions of
Interest described above, which will be listed in the Endmember Collection
dialog. To do your own classification, select Algorithm->Spectral Angle
Mapper from the menu bar. Click Apply to start the classification.
2. If performing your own classification, enter an output filename for the SAM
classification image in the Endmember Collection dialog. Also enter the
filename CAN_RUL.IMG as the rule output image name and click OK at the
bottom of the dialog to start the classification.

Use image linking and dynamic overlays to compare this classification to the color
composite image and previous unsupervised and supervised classifications.

Rule Images

ENVI can create rule images that show, for each pixel, the values used to produce the
classified image. These optional images allow users to evaluate classification results
and to reclassify if desired based on thresholds. They are grayscale images, one for
each ROI or endmember spectrum used in the classification.

The rule images represent different things for different types of classifications, for
example:

Classification Method     Rule Image Values
Parallelepiped            Number of bands that satisfied the parallelepiped criteria
Minimum Distance          Sum of the distances from the class means
Maximum Likelihood        Probability of the pixel belonging to the class
Mahalanobis Distance      Distances from the class means
Binary Encoding           Binary match in percent
Spectral Angle Mapper     Spectral angle in radians (smaller angles indicate a closer
                          match to the reference spectrum)

1. For the SAM classification above, load the classified image and the rule
images into separate displays and compare using dynamic overlays. Invert
the SAM rule images using Functions->Display Enhancements->Color
Mapping->ENVI Color Tables and dragging the "Stretch Bottom" and
"Stretch Top" sliders to opposite ends of the dialog. Areas with low spectral
angles (more similar spectra) should appear bright.
2. Create classification and rule images using the other methods. Use dynamic
overlays and Cursor Location/Value to determine if better thresholds could
be used to obtain more spatially coherent classifications.
3. If you find better thresholds, select Classification->Post Classification->Rule
Classifier and enter the appropriate threshold to create a new classified
image. Compare your new classification to the previous classifications.

Figure 5: Rule Image for Canon City Landsat TM, Spectral Angle Mapper
Classification. Stretched to show best matches (low spectral angles) as bright
pixels.

Post Classification Processing

Classified images require post-processing to evaluate classification accuracy and to
generalize classes for export to image-maps and vector GIS. ENVI provides a series
of tools to satisfy these requirements.

Class Statistics

This function allows you to extract statistics from the image used to produce the
classification. Separate statistics consisting of basic statistics (minimum value,
maximum value, mean, standard deviation, and eigenvalue), histograms, and average
spectra are calculated for each class selected.
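
A minimal Python sketch of this kind of per-class summary (not ENVI code, and omitting
histograms and eigenvalues) might look like the following; the function name and return
format are assumptions.

    # Basic per-class band statistics for a classified image.
    import numpy as np

    def class_statistics(pixels, labels):
        # pixels: (n, bands) reflectance values; labels: (n,) class numbers
        stats = {}
        for cls in np.unique(labels):
            sel = pixels[labels == cls]
            stats[cls] = {"min": sel.min(axis=0), "max": sel.max(axis=0),
                          "mean": sel.mean(axis=0), "std": sel.std(axis=0)}
        return stats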

1. Choose Classification->Post Classification->Class Statistics to start the
process and select the Classification Image CAN_PCLS.IMG and click OK.
2. Next select the image used to produce the classification CAN_TMR.IMG and
click OK. Finally, choose the statistics to be calculated, enter the output
filenames and click OK at the bottom of the Compute Statistics Parameters
dialog.

Figure 6: Classification Statistics Report for Region 3 of the Canon City Landsat TM
data.

Confusion Matrix

ENVI's confusion matrix function allows comparison of two classified images (the
classification and the "truth" image), or a classified image and ROIs. The truth
image can be another classified image, or an image created from actual ground
truth measurements.
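
The core computation is a simple cross-tabulation; the Python sketch below (not ENVI
code) builds the matrix from two small integer class maps and derives the overall
accuracy, with random data standing in for real classifications.

    # Confusion matrix: rows = truth classes, columns = predicted classes.
    import numpy as np

    def confusion_matrix(truth, predicted, n_classes):
        cm = np.zeros((n_classes, n_classes), dtype=int)
        np.add.at(cm, (truth.ravel(), predicted.ravel()), 1)
        return cm

    truth = np.random.randint(0, 3, (50, 50))              # placeholder ground truth map
    pred = np.random.randint(0, 3, (50, 50))                # placeholder classified map
    cm = confusion_matrix(truth, pred, 3)
    overall_accuracy = np.trace(cm) / cm.sum()              # fraction of agreeing pixels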

1. Select Classification->Post Classification->Confusion Matrix->Method,
where Method is either Using Ground Truth Image, or Using Ground Truth
ROIs.
2. For the Ground Truth Image Method, compare the Parallelepiped and SAM
methods by entering the two filenames CAN_SAM.IMG and CAN_PCLS.IMG
and clicking OK (for the purposes of this exercise, we are using the
CAN_PCLS.IMG file as the ground truth). Use the Match Classes Parameters
dialog to pair corresponding classes from the two images and click OK.
Examine the confusion matrix and confusion images. Determine sources of
error by comparing the classified image to the original reflectance image
using dynamic overlays, spectral profiles, and Cursor Location/Value.

Figure 7: Confusion Matrix (percent) using ROIs as Ground Truth.

1. For the ROI method, select the classified image to be evaluated. Match the
image classes to the ROIs loaded from CLASSES.ROI, and click OK to
calculate the confusion matrix. Examine the confusion matrix and determine
sources of error by comparing the classified image to the ROIs in the original
reflectance image using spectral profiles, and Cursor Location/Value.
Figure 8: Confusion Matrix (pixel count) using ROIs as Ground Truth.

Clump and Sieve

Clump and Sieve provide means for generalizing classification images. Sieve is
usually run first to remove the isolated pixels based on a size (number of pixels)
threshold, and then clump is run to add spatial coherency to existing classes by
combining adjacent similar classified areas. Compare the precalculated results in
the files CAN_SV.IMG (sieve) and CAN_CLMP.IMG (clump of the sieve result) to the
classified image CAN_PCLS.IMG (parallelepiped classification) or calculate your own
images and compare to one of the classifications.
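
The sketch below shows one way to express the two operations in Python with
scipy.ndimage (not ENVI's implementation); the size threshold, structuring element,
and use of 0 for "unclassified" are assumptions for illustration.

    # Sieve: drop connected groups of a class smaller than min_pixels.
    # Clump: morphological closing per class to fill small gaps.
    import numpy as np
    from scipy import ndimage

    def sieve(class_image, min_pixels=4):
        out = class_image.copy()
        for cls in np.unique(class_image):
            mask = class_image == cls
            labeled, n = ndimage.label(mask)
            sizes = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
            small = np.isin(labeled, np.where(sizes < min_pixels)[0] + 1)
            out[small] = 0                                # 0 = unclassified
        return out

    def clump(class_image, size=3):
        out = class_image.copy()
        structure = np.ones((size, size), dtype=bool)
        for cls in np.unique(class_image):
            if cls == 0:
                continue
            closed = ndimage.binary_closing(class_image == cls, structure=structure)
            out[closed & (out == 0)] = cls                # only fill unclassified pixels
        return out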

1. To execute the function, select Classification->Post Classification->Sieve
Classes, choose one of the classified images, enter an output filename and
click OK. Use the output of the sieve operation as the input for clumping.
Choose Classification->Post Classification->Clump Classes, enter an output
filename, and click OK.
2. Compare the three images and repeat the process if necessary to produce a
generalized classification image.

Combine Classes

The Combine Classes function provides an alternative method for classification
generalization. Similar classes can be combined to form one or more generalized
classes.

1. Examine the precomputed image CAN_COMB.IMG or perform your own
combinations as described below.
2. Select Classification->Post Classification->Combine Classes and choose
Region 3 to combine with Unclassified, click on Add Combination, and then
OK in the Combine Classes Parameters dialog. Enter an output filename and
click OK.
3. Compare the combined class image to the classified images and the
generalized classification image using image linking and dynamic overlays.

Edit Class Colors

When a classification image is displayed, you can change the color associated with
a specific class by editing the class colors.

1. Select Functions->Display Enhancements->Color Tables->Class Color
Mapping in the Main Image Display window.
2. Click on one of the class names in the Class Color Mapping dialog and
change the color by dragging the appropriate color sliders or entering the
desired data values. Changes are applied to the classified image
immediately. To make the changes permanent, select File->Save Changes in
the dialog.

Overlay Classes

Overlay classes allows the user to place the key elements of a classified image as a
color overlay on a grayscale or RGB image.

1. Examine the precalculated image CAN_OVR.IMG or create your own
overlay(s) from the CAN_TMR.IMG reflectance image and one of the
classified images above.
2. Select Classification->Post Classification->Overlay Classes from the ENVI
Main menu and use CAN_COMB.IMG as the classification input and
CAN_TMR.IMG band 3 as the RGB image layers (the same band for RGB).
Click OK and then choose Region #1 and Region #2 to overlay on the
image. Enter an output name and click OK to complete the overlay.
3. Load the overlay image into an image display and compare with the
classified image and the reflectance image using linking and dynamic
overlays.

Classes to Vector Layers

Load the precalculated vector layers onto the grayscale reflectance image for
comparison to raster classified images, or execute the function and convert one of
the classification images to vector layers.

1. To load the precalculated vector layers produced from the clumped
classification image above, select Functions->Overlays->Vector Layers in
the Main Image Display with the clumped image CAN_CLMP.IMG displayed.
2. Choose File->Open Vector File->ENVI Vector File in the Display Vector
Parameters dialog and choose the files CAN_V1.EVF and CAN_V2.EVF. Click
on Apply to load the vector layers onto the image display.
3. The vectors derived from the classification polygons will outline the raster
classified pixels.
4. To complete your own Classification to Vector conversion, select
Classification->Post Classification->Classes to Vector Layers and choose the
generalized image CAN_CLMP.IMG as the Raster to Vector input image.
5. Select Region #1 and Region #2, enter the root name "CANRTV" and click
OK to begin the conversion.
6. Select the two regions in the Available Vectors List by clicking in the
appropriate check boxes and click on Load Selected at the bottom of the
dialog.
7. Choose the correct display number in the Load Vector dialog for the
grayscale reflectance image and the vector layers will be loaded into the
Display Vector Parameters Dialog. Click Apply to display the vectors over the
image. Use Edit Layer to change the colors and fill of the vector layers to
make them more visible.
Classification Keys Using Annotation

ENVI provides annotation tools to put classification keys on images and in map
layouts. The classification keys are automatically generated.

1. Choose Functions->Overlays->Annotation in the Main Image display window
for either one of the classified images, or for the image with the vector
overlay.
2. Select Object->Map Key to start annotating the image. You can edit the key
characteristics by clicking on the Edit Map Key Items button in the
annotation dialog and changing the desired characteristics. Click and drag
the map key using the left mouse button in the display to place the key.
Click in the display with the right mouse button to finalize the position of the
key. For more information about image annotation, please see the ENVI
User's Guide.

Figure 9: Classification image with classification key.

Copyright © 1993 - 1998, BSCLLC, All rights reserved. ENVI is a registered
trademark of Better Solutions Consulting LLC, Lafayette, Colorado. Web:
http://www.envi-sw.com, Email: envi@bscllc.com. (Last Update, December 10, 1997)
