
Mining Science and Technology 19 (2009) 0835–0841
www.elsevier.com/locate/jcumt

Feature extraction for target identification and image classification of OMIS hyperspectral image

DU Pei-jun 1, TAN Kun 1, SU Hong-jun 2

1 Department of Remote Sensing and Geographical Information Science, China University of Mining & Technology, Xuzhou, Jiangsu 221008, China
2 Key Laboratory for Virtual Geographic Environment of Ministry of Education, Nanjing Normal University, Nanjing, Jiangsu 210046, China

Abstract: In order to combine feature extraction operations with specific hyperspectral remote sensing information processing objectives, two aspects of feature extraction were explored. Based on clustering and the decision tree algorithm, the spectral absorption index (SAI), continuum removal and derivative spectral analysis were employed to discover the characteristic spectral features of different targets, and decision trees for identifying a specific class and for discriminating different classes were generated. By combining the support vector machine (SVM) classifier with different feature extraction strategies, including principal component analysis (PCA), minimum noise fraction (MNF), grouping PCA and derivative spectral analysis, the performance of these feature extraction approaches in classification was evaluated. The results show that feature extraction by PCA and by derivative spectral analysis is effective for OMIS (Operational Modular Imaging Spectrometer) image classification using SVM, and that SVM outperforms the traditional SAM and MLC classifiers on OMIS data.
Keywords: hyperspectral remote sensing; feature extraction; decision tree; SVM; OMIS

Received 01 March 2009; accepted 22 April 2009
Projects 40401038 and 40871195 supported by the National Natural Science Foundation of China, NCET-06-0476 by the Program for New Century Excellent Talents in University, and 20070290516 by the Specialized Research Fund for the Doctoral Program of Higher Education
Corresponding author. Tel: +86-516-83591316; E-mail address: dupjrs@cumt.edu.cn

1 Introduction

Hyperspectral remote sensing information can be used for fine identification, classification and discrimination tasks that cannot be solved with multispectral remote sensing information, for example fine classification, endmember abundance estimation, state diagnosis and other sophisticated applications[1–5]. Hyperspectral remote sensing has such characteristics as vast data volume, large waveband numbers, narrow spectral intervals, strong correlation among adjacent wavebands, information redundancy and the combination of image and spectrum, so some new problems are encountered, for example the treatment of high-dimensional data, the requirement for high computation capacity and the need for a priori knowledge[6–7]. Feature extraction, as one of the most significant operations on hyperspectral remote sensing data, has been investigated from two aspects: 1) effective feature extraction for dimensionality reduction and classification, and 2) reliable spectral signature discovery for target identification. It is worth noting that specific feature extraction algorithms should be matched to the hyperspectral remote sensing data and the application objectives.

Most previous experiments have been conducted on AVIRIS, HyMap, Hyperion and other hyperspectral data sources. However, OMIS, the aerial hyperspectral sensor developed by the Shanghai Institute of Technical Physics of the Chinese Academy of Sciences and one of the most important breakthroughs of China's earth observation technologies, has not received enough attention so far. Besides, it is promising to employ new information processing approaches for feature extraction from hyperspectral remote sensing data. Therefore, in this paper some novel approaches, including data mining (specifically, clustering and decision trees) and SVM, are tested for extracting features from OMIS hyperspectral data for target identification and image classification.

2 Experimental dataset

The experimental data is the OMIS imagery captured over the Changping region of Beijing, centred at 40.174670°N, 116.254089°E. The OMIS sensor has 64 wavebands within the spectral interval from 0.46 μm to 1.1 μm. The experimental image consists of 512 rows and 512 columns. Fig. 1 is the RGB composite image with Band 36 (0.81 μm), Band 23 (0.68 μm) and Band 11 (0.56 μm) as the red, green and blue components respectively.

Fig. 1 False color composite of the original hyperspectral remote sensing image (labelled regions: water, cropland, bare soil)
3 Feature extraction for target identification based on data mining

This section focuses on spectral feature extraction for target identification based on data mining algorithms. For given spectra, acquired either by field spectral measurement with a spectrometer or from pixels of a hyperspectral remote sensing image, how to extract the significant or characteristic spectral features that can characterize the objects remains an important task for hyperspectral information processing and applications. Different methods to extract characteristic spectral features have been researched in the past[8–9]. Data mining (DM) algorithms, including clustering, association rules and decision trees, can provide powerful new tools for intelligent hyperspectral information processing and feature discovery.

3.1 Clustering

Before candidate spectral features are derived, clustering is used to categorize the processed spectra into groups for further processing. Hyperspectral image or spectral clustering aims to partition all pixels into different categories based on a similarity measure. Four spectral similarity indicators, namely spectral angle, spectral information divergence, distance and correlation coefficient, are commonly used in clustering. After comparing these four indexes, the spectral angle is adopted here as the similarity measure.
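As an illustration, the spectral angle adopted above can be computed as in the following minimal sketch, which assumes each spectrum is a NumPy vector of band values; it is an illustration of the measure, not the implementation used in our experiments.

```python
import numpy as np

def spectral_angle(s1: np.ndarray, s2: np.ndarray) -> float:
    """Spectral angle (in radians) between two spectra treated as vectors.

    A smaller angle means more similar spectral shapes; the measure is
    insensitive to an overall brightness (illumination) scaling.
    """
    cos_theta = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```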

3.2 Continuum removal

Due to the complex structure of ground objects, the spectrum of a pixel is usually composed of multiple components, and it is difficult to extract characteristic spectral features from the original spectral curve directly. Pre-processing of the spectral curves is therefore used to reinforce characteristic reflectance and absorption features. Continuum removal is a very effective algorithm for spectral curve processing[10]. The continuum of a spectral curve is its upper convex hull, and the value after continuum removal is the ratio of the actual value to the corresponding value on the continuum. By using continuum removal, the reflectance and absorption features can be reinforced. Comparing the spectral features extracted from the original data with those extracted from the continuum-removed data shows that continuum removal is effective for extracting significant spectral features.
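A minimal sketch of continuum removal is given below, assuming a single spectrum sampled at ascending wavelengths: the upper convex hull is built with a standard monotone-chain scan, interpolated to every band, and divided out. It illustrates the ratio described above rather than reproducing our processing chain.

```python
import numpy as np

def continuum_removal(wavelengths: np.ndarray, reflectance: np.ndarray) -> np.ndarray:
    """Divide a reflectance spectrum by its continuum (upper convex hull)."""
    n = len(wavelengths)
    hull = [0]
    for i in range(1, n):
        # Pop the last hull point while it falls on or below the chord
        # from the previous hull point to the current point (upper hull).
        while len(hull) >= 2:
            x1, y1 = wavelengths[hull[-2]], reflectance[hull[-2]]
            x2, y2 = wavelengths[hull[-1]], reflectance[hull[-1]]
            if (x2 - x1) * (reflectance[i] - y1) - (y2 - y1) * (wavelengths[i] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # Interpolate the hull linearly to every band, then take the ratio.
    continuum = np.interp(wavelengths, wavelengths[hull], reflectance[hull])
    return reflectance / continuum
```

After the division, band values lie in (0, 1]: points on the continuum map to 1 while absorption troughs are deepened, which is exactly the reinforcement of absorption features exploited in the next subsection.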
3.3 Spectral absorption feature

A spectral curve picked from hyperspectral data can illustrate the reflectance or absorption features of ground objects, because most objects have typical spectral features that are highly correlated with their chemical components[11]. Spectral absorption features can be extracted from the spectral curve directly or from derivative spectra indirectly, and some parameters, including width, height, slope and symmetry, can be further derived to calculate a comprehensive index, the SAI, for every spectral absorption location. The SAI can be obtained by Eq. (1)[11]:

SAI = (W·R_s + (1 − W)·R_e) / R_m    (1)

where W = (λ_e − λ_m)/(λ_e − λ_s); R_s and λ_s are the reflectance and wavelength of the absorption shoulder to the left of the absorption trough; R_e and λ_e are the reflectance and wavelength of the absorption shoulder to the right of the trough; and R_m and λ_m are the reflectance and wavelength of the absorption trough itself. The wavelength of the absorption trough band is called the spectral absorption position (SAP).

Research on spectroscopy has shown that SAI can essentially represent the variation characteristics of spectral absorption features. Based on our experiments, SAI is suitable for identifying and discriminating objects. The SAP and SAI indexes that are suitable for constructing the decision tree can be derived by selecting the most distinct features. For example, the features that are effective in describing different classes are listed in Table 1.

Table 1 Spectral features derived from the spectral curves of different objects

Objects | SAI | SAP (nm)
Crop I | 0.5389 | 690
Crop II | 0.4520 | 690
Grassland | 0.6701 | 530
Built-up land | 0.7417 | 530
Bare soil | 0.9029 | 630
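The sketch below shows one way to obtain SAP and SAI from a continuum-removed spectrum following Eq. (1). The shoulder-finding convention (global minimum as the trough, maxima on either side as the shoulders) is an illustrative assumption, not our exact procedure.

```python
import numpy as np

def sap_and_sai(wavelengths: np.ndarray, cr_reflectance: np.ndarray):
    """Return (SAP, SAI) for the deepest absorption feature, per Eq. (1)."""
    m = int(np.argmin(cr_reflectance))            # absorption trough index
    s = int(np.argmax(cr_reflectance[:m + 1]))    # left shoulder index
    e = m + int(np.argmax(cr_reflectance[m:]))    # right shoulder index
    lam_s, lam_m, lam_e = wavelengths[s], wavelengths[m], wavelengths[e]
    r_s, r_m, r_e = cr_reflectance[s], cr_reflectance[m], cr_reflectance[e]
    w = (lam_e - lam_m) / (lam_e - lam_s)         # weight W from Eq. (1)
    sai = (w * r_s + (1 - w) * r_e) / r_m
    return float(lam_m), float(sai)               # SAP in nm, and SAI
```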
3.4 Decision tree algorithm

Decision trees are widely used for identification and classification[12]. Based on the above features, two types of decision trees are generated. The first type is used to identify a specific target class. The second type is the multi-class decision tree, which determines the class of an unknown spectrum based on its spectral features.

3.4.1 Decision tree to identify a specific target

Based on the experiments, SAI indexes extracted from the spectral curve after continuum removal are used to construct the decision tree that identifies a specific object. In Fig. 2, T0 is the discrimination rule, and it differs when various spectral curves are used. The following condition is for the third-order derivative spectrum: T0: (SAP = 690±10 and SAI = 0.5±0.05). T means 'true' and F means 'false'.

Fig. 2 Decision tree for target identification (test data is passed to rule T0; the T branch yields Crop, the F branch Others)

In this decision tree, the characteristic spectra of the specific objects should be known a priori, and then the spectral features in accordance with the rule can be found.

3.4.2 Multi-class decision tree

The basic idea of the multi-class decision tree is to identify the class to which an unknown spectrum belongs. The multi-class decision tree is shown in Fig. 3.

Fig. 3 Multi-class decision tree for target identification

In Fig. 3, T0, T1, T2 and T3 are discrimination rules, and they may differ when various spectral curves are used. The following discrimination conditions are based on third-order derivative spectra:
T0: (SAP = 690±10 and SAI = 0.5±0.05);
T1: (SAP = 530±10 and SAI = 0.65±0.05);
T2: (SAP = 530±10 and SAI = 0.74±0.05);
T3: (SAP = 630±10 and SAI = 0.90±0.05).
The rules are usually used to construct decision trees in IF-THEN form, where the IF part holds the decision condition and the THEN clause gives the identified class. The extracted rules can then be used to identify and discriminate object classes based on their spectra.
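Rendered in IF-THEN form, rules T0 to T3 amount to a cascade of interval tests on the (SAP, SAI) pair. The sketch below is an illustrative Python rendering of that cascade; the class associated with each rule follows Table 1, and the fall-through branch corresponds to the 'Others' leaf of Fig. 2.

```python
# Rules T0-T3 from Section 3.4.2: (class, SAP centre in nm, SAI centre).
RULES = [
    ("Crop",          690, 0.50),  # T0
    ("Grassland",     530, 0.65),  # T1
    ("Built-up land", 530, 0.74),  # T2
    ("Bare soil",     630, 0.90),  # T3
]

def classify_spectrum(sap: float, sai: float) -> str:
    for label, sap_c, sai_c in RULES:
        # IF part: SAP within +/-10 nm and SAI within +/-0.05 of the centre
        if abs(sap - sap_c) <= 10 and abs(sai - sai_c) <= 0.05:
            return label              # THEN part: the identified class
    return "Others"                   # no rule fires
```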
3.5 Experiments

In this experiment, the spectral absorption parameters are extracted from the continuum-removed spectral curves of Crop I, Crop II, Grassland, Built-up land and Bare soil in the OMIS imagery. After the candidate spectral features are extracted, the next step is to recognize the characteristic spectral features that are effective for identifying the specific objects and for discriminating each object class from the others.

Table 2 shows the spectral feature data extracted from the spectral curves after continuum removal. From Table 2 it is evident that features within the same group (Crop I and Crop II) are close in value, whereas they differ remarkably from those of the other groups. The spectra belonging to the same group (Crop I and Crop II) are then compared and analyzed. In this way, common characteristic features that distinguish different objects can be identified to indicate the specific objects. Table 3 lists the features used to identify Crop.

Table 2 Results of spectral curve features extracted from the reflectance spectrum after continuum removal

Feature | Crop I | Crop II | Grassland | Built-up land | Bare soil
Wavelength (nm) | 690 | 690 | 530 | 530 | 630
Reflectance | 1 | 1 | 1 | 1 | 1
Width (nm) | 130 | 130 | 150 | 70 | 70
Depth | 0.5977 | 0.6455 | 0.6058 | 0.1882 | 0.1812
Slope | 1.2324 | 1.1125 | 1.3828 | –1.1651 | 1.2269
Symmetry | 0.3846 | 0.3846 | 0.2667 | 0.1429 | 0.2857
SAI | 0.5389 | 0.4520 | 0.6701 | 0.7417 | 0.9029
Num | 13 | 12 | 11 | 15 | 14

Note: Crop I is the spectrum from (X:282, Y:188), Crop II from (X:244, Y:150), Grassland from (X:314, Y:46), Built-up land from (X:65, Y:448) and Bare soil from (X:443, Y:352), where X is the column and Y the row of the imagery.

Table 3 Valid features to recognize crops

Valid features | Crop I | Crop II
Wavelength (nm) | 690 | 690
Width | 130 | 130
Symmetry | 0.3846 | 0.3846
SAI | 0.5389 | 0.4520

If we want to distinguish some objects from others, the characteristic features that can stand for each object should be extracted and employed. The spectra of different groups can be compared; the common features are then ignored, because they are not effective in distinguishing different targets, while the features that do separate one group from the others are retained. These effective features are listed in Table 4.

Table 4 Valid features to distinguish grassland, built-up land and bare soil from crops

Valid features | Crop I | Crop II | Grassland | Built-up land | Bare soil
Wavelength (nm) | 690 | 690 | 530 | 530 | 630
Width | 130 | 130 | 150 | 70 | 70
Depth | 0.5977 | 0.6455 | | 0.1882 | 0.1812
Slope | 1.2324 | 1.1125 | | –1.1651 |
Symmetry | 0.3846 | 0.3846 | 0.2667 | 0.2667 | 0.2857
SAI | 0.5389 | 0.4520 | 0.7417 | 0.7417 | 0.9029

Through the above operations, the characteristic features for recognizing a specific class and for discriminating different classes can be extracted. Based on those features, the decision tree identifying a specific class can be created and then adopted in further image processing and target identification. This approach shows good performance for fast target identification.
4 Feature extraction for SVM classification of OMIS image

4.1 SVM classifier

SVM, one of the most effective statistical learning algorithms, uses the structural risk minimization (SRM) criterion rather than the empirical risk minimization (ERM) criterion used in other machine learning methods[13]. It helps mitigate the difficulties of hyperspectral classification, such as small sample sizes, high dimensionality, poor generalization and uncertainty effects, so SVM has been applied to hyperspectral remote sensing image classification in recent years[14–18]. Although it is generally concluded that SVM performs better than other conventional classifiers and is suitable for high-dimensional features (for example, the direct use of all bands of a hyperspectral image), the time consumption and required computation capacity are still challenging; therefore feature extraction is still meaningful on many occasions. In this section, feature extraction is employed for hyperspectral remote sensing imagery classification using SVM, and the effectiveness of different feature extraction strategies is evaluated. To our knowledge, there have been no previous experiments with SVM for OMIS image classification.

Classification by SVM is based on fitting an optimal separating hyperplane between classes. A hyperplane in feature space is defined by the equation w·x + b = 0, where x is a point lying on the hyperplane, w is the normal to the hyperplane and b is the bias. A separating hyperplane can thus be defined for two classes as w·x_i + b ≥ 1 (for the class y_i = +1) and w·x_i + b ≤ −1 (for the class y_i = −1). These two conditions may be combined to give Eq. (2)[15–17]:

y_i(w·x_i + b) ≥ 1    (2)

The geometrical margin between the two classes is then 2/‖w‖, named the margin. The concept of the margin is central to the SVM approach, since it is a measure of the classifier's generalization capability: the larger the margin, the higher the expected generalization.

Accordingly, the optimal hyperplane can be determined as the solution of the following convex quadratic programming problem:

min_{w,b} (1/2)‖w‖²
subject to: y_i[(w·x_i) + b] ≥ 1, i = 1, 2, …, N    (3)

This classical linearly constrained optimization problem can be translated, using a Lagrangian formulation, into the following dual problem:

maximize: Σ_{i=1}^{N} α_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j (x_i·x_j)
subject to: Σ_{i=1}^{N} α_i y_i = 0 and α_i ≥ 0, i = 1, 2, …, N    (4)

For the linearly non-separable case, a kernel function is introduced that satisfies the condition stated by Mercer's theorem, so that it corresponds to some type of inner product in the transformed (higher-dimensional) feature space:

K(x_i, x) = Φ(x_i)·Φ(x)    (5)

The dual problem can then be expressed as:

maximize: Σ_{i=1}^{N} α_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j K(x_i, x_j)
subject to: Σ_{i=1}^{N} α_i y_i = 0 and 0 ≤ α_i ≤ C, i = 1, 2, …, N    (6)

where C is the regularization parameter.

The final result is a discriminant function f(x), conveniently expressed as a function of the data in the original (lower-dimensional) feature space:

f(x) = sgn[(w·x) + b] = sgn[Σ_{i=1}^{N} α_i* y_i K(x_i, x) + b*]    (7)

Some popular kernel functions include: 1) the linear kernel, K(x_i, x) = x_i·x; 2) the polynomial kernel, K(x_i, x) = (x_i·x + 1)^d, where d is a constant; 3) the Gaussian radial basis function (RBF) kernel, K(x_i, x) = exp(−γ‖x − x_i‖²); and 4) the sigmoid kernel, K(x_i, x) = S(a(x_i·x) + t)[15–17].

In this paper, the RBF kernel function is selected, so C and γ are the required parameters of the SVM classifier.
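As a concrete sketch of this classification step, the fragment below trains an RBF-kernel SVM with scikit-learn. It is illustrative, not the software used in our experiments: the arrays are placeholders, and the width parameter σ reported in Tables 5-11 maps onto scikit-learn's gamma only loosely (under one common convention, γ = 1/(2σ²)).

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: rows are pixels, columns are extracted features
# (e.g. the first five principal components of the OMIS bands).
X_train = np.random.rand(200, 5)          # labelled training pixels
y_train = np.random.randint(0, 5, 200)    # labels for the five classes
X_image = np.random.rand(1000, 5)         # pixels to classify

# C and the kernel width are the two parameters tuned in the experiments.
clf = SVC(kernel="rbf", C=32, gamma=0.2)
clf.fit(X_train, y_train)
labels = clf.predict(X_image)             # one class label per pixel
```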
4.2 Feature extraction for SVM classification

Five classes are defined for the image: built-up land, water, bare soil, cropland and grassland. Different feature extraction approaches, including PCA, MNF and grouping PCA, are tested.

Different combination schemes of principal components are tried first. The classification accuracy using different numbers of components is shown in Table 5. Initially, accuracy increases with the number of components, but beyond a certain point the classification accuracy decreases as more components are used. The likely reason is that the noise contained in the later components is introduced when more components are used.

Table 5 Classification accuracy of SVM using different PCA components

Amount of components | Total accuracy (%) | Kappa | C | σ
1 | 40.57 | 0.299 | 32 | 0.5
2 | 65.52 | 0.540 | 32 | 0.2
3 | 67.39 | 0.576 | 32 | 32
4 | 67.63 | 0.581 | 32 | 0.2
5 | 68.83 | 0.591 | 32 | 0.2
10 | 66.92 | 0.567 | 32 | 0.1
15 | 66.54 | 0.561 | 16 | 0.06
20 | 66.35 | 0.559 | 8 | 0.05
30 | 66.05 | 0.550 | 8 | 0.0333
40 | 65.95 | 0.550 | 8 | 0.03
50 | 65.56 | 0.541 | 4 | 0.02
60 | 64.30 | 0.530 | 4 | 0.02
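A sketch of the PCA-then-SVM strategy behind Table 5 follows, using the best-performing setting of five components. The pipeline, the sample counts and the parameter values are illustrative assumptions; only the component count and the C value echo Table 5.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Flatten the 512 x 512 x 64 OMIS cube to (n_pixels, n_bands).
pixels = np.random.rand(512 * 512, 64)        # placeholder image data
labels = np.random.randint(0, 5, 512 * 512)   # placeholder ground truth

model = make_pipeline(
    PCA(n_components=5),                  # keep the first five components
    SVC(kernel="rbf", C=32, gamma=0.2),   # settings echoing Table 5, row 5
)
model.fit(pixels[:2000], labels[:2000])   # train on a labelled subset
classified = model.predict(pixels)        # classify every pixel
```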
In order to compare the performance of the SVM classifier with the traditional spectral angle mapper (SAM) and maximum likelihood classifier (MLC) on OMIS data, the classification accuracies of SAM and MLC are listed in Table 6.

Table 6 Classification accuracy of SAM and MLC using PCA components

Amount of components | SAM total accuracy (%) | SAM Kappa | MLC total accuracy (%) | MLC Kappa
2 | 17.35 | 0.008 | 15.36 | 0.009
3 | 17.57 | 0.010 | 16.46 | 0.001
4 | 17.44 | 0.008 | 15.25 | 0.013
5 | 17.45 | 0.008 | 16.56 | 0.003
10 | 17.60 | 0.010 | 15.52 | 0.009
15 | 17.62 | 0.010 | 15.16 | 0.014
20 | 17.61 | 0.010 | 14.95 | 0.014
30 | 17.64 | 0.011 | 15.76 | 0.006
40 | 17.66 | 0.011 | 16.35 | 0.001
50 | 17.70 | 0.011 | 16.42 | 0.001
60 | 17.69 | 0.011 | 16.37 | 0.001

The results show that SVM outperforms MLC and SAM for OMIS image classification; in particular, the classification accuracy of SVM is much higher than that of SAM and MLC when principal components are adopted. They also show that the first 5 components already yield high accuracy. Therefore, the SVM classifier using the first five principal components extracted from the original data is effective for OMIS hyperspectral image classification.

When MNF is used for feature extraction and dimensionality reduction, the same component combinations as for PCA are used to test the three classifiers SVM, SAM and MLC. The classification accuracy indicators are shown in Table 7 and Table 8.

Table 7 Classification accuracy of SVM using MNF components

Amount of components | Total accuracy (%) | Kappa | C | σ
1 | 43.56 | 0.316 | 32 | 0.5
2 | 44.78 | 0.319 | 32 | 0.2
3 | 44.89 | 0.330 | 32 | 32
4 | 45.34 | 0.334 | 32 | 0.2
5 | 45.47 | 0.350 | 32 | 0.2
10 | 55.69 | 0.411 | 32 | 0.1
15 | 56.53 | 0.424 | 16 | 0.06
20 | 57.68 | 0.440 | 8 | 0.05
30 | 56.21 | 0.414 | 8 | 0.03
40 | 56.11 | 0.411 | 8 | 0.03
50 | 55.98 | 0.409 | 4 | 0.02
60 | 55.56 | 0.402 | 4 | 0.02

Table 8 Classification accuracy of SAM and MLC using MNF components

Amount of components | SAM total accuracy (%) | SAM Kappa | MLC total accuracy (%) | MLC Kappa
2 | 15.71 | 0.008 | 19.10 | 0.033
3 | 16.25 | 0.002 | 21.26 | 0.058
4 | 17.39 | 0.008 | 21.29 | 0.058
5 | 18.58 | 0.020 | 18.99 | 0.030
10 | 18.97 | 0.020 | 16.33 | 0.002
15 | 20.03 | 0.029 | 14.79 | 0.019
20 | 18.41 | 0.001 | 14.26 | 0.026
30 | 18.32 | 0.003 | 15.07 | 0.015
40 | 18.22 | 0.006 | 16.31 | 0.001
50 | 18.11 | 0.007 | 16.28 | 0.001
60 | 17.49 | 0.007 | 16.24 | 0.001

It can be concluded that the accuracy of the SVM classifier increases with the number of MNF components, reaches its maximum when 20 components are employed, and then decreases as more components are added. Compared with PCA, MNF is less effective for OMIS image feature extraction and classification.

Apart from applying the PCA and MNF transformations to the entire dataset, grouping PCA is also tested. In grouping PCA, PCA is applied separately to groups of similar bands of the original data, formed by subspace partition, and the first component of each group is selected for classification. Correlation coefficients among adjacent bands are used as the criterion of the subspace partition. Two grouping schemes are used.
In the first scheme, five groups are created: bands 1–11, 12–22, 23–38, 39–53 and 54–64. In the second scheme, ten groups are generated: bands 1–7, 8–11, 12–19, 20–22, 23–34, 35–38, 39–47, 48–53, 54–60 and 61–64.

When grouping PCA is used, PCA is conducted on each group and the first component of that group is selected to generate the data sets for classification, so 5 components are extracted for the first scheme and 10 for the second. Table 9 gives the classification accuracy of grouping PCA.

Table 9 Classification accuracy of SVM with grouping PCA

Grouping PCA | Total accuracy (%) | Kappa | C | σ
PC1 of five groups | 65.96 | 0.558 | 32 | 0.2
PC1 of ten groups | 66.16 | 0.561 | 32 | 0.1
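The grouping PCA scheme can be sketched as follows for the five-group partition: PCA is fitted inside each band group and only its first component is kept, yielding one feature per group. The band ranges follow the subspace partition given above (1-based, inclusive); the helper itself is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

# Subspace partition of the 64 OMIS bands (first scheme, 1-based, inclusive).
GROUPS = [(1, 11), (12, 22), (23, 38), (39, 53), (54, 64)]

def grouping_pca(pixels: np.ndarray) -> np.ndarray:
    """Map (n_pixels, 64) band vectors to (n_pixels, 5) group features."""
    features = []
    for start, end in GROUPS:
        sub = pixels[:, start - 1:end]                # bands of one group
        pc1 = PCA(n_components=1).fit_transform(sub)  # first component only
        features.append(pc1)
    return np.hstack(features)
```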
As a comparison, the band with the maximum information content, measured by the maximum variance within each group, is selected to generate other data sets for classification. Table 10 gives the accuracy of this grouping band selection. It can be seen that grouping PCA is also an effective means of feature extraction and classification, although its accuracy is a bit lower than that of overall PCA.

Table 10 Classification accuracy of SVM with grouping band selection

Grouping selection | Total accuracy (%) | Kappa | C | σ
Five groups | 61.72 | 0.502 | 32 | 0.05
Ten groups | 64.71 | 0.541 | 32 | 0.11
Derivative spectral analysis can enhance some intrinsic spectral features for target identification and classification. For the OMIS hyperspectral remote sensing imagery with 64 bands, 62 new derivative spectral spaces (images) are extracted based on the principle of first-order derivative spectral analysis. Comparing the classification accuracy of the 64-dimensional original data with that of the 62-dimensional first-order derivative spectral data shows that a classifier using derivative spectra as inputs performs better than one using the original data. In addition, when the mixed dataset of derivative spectra and original data is used for classification, there is a slight further improvement in classification accuracy. Table 11 gives the classification accuracy of SVM using the original data, the derivative spectra and the mixed dataset, together with SAM and MLC using derivative spectra.

Table 11 Classification accuracy of SVM, SAM and MLC using derivative spectra

Input data | Total accuracy (%) | Kappa | C | σ
SVM using derivative spectra (62-dimensional) | 66.08 | 0.559 | 32 | 0.018
SVM using original data (64-dimensional) | 64.18 | 0.525 | 32 | 0.011
SVM using mixed data (126-dimensional) | 66.91 | 0.570 | 32 | 0.0079
SAM using derivative spectra (62-dimensional) | 21.33 | 0.033 | |
MLC using derivative spectra (62-dimensional) | 19.25 | 0.025 | |

In order to indicate the performance of the different feature extraction and combination schemes, some classification results are shown in Fig. 4; these results further illustrate the effectiveness of the feature extraction methods.

Fig. 4 Classification results of SVM using different feature combination schemes: (a) original data; (b) first five principal components; (c) first 20 MNF components; (d) grouping PCA (10 groups); (e) first-order derivative spectra; (f) mixed data set of original data and first-order derivative spectra
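The 62 derivative images obtained from 64 bands, used in Table 11 and Fig. 4(e), are consistent with a first derivative evaluated at the interior bands. The sketch below assumes a central-difference convention, which is one common choice rather than a detail fixed by the experiments above.

```python
import numpy as np

def first_derivative_spectra(cube: np.ndarray, wavelengths: np.ndarray) -> np.ndarray:
    """Central-difference first-derivative spectra.

    cube: (rows, cols, 64) OMIS image; wavelengths: (64,) band centres.
    Returns (rows, cols, 62): one derivative image per interior band,
    matching the 62 derivative spectral spaces reported above.
    """
    d_refl = cube[:, :, 2:] - cube[:, :, :-2]    # R(i+1) - R(i-1) per pixel
    d_lam = wavelengths[2:] - wavelengths[:-2]   # corresponding wavelength gaps
    return d_refl / d_lam                        # broadcasts over all pixels
```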
5 Conclusions

In this paper, feature extraction from OMIS hyperspectral remote sensing imagery is investigated from two aspects: characteristic spectral feature extraction for target identification, and dimensionality reduction for image classification. Some popular information processing methods, including decision trees, support vector machines and data mining, are tested on this specific data source. The conclusions are summarized as follows.

1) The decision tree is suitable for distinguishing one class from the others, or for identifying the label of an unknown spectrum based on the extracted spectral features. Some basic data mining algorithms, including clustering, association rules and decision trees, are effective for intelligent information processing.

2) SVM is suitable for high-dimensional image classification. Feature extraction for dimensionality reduction is still meaningful for SVM, since it reduces the required computation capacity. Using PCA, MNF and derivative spectral analysis for feature extraction, the SVM classifier is compared with the traditional SAM and MLC under different feature combination schemes. The results clearly reveal that SVM outperforms MLC and SAM for OMIS classification no matter which features are used, and that feature extraction by PCA is much better than by MNF, so the SVM classifier with the first five PCA components is recommended for OMIS imagery classification.

3) The SVM classifier using the results of derivative spectral analysis as input can obtain higher accuracy than that using the original data as input, which means derivative spectral analysis is suitable not only for target identification but also for image classification.

Through the aforementioned experiments and analysis, some effective feature extraction strategies for target identification and image classification of OMIS hyperspectral remote sensing data have been developed and proposed for further use. In the future, we will conduct more experiments on OMIS data processing and promote its applications in different fields.

Acknowledgements

The authors thank the National Natural Science Foundation of China (40401038, 40871195), the Program for New Century Excellent Talents in University (NCET-06-0476) and the Specialized Research Fund for the Doctoral Program of Higher Education (20070290516) for their support.

References

[1] Pu R, Gong P. Hyperspectral Remote Sensing and Its Application. Beijing: Higher Education Press, 2000. (In Chinese)
[2] Du Y, Chang C, Ren H. New hyperspectral discrimination measure for spectral characterization. Optical Engineering, 2004, 43(8): 1777–1788.
[3] Van der Meer F, Bakker W. Cross correlogram spectral matching: application to surface mineralogical mapping by using AVIRIS data from Cuprite, Nevada. Remote Sensing of Environment, 1997(6): 371–382.
[4] Chang C. Hyperspectral Imaging: Techniques for Spectral Detection and Classification. London: Kluwer Academic/Plenum Publishers, 2003.
[5] Shaw G, Burke H. Spectral imaging for remote sensing. Lincoln Laboratory Journal, 2003, 14(1): 3–28.
[6] Varshney P, Arora M. Advanced Image Processing Techniques for Remotely Sensed Hyperspectral Data. Berlin: Springer Press, 2004.
[7] Chang C, Liu W, Chang C. Discrimination and identification for subpixel targets in hyperspectral imagery. In: International Conference on Image Processing, 2004: 3339–3342.
[8] Garcia-Haro F J, Sommer S, Kemper T. A new tool for variable multiple endmember spectral mixture analysis (VMESMA). International Journal of Remote Sensing, 2005, 26(10): 2135–2162.
[9] Manolakis D, Siracusa C, Shaw G. Hyperspectral subpixel target detection using the linear mixing model. IEEE Transactions on Geoscience and Remote Sensing, 2001, 39(7): 1392–1409.
[10] Huang Z, Turner B, Dury S. Estimating foliage nitrogen concentration from HYMAP data using continuum removal analysis. Remote Sensing of Environment, 2004(93): 18–29.
[11] Wang J, Zhang B, Liu J. Hyperspectral data mining: toward target recognition and classification. Journal of Image and Graphics, 1999, 4(11): 957–964. (In Chinese)
[12] Pal M, Mather P M. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sensing of Environment, 2003(86): 554–565.
[13] Vapnik V N. Statistical Learning Theory. New York: Wiley, 1998.
[14] Melgani F, Bruzzone L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(8): 1778–1790.
[15] Bruzzone L, Chi M, Marconcini M. A novel transductive SVM for semisupervised classification of remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 2006, 44(11): 3363–3373.
[16] Camps-Valls G, Gomez-Chova L, Calpe-Maravilla J. Robust support vector method for hyperspectral data classification and knowledge discovery. IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(7): 1530–1542.
[17] Foody G, Mathur A. The use of small training sets containing mixed pixels for accurate hard image classification: training on mixed spectral responses for classification by a SVM. Remote Sensing of Environment, 2006(103): 179–189.
[18] Tso B, Mather P. Classification Methods for Remotely Sensed Data. London: Taylor & Francis, 2001.
