BDY513 Chapter 7.pptx-Combined

Chapter 7 discusses various applications of remote sensing, including land cover mapping, vegetation monitoring, forestry, agriculture, geology, disaster management, hydrology, and ocean monitoring. It highlights the use of the Normalized Difference Vegetation Index (NDVI) for assessing vegetation health and the importance of different satellite sensors in capturing relevant data. The chapter emphasizes the significance of remote sensing in understanding environmental changes and managing natural resources.

Chapter 7

Remote
Sensing
Applications
Dr. Mohammad Kamaruddin bin
Zainul Abidin
UiTM Pahang
Some remote sensing applications are:

1. Land cover and land use mapping


2. Vegetation cover monitoring
3. Forestry
4. Agriculture
5. Geology
6. Disaster Management
7. Hydrology
8. Ocean and coastal monitoring
• Land use and land cover mapping
Mapping is related to every activity that is covered by the satellite: the remotely sensed data that are captured are shown on maps. Through mapping we get to know about land cover, settlements, types of crop, soil, etc.
Land use and land cover are different. Land use is related to the various human activities for which the land is used, e.g. industrial, residential, recreational, whereas land cover is related to the physical state of the land, e.g. forest, grassland, minerals.
Land degradation - remote sensing targets include:
• Decrease in vegetation cover - monitored using NDVI
• Increase in soil exposure - mapped using image classification
(Figures: FCC (5,4,3) image comparison; multi-temporal FCC images and NDVI map)
Vegetation Cover Monitoring

Normalized Difference Vegetation Index (NDVI)


The NDVI is an index that provides a standardized
method of comparing vegetation greenness
between satellite images.

The formula to calculate NDVI is:

NDVI = (NIR - Red) / (NIR + Red)

where NIR is the reflectance in the near-infrared band and Red is the reflectance in the visible red band.
Cont'…
Index values can range from -1.0 to 1.0, but vegetation values typically range between 0.1 and 0.7.

Higher index values are associated with higher levels of healthy vegetation cover, whereas clouds and snow will cause index values near zero, making it appear that the vegetation is less green.
Cont'…
Bands from the following satellite sensors can be used to calculate NDVI:

Satellite      1st band (Red)           2nd band (NIR)
Landsat MSS    Band 5 (0.6-0.7 µm)      Band 6 (0.7-0.8 µm)
Landsat TM     Band 3 (0.63-0.69 µm)    Band 4 (0.76-0.90 µm)
Landsat ETM    Band 3 (0.63-0.69 µm)    Band 4 (0.75-0.90 µm)
NOAA AVHRR     Band 1 (0.58-0.68 µm)    Band 2 (0.72-1.0 µm)
Cont'…
• NDVI can be used as an indicator of relative biomass and greenness.
• If sufficient ground data is available, the NDVI can be used to calculate and predict:
  - primary production
  - dominant species
  - grazing impact
• It is also highly correlated with climatic variables, such as the El Niño Southern Oscillation (ENSO) and precipitation.
Exercise: Find the NDVI

Visible (Red)        NIR
120    220           50     80
100     15          250     90

Exercise: Answer

Applying NDVI = (NIR - Red) / (NIR + Red) to each pixel:

-0.41   -0.47
 0.43    0.71
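As a quick check, the exercise can be reproduced with a few lines of Python (a minimal sketch using numpy; the slides themselves show no code):

```python
import numpy as np

# Red and NIR grids from the exercise above
red = np.array([[120.0, 220.0],
                [100.0,  15.0]])
nir = np.array([[ 50.0,  80.0],
                [250.0,  90.0]])

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
# [[-0.41 -0.47]
#  [ 0.43  0.71]]
```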
Global Vegetation Cover - Use of NDVI
(Figures: global NDVI-derived vegetation cover maps)
• Forestry
https://www.youtube.com/watch?v=lTG-0brb98I
Remote sensing helps in monitoring the type of forest, its coverage and its exploitation, and the application is helpful in many other ways.

Deforestation: removal of forests, usually rapidly and over large areas.

Local Environmental Monitoring
(Figures: deforestation in multi-temporal images; mapping deforestation using an FCC image (4,3,2))
• Agriculture https://www.youtube.com/watch?v=581Kx8wzTMc
Remote sensing application in agriculture helps in the identification of crops, their yield, their management, crop condition, farming practices, etc.
• Geology
The application in geology is helpful in learning about the earth's crust. It provides knowledge about landform, structure and composition through physical, chemical and biological changes on and within the surface. In mineral/oil exploration and mining areas, we come to know about deposition, bedrock, minerals, soil, etc.
https://www.youtube.com/watch?v=oimEGpyTpxM

• Disaster Management
Operationally addressing various natural disasters like flood, drought, landslide, earthquake and forest fire. R&D studies on early warning systems and decision support tools.
• Hydrology
This application gives information about every process that is related to water, e.g. water quality, soil moisture, snow, floods, lakes, etc.

http://article.sapub.org/10.5923.j.ajgis.20160501.02.html
Local Environmental Monitoring
Water pollution - factors include:
• Thermal pollution
• Industrial effluent
• Nutrients
• Increased soil erosion
• Oceans and coastal monitoring
https://www.youtube.com/watch?v=sfH8ggKVP4g
Ocean applications of remote sensing help in monitoring the ocean and the activities related to it, like shipping, oil spills, storms, currents, etc.
Some remote sensing applications
1. Land cover and land use mapping
2. NDVI (https://www.youtube.com/watch?v=Kyzql8FriIY)
3. Forestry (https://www.youtube.com/watch?v=lTG-0brb98I)
4. Agriculture (https://www.youtube.com/watch?v=581Kx8wzTMc)
5. Geology
6. Disaster Management
(https://www.youtube.com/watch?v=oimEGpyTpxM)
7. Hydrology
8. Ocean and coastal monitoring
(https://www.youtube.com/watch?v=sfH8ggKVP4g)
Chapter 6
Remote Sensing
Technology and
Applications
Dr. Mohammad Kamaruddin bin
Zainul Abidin
UiTM Pahang
Learning outcomes:
• Describe remote sensing systems (with respect to wavelength regions and types of energy sources)
• Explain some of remote sensing applications
Common terminology
• Pixel – a picture element that is the smallest non-divisible element of a digital image

• Matrix/array – a grid of "n" lines and "n" columns that make up a digital image

• Brightness Value (BV) – the digital numbers stored for a given pixel

• Spatial Resolution (IFOV) – the area of the earth's surface shown in a single pixel
Common terminology
• Quantization – the range of brightness values in an image, measured in bits, e.g. 8-bit = 2^8 = 256 values (0-255)

• Swath width – the width of the Earth's surface shown in a single image

• Band – the portion of the electromagnetic spectrum the sensor is attuned to

*Land observation satellites record information in multiple bands that may be combined to create the equivalents of normal color or color-infrared (CIR) photographs.
Terminology: Resolution

Types of resolution:
• Spatial – distance on the ground that corresponds to a single pixel.
• Temporal – frequency with which an image of a specific area or object can be acquired.
• Spectral – wavelength intervals that the sensor can detect.
• Radiometric – number of data file values associated with a pixel for each band of data detected.
Spatial resolution

• The spatial resolution is the smallest unit of an image and is measured by a pixel (picture element).
• A spatial resolution of 10 m means that an individual pixel represents an area on the ground of 10 m by 10 m.
• Thus any objects which are smaller than 10 m will not be distinguishable in the image.
(Figure: the same scene at 10 m, 30 m and 80 m resolution, each displayed at 10 m pixel size)

• Low spatial resolution: covers a large area; objects are detected only roughly.
• High spatial resolution: covers a small area; more details are detected and individual objects can be seen.

(Figures: the same 1.6 km x 1.6 km area at four pixel sizes)
Pixel Size = 10 m: image width = 160 pixels, height = 160 pixels
Pixel Size = 20 m: image width = 80 pixels, height = 80 pixels
Pixel Size = 40 m: image width = 40 pixels, height = 40 pixels
Pixel Size = 80 m: image width = 20 pixels, height = 20 pixels
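As the captions above suggest, ground coverage is simply pixel size times pixel count. A minimal Python sketch (the function name is mine, for illustration):

```python
def ground_coverage_km(pixel_size_m, width_px, height_px):
    """Ground extent covered by an image: pixel size times pixel count."""
    return (pixel_size_m * width_px / 1000.0,
            pixel_size_m * height_px / 1000.0)

# Each of the four figures above covers the same 1.6 km x 1.6 km area
for pixel_size, n in [(10, 160), (20, 80), (40, 40), (80, 20)]:
    print(pixel_size, "m pixels:", ground_coverage_km(pixel_size, n, n), "km")
```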
1. High-resolution imagery allows details, like houses and cars, to be seen sharply and clearly.
2. This type of imagery is often used for community and urban planning and for agricultural purposes.
3. The higher the spatial resolution of the imagery, the smaller the region of earth covered in each image.
4. In order to see a large area, such as a county or a state, numerous high-resolution images would be required - an expensive and time-consuming effort.
5. If an organization is working on a regional scale, lower-resolution imagery, which covers a greater area of land, might be a better choice.
Temporal resolution

• Each satellite has its own unique revisit schedule for obtaining imagery of a particular area. The frequency at which the sensor revisits an area is known as temporal resolution. For instance, if a satellite imaged the same area every ten days, then its temporal resolution would be ten days. Temporal resolution is a crucial factor to consider in change detection studies.
• To date one of the problems associated with satellite imagery has been the temporal
resolution.
• In farming systems where there is constant change during a growing season the time
between satellite revisits has not been frequent enough for crop monitoring.
• Airborne systems have offered greater flexibility when scheduling flyovers, their
limiting factor being local weather conditions.
Spectral resolution

• Spectral resolution describes the ability of a sensor to define fine wavelength intervals. The finer the spectral resolution, the narrower the wavelength range for a particular channel or band.
• Each band records a specific portion of the electromagnetic spectrum. Spectral resolution refers to the specific wavelength intervals in the electromagnetic spectrum that a sensor can record. Narrower bands have higher spectral resolution.
https://www.slideserve.com/gannon-burns/resolution
(Figure: spectral resolution in detecting atmosphere, soil, water and vegetation)
Radiometric resolution
• When describing a camera or sensor, this is referred to as the number of bits into which the recorded data can be divided.
• In a 12-bit panchromatic camera system, for example, the pixel values may range from 0 (corresponding to black), where no electromagnetic radiation was recorded, to a maximum intensity or brightness value (corresponding to white) of 4095 (2^12 = 4096 levels).
Radiometric resolution determines how finely the sensor can distinguish between objects of similar reflection. E.g.:

(Figures: the same scene at low and high radiometric resolution)

High radiometric resolution (the picture on the right) provides much better image information: it can distinguish even subtle differences in reflection.

(Figures: the same image at 8-bit (256 levels), 6-bit (64 levels), 4-bit (16 levels), 3-bit (8 levels), 2-bit (4 levels) and 1-bit (2 levels) quantization)
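A minimal numpy sketch of requantization, assuming an 8-bit input image (synthetic data, for illustration only):

```python
import numpy as np

def quantize(image, bits):
    """Requantize an 8-bit image (0-255) to the given bit depth."""
    levels = 2 ** bits              # e.g. 6 bits -> 64 levels
    step = 256 // levels
    return (image // step) * step   # collapse nearby brightness values

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # synthetic gradient
for bits in (8, 6, 4, 3, 2, 1):
    print(bits, "bits ->", len(np.unique(quantize(img, bits))), "grey levels")
```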


Sensors of interest:
Some of the sensors available (note that sensor technology, especially resolution, is improving with time):

• Landsat
• SPOT
• MODIS
• IKONOS
• GOES
• AVHRR

First generation: multi-spectral sensors; second generation: high-resolution sensors.
Multispectral Image

A multispectral image consists of a few image layers; each layer represents an image acquired at a particular wavelength band.

Multispectral Sensors Properties

• Landsat TM – 7 bands: blue, green and red; a near-IR band; two SWIR bands; a thermal IR band
• SPOT-HRV – 3 bands: green, red and NIR
• A single SPOT multispectral scene – 3 intensity images in the three wavelength bands; each pixel of the scene has 3 intensity values corresponding to the 3 bands
• IKONOS – 4 bands: blue, green, red and NIR
LANDSAT-7 ETM+ Satellite
• Landsat-7 ETM+ (Enhanced Thematic Mapper Plus) was launched on 15 April 1999
• Spectral Bands:
  Enhanced Thematic Mapper (Bands 1, 2, 3, 4, 5 and 7) – Pixel Size: 30 x 30 m
  Thermal Infrared (Bands 6L and 6H) – Pixel Size: 60 x 60 m
  Panchromatic Band (Band 8) – Pixel Size: 15 x 15 m
• Coverage Area: 185 km x 185 km
• Site revisit: every 16 days
• Swath Width: 185 km
• Altitude: 705 km
• Number of Pixels/Lines: 3240 x 2340
LANDSAT-7 ETM+ SPECTRAL BAND APPLICATIONS

SPECTRAL BANDS (MICRONS)                    APPLICATION
BAND 1  0.45 - 0.515 (Blue Light)           Coastal water mapping, differentiation of vegetation from soils
BAND 2  0.525 - 0.605 (Green Light)         Assessment of vegetation vigour (health)
BAND 3  0.63 - 0.69 (Visible Red Light)     Chlorophyll absorption for vegetation differentiation
BAND 4  0.75 - 0.90 (Reflective Infrared)   Biomass surveys and delineation of water bodies (soil/water contrast)
BAND 5  1.55 - 1.75 (Middle Infrared)       Vegetation and soil moisture measurements; differentiation between snow and cloud
BAND 6  10.40 - 12.50 (Thermal Infrared)    Thermal mapping, soil moisture studies and plant heat stress measurement
BAND 7  2.09 - 2.35 (Middle Infrared)       Hydrothermal mapping
BAND 8  0.52 - 0.90 (Green, Visible Red,    Large area mapping, urban change studies
        Near Infrared)
Chapter 5
Image
classification
DR MOHAMMAD KAMARUDDIN B ZAINUL
ABIDIN
UiTM JENGKA, PAHANG
Lesson outcomes:

• Identify and explain types of image classification


• Identify classification scheme (categories of clusters in the
legend)
• Classification consideration (general steps)
What is image classification?

• Image Classification uses the spectral information


represented by the digital numbers in one or more spectral
bands, and attempts to classify each individual pixel based on
this spectral information
Aim of image classification

• The objective is to assign all pixels in the image to particular


classes or themes (e.g. water, coniferous forest, deciduous
forest, corn, wheat, etc.).
Why classify?

• Make sense of a landscape


• Place landscape into categories (classes)
• Forest, Agriculture, Water, etc
Output of image classification

• Result : classified image, e.g. vegetation map, land use map


and other thematic map.
• Comprised of a mosaic of pixels, each of which belong to a
particular theme
• Categories in the legend are defined by the intended use of
the map. Can be few or many categories, depending on the
purpose of the map and available resources
How does image classification work?

1. Different objects have different spectral signatures
2. In an ideal world, all "Vegetation" pixels would have exactly the same spectral signature
3. Then we could just say that any pixel in an image with that signature was vegetation
4. We'd do the same for soil, etc. and end up with a map of classes

(Figure: classified image)
Types of image classification strategy

Classification:
• Unsupervised Classification (Clustering) – no extraneous data is used: classes are determined purely on differences in spectral values.
• Supervised Classification – requires "training pixels", pixels where both the spectral values and the class are known.
• Hybrid – uses unsupervised and supervised classification together.
Unsupervised Image Classification

• Definition: identification based on natural groups, or structures, within multispectral data; classified by the software algorithm itself
• Does NOT use training data for individual information classes as the basis for classification
• Image pixels are examined and aggregated into a number of spectral classes based on natural clustering in multi-dimensional space
• After the fact, we assign class names to those clusters.
*UC is the definition, identification, labeling and mapping of natural spectral classes.
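The slides do not prescribe a particular clustering algorithm; as an illustration of the idea only, here is a minimal sketch using k-means from scikit-learn on a synthetic 3-band image (all data here are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 3-band image: rows x cols x bands of digital numbers
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(100, 100, 3))

# Each pixel becomes a point in 3-D spectral space
pixels = image.reshape(-1, 3)

# Group pixels into 6 spectrally distinct clusters - no training data used
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)

# Map cluster labels back to image shape; the analyst then names each cluster
classified = kmeans.labels_.reshape(100, 100)
```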
Unsupervised Image Classification – how it proceeds:
1. The analyst requests the computer to examine the image and extract a number of spectrally distinct clusters (e.g. clusters 1-6).
2. Each pixel in the digital image is then assigned to the saved cluster it matches, producing an output classified image.
3. The result of the unsupervised classification is not yet information until the analyst determines the ground cover for each of the clusters (e.g. two clusters turn out to be water, two conifer, two hardwood).
4. It is a simple process to regroup (recode) the clusters into meaningful information classes (the legend: water, conifer, hardwood). The result is essentially the same as that of a supervised classification: a land cover map with labelled classes.
Supervised Image Classification

• Definition: a classifier which requires a training sample for each class; each pixel is then classified based on how "close" it is to each training sample.
*The process of using samples (training samples) of known identity to classify pixels of unknown identity.
Supervised Image Classification – how it proceeds:
1. Supervised classification requires the analyst to select training areas where he/she knows what is on the ground (e.g. known conifer, water and deciduous areas) and then digitize a polygon within each area.
2. The computer then creates a mean spectral signature for each class (conifer, water, deciduous) from the training pixels.
3. The spectral signature of each unknown pixel in the multispectral image is compared to the mean signatures, and the pixel is assigned to the closest class, producing an information (classified) image.
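One common way to implement the "closeness" idea above is a minimum-distance-to-means classifier; the signatures and pixel values below are hypothetical, for illustration only:

```python
import numpy as np

# Mean spectral signatures computed from the analyst's training polygons
signatures = {
    "conifer":   np.array([ 30.0,  60.0, 110.0]),
    "water":     np.array([ 40.0,  20.0,  10.0]),
    "deciduous": np.array([ 50.0,  90.0, 160.0]),
}

def classify_pixel(pixel, signatures):
    """Assign the class whose mean signature is closest (Euclidean distance)."""
    return min(signatures,
               key=lambda name: np.linalg.norm(pixel - signatures[name]))

print(classify_pixel(np.array([45.0, 85.0, 150.0]), signatures))  # deciduous
```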
UNSUPERVISED VS SUPERVISED

(Figure: side-by-side comparison of the two classification approaches)
General steps of image classification
(land cover)

1. Define why you want a classified image, how will it be used?


2. Decide if you really need a classified image
3. Define the study area
4. Select or develop a classification scheme (legend)
5. Select imagery
6. Prepare imagery for classification
7. Collect ancillary data
8. Choose classification method and classify
9. Adjust classification and assess accuracy
Digital Image
Processing
Chapter 4
DR MOHAMMAD KAMARUDDIN BIN ZAINUL
ABIDIN
UITM JENGKA
Learning outcomes:
1) Overview of digital image processing
• What is a digital image?
• What is digital image processing?
• Importance of DIP

2) Key Stages in Digital Image Processing


“One picture is worth more than ten thousand
words”

Anonymous
What is a Digital Image?

A digital image is a representation of a two-dimensional image as a


finite set of digital values, called picture elements or pixels
What is a Digital Image? (cont…)

Pixel values typically represent gray levels, colours, heights, opacities, etc.
Remember: digitization implies that a digital image is an approximation of a real scene.

(Figure: zoomed image showing 1 pixel)
What is a Digital Image? (cont…)

Common image formats include:


• 1 sample per point (B&W or Grayscale)
• 3 samples per point (Red, Green, and Blue)
• 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)
An example of a grey-scale image. Pixel values: the magnitude of the electromagnetic energy captured in a digital image is represented by positive digital numbers.
The pixel
• Pixel is a short abbreviation for Picture Element.
• It is the unit of a digital image.
• The cells are sensed one after another along the line.
• In the sensor, each cell is associated with a pixel that is tied to a microelectronic detector.
• Band interleaved by pixel (BIP): each row of the data grid contains the digital number value for each pixel and for each band.
• Band interleaved by line (BIL): each row of the data grid contains the digital number value for each band sequentially.
• Band sequential (BSQ): digital numbers for each band are stored in their entire grid, followed by the next band.
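A minimal numpy sketch of how the same digital numbers can be rearranged between these three layouts (array shapes are illustrative):

```python
import numpy as np

rows, cols, bands = 2, 3, 4
rng = np.random.default_rng(1)

# BIP: band values stored together for each pixel -> shape (rows, cols, bands)
bip = rng.integers(0, 256, size=(rows, cols, bands))

# BSQ: one complete grid per band -> shape (bands, rows, cols)
bsq = np.transpose(bip, (2, 0, 1))

# BIL: for each row, one line per band -> shape (rows, bands, cols)
bil = np.transpose(bip, (0, 2, 1))

# Same digital numbers, different storage order
assert bip[1, 2, 3] == bsq[3, 1, 2] == bil[1, 3, 2]
```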
• Each bit records an exponent of power 2 (e.g. 1 bit = 2^1 = 2).
• The maximum number of brightness levels available depends on the number of bits used in representing the energy recorded.
• Thus, if a sensor used 8 bits to record the data, there would be 2^8 = 256 digital values available, ranging from 0 to 255; 8-bit is the most common bit depth.
What is Digital Image Processing?

Digital image processing focuses on two major tasks


•Improvement of pictorial information for human
interpretation
•Processing of image data for storage, transmission and
representation for autonomous machine perception
(computer system)
What is DIP? (cont…)

The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes:

Low-level process: input image, output image. Examples: noise removal, image sharpening.
Mid-level process: input image, output attributes (edges, contours). Examples: object recognition, segmentation.
High-level process: input attributes, output understanding. Examples: scene understanding, autonomous navigation.
Importance of DIP
• One picture is worth 1000 words!
• Supports visual communication
• Facilitates inspection and diagnosis of complex systems (human body, manufacturing)
• Entertainment
• Keeping records, history
• Managing multimedia information
• Security (monitoring, watermarking, etc.)
Applications: Image Enhancement

One of the most common uses of DIP techniques: improve quality,


remove noise etc
Applications: Artistic Effects
Artistic effects are used
to make images more
visually appealing, to
add special effects and
to make composite
images
Applications: Medicine (cont...)
Ultrasound imaging
Applications: Law Enforcement
Image processing techniques
are used extensively by law
enforcers
• Number plate recognition
for speed
cameras/automated toll
systems
• Fingerprint recognition
• Enhancement of CCTV
images
Key Stages in Digital Image Processing

(Diagram: starting from the problem domain, the stages are image acquisition → image enhancement → image restoration → colour image processing → wavelets & multiresolution processing → image compression → morphological processing → segmentation → representation & description → object recognition. Outputs of the earlier stages are generally images; outputs of the later stages are generally image attributes.)
Key Steps in DIP: (Description)
Step 1: Image Acquisition
The image is captured by a sensor (e.g. a camera) and digitized, if the output of the camera or sensor is not already in digital form, using an analogue-to-digital converter.

*Image processing starts after image acquisition.


Key Steps in DIP: (Description)
Step 2: Image Enhancement
The process of manipulating an image so that the result is more
suitable than the original for specific applications.

The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.
Key Steps in DIP: (Description)
Step 3: Image Restoration
- Improving the appearance of an image
- Removing systematic errors, like removing scan lines from an image
- Enhancement, on the other hand, is based on human subjective
preferences regarding what constitutes a “good” enhancement
result.
Key Steps in DIP: (Description)
Step 4: Colour Image Processing
Use the colour of the image to extract features of interest in an image
Key Steps in DIP: (Description)
Step 5: Wavelets
Wavelets are the foundation of representing images in various degrees of resolution. They are used for image data compression.
Key Steps in DIP: (Description)
Step 6: Compression
Techniques for reducing the storage required to save an image
Key Steps in DIP: (Description)
Step 7: Morphological Processing
Tools for extracting image components that are useful in the representation and description of shape. In this step, there is a transition from processes that output images to processes that output image attributes (edges, contours). Morphological processing involves special filtration techniques; again, the purpose is to enhance image quality.
Key Steps in DIP: (Description)
Step 8: Image Segmentation
Segmentation partitions a continuous image into classes: segmentation procedures partition an image into its constituent parts or objects, reducing the complexity of the image.
Important tip: the more accurate the segmentation, the more likely recognition is to succeed.
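The slides do not prescribe a particular segmentation method; as the simplest possible illustration, here is a sketch of global thresholding on a synthetic single-band image:

```python
import numpy as np

# Hypothetical single-band image: bright objects on a dark background
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(50, 50))

# Global thresholding partitions the pixels into two classes
threshold = 128
segments = (image > threshold).astype(np.uint8)  # 1 = object, 0 = background

print("object pixels:", int(segments.sum()), "of", segments.size)
```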
Key Steps in DIP: (Description)
Step 9: Recognition and Interpretation
Recognition: the process that assigns a label to an object based on the information provided by its description.
Key Steps in DIP: (Description)
Step 10: Representation and Description
- Representation: make a decision whether the data should be represented as a boundary or as a complete region. It almost always follows the output of a segmentation stage.
- Boundary Representation: Focus on external shape characteristics, such as
corners and inflections
- Region Representation: Focus on internal properties, such as texture or
skeleton shape
CHAPTER 3
Spectral response
patterns of earth
features
DR. MOHAMMAD KAMARUDDIN B ZAINUL ABIDIN
UiTM PAHANG
Learning outcomes
1. Describe interaction of EMR with Earth's Surfaces
2. Explain geometry of reflection
3. Explain spectral reflectance
Introduction
• When electromagnetic energy hits the earth's surface, there are three possible energy interactions with the surface feature:

1. Reflection: occurs when radiation "bounces" off the target and is


redirected
2. Absorption: occurs when radiation (energy) is absorbed into the
target
3. Transmission: occurs when radiation passes through a target
The proportion of energy that is absorbed, transmitted or reflected will depend upon:

• the properties of the surface materials
• the surface smoothness relative to the radiation wavelength
• the wavelength of the energy
• the angle of illumination (sun angle)

(Figure: energy interactions with the surface feature)
Energy Interactions with Earth Surface Features

• EI(λ) = ER(λ) + EA(λ) + ET(λ)


Where:
EI(λ) = Incident energy (from sun)
ER(λ) = Reflected energy
EA(λ) = Absorbed energy
ET(λ) = Transmitted energy
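As a quick worked example (illustrative numbers, not from the slides): rearranging gives ER(λ) = EI(λ) - EA(λ) - ET(λ), so if EI = 100 units, EA = 30 and ET = 20 at some wavelength, then ER = 50, i.e. half of the incident energy is reflected.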
• In remote sensing, we are most interested in measuring the radiation
reflected from targets. Objects reflect different wavelengths
differently
• Reflection from surfaces occurs in two ways:
1. Specular reflectors:
• flat surfaces that manifest mirror-like reflections; the angle of reflection equals the angle of incidence.
• mirrors and calm lake surfaces are specular reflectors and produce mirror-like reflections.
2. Diffuse (or Lambertian) reflectors:
• rough surfaces that reflect uniformly in all directions.
• most surfaces are idealized as diffuse or Lambertian reflectors.
• if the surface is rough, the reflected rays go in many directions, depending on the orientation of the smaller reflecting surfaces.

• Most surface objects on earth are neither perfectly specular nor perfectly diffuse reflectors; their characteristics are somewhere in between.
• Nevertheless, reflection depends on the surface roughness of the feature in comparison to the wavelength of the incoming radiation.
Specular reflector vs Diffuse reflector
More on Concepts of Remote Sensing (cont.)

One part of the sensor records only the amount of blue light reflected; another part records the amount of green light reflected; and another part records the amount of red light reflected.

This information, also called data, is recorded as a series of numbers. The data collected about the earth's surface is sent to a receiving antenna at a ground station. Computers are used to process the data: the data about the amount of blue, green and red light reflected off the earth's surface is put together to make a satellite image. Scientists use these remote sensing images to study the earth.
Spectral response of material
• The amount of reflectance from a surface can be measured as a function of wavelength. (So what? Why do we bother measuring reflected energy?)
• By measuring the energy that is reflected (or emitted) by targets on
the Earth's surface over a variety of different wavelengths, we can
build up a spectral response for that object.
• Spectral response is important because from this we will be able to
identify and interpret the object on earth’s surface. However, in order
to do this, we should have a way to display those spectral response…
• The spectral response of a material to different wavelengths of EMR
can be represented graphically as a Spectral Reflectance Curve.
Spectral Reflectance is a measure of how much energy (as a percent)
a surface reflects at a specific wavelength.
• Surfaces reflect different amounts of energy in different portions of the spectrum. These differences in reflectance make it possible to identify different earth surface features or materials by analyzing their spectral reflectance signatures (every object has its own spectral reflectance "signature").
• Spectral signature is the variation of reflectance or emittance of a
material with respect to wavelengths
Each object has its own spectral reflectance signature, e.g. each vegetation type has its own spectral reflectance signature.
Why spectral reflectance curves is crucial in
remote sensing?
• It may not be possible to distinguish between different materials if we
were to compare their response at one wavelength.
• But by comparing the response patterns of these materials over a
range of wavelengths (in other words, comparing their spectral
reflectance curves), we may be able to distinguish between them. For
example, water and vegetation may reflect somewhat similarly in the
visible wavelengths but are almost always separable in the infrared.
(Figure: spectral reflectance curves for vegetation, soil, and water)

Leaf Target
Why do leaves appear green?
Spectral reflectance of vegetation
• Spectral reflectance curves for healthy green vegetation almost always manifest the 'peak-and-valley' configuration.
• Chlorophyll absorbs energy in the wavelength bands centred at about 0.45 and 0.67 μm.
• Vegetation appears green: very high absorption of blue and red by plant leaves, and high reflectance of green energy.
Con't.
Why do green leaves appear green? Two reasons:
1. Chlorophyll is the major pigment; it absorbs in the red but not much in the green.
2. Related to the structure of chloroplasts:
   i. Contains parallel layers of grana & intergrana
   ii. Chlorophyll is stored in the grana
   iii. Grana are 500 – 600 nm long, the same wavelength as green light
   iv. When light encounters objects with a length similar to its wavelength, the light is scattered
Interaction Between Plants and EMR

a. Chlorophyll primarily absorbs light in the violet to blue and red wavelengths.
b. Green light is not readily absorbed and is reflected, thus giving the leaf a green appearance.
c. The internal cell wall structure of the mesophyll causes high reflectance of near infrared radiation.
d. Chlorophyll is transparent to near infrared radiation.
Con't.

• Although the epidermis is fairly transparent, it has been observed that quite a bit of the reflected NIR exits out of open stomata.
• Therefore, it is advantageous to image a scene when the stomata are open.

The main reasons that leaves reflect so much NIR energy are:
• The leaf already reflects 40 – 60% of the incident NIR energy from the spongy mesophyll.
• The remaining 45 – 50% of the energy penetrates (i.e. is transmitted through) the leaf and can be reflected once again by leaves below it.
How is a remote sensing image created?

When we look at trees and grass, we see green. The grass reflects green light and absorbs all the other colors. But grass and trees also reflect light we cannot see. This light is called infrared light. Special film is used to record infrared information. Since we cannot see infrared light, scientists give it a color. Red is the color most commonly used to show this light, but it can be shown in any color. Special sensors on the satellites can also record infrared light.

Farmers can use infrared images to identify healthy and unhealthy crops. Images showing the infrared light reflected off the plants can identify plants that are sick or that need fertilizer. By treating only the areas that are in need, farmers save money and the environment by using less fertilizer and pesticide.
Which combination of data would you use if you were a farmer?
Scientists use data received from the satellite in different combinations:

• True color does not help scientists see differences between many of the earth's features. Rock appears dark in this image and so do areas of vegetation such as farmer's crops.
• This combination of data shows vegetation, like a farmer's crops.
• This combination displays concrete of city streets as the color blue. This might help a city planner, but a farmer would not find this image very useful.
DIFFERENT HABITATS
How can we find a forest in the satellite
image?

We know that the red areas in this satellite image show


vegetation, like trees.

A closer look at this satellite image shows differences in the red areas. Scientists can study
these areas on the ground to learn what type of vegetation is there. Then, scientists can
map the location of different habitats.
Water Target
Dry soil has relatively high reflectance. Soil moisture decreases
reflectance.
Impact of Atmospheric interaction
• Sun is the primary source of electromagnetic radiation. Before the
electromagnetic energy from the Sun reaches the Earth’s surface, it must
pass through the atmosphere.
• As the energy passes through the atmosphere, it interacts with the
molecules and particles present in the atmosphere. In the atmosphere,
EMR is scattered or reflected, absorbed and a portion of the energy passes
through the atmosphere to reach the Earth's surface.
• It's important to understand the interaction of electromagnetic radiation with suspended particles/molecules in the atmosphere.
• Incoming radiation often requires correction; this process is known as "atmospheric correction" and is a common image processing technique.
• It is crucial to be able to identify regions of atmospheric windows across the spectrum.
Some of the incoming energy is absorbed by the atmosphere whereas most of the infrared
energy emitted by the earth is absorbed. Source: https://www.weather.gov/jetstream/absorb
Chapter 2
Principles of
Electromagnetic
Energy
DR. MOHAMMAD KAMARUDDIN B ZAINUL ABIDIN
UITM PAHANG
Learning outcomes

1) Explain basic principles and definition of


electromagnetic waves
2) Describe electromagnetic spectrum
3) Identify properties of light important in remote sensing
Introduction

• Vibrations of charged particles, such as electrons and protons, create electromagnetic fields when they move, and these fields transport the type of energy we call electromagnetic radiation (waves), or light.
• Electromagnetic waves or EM waves are composed of
oscillating magnetic and electric fields.
• An electromagnetic wave can travel through anything
including air, solid materials or even in a vacuum. It does
not require a medium to travel or propagate from one
place to another.
• Unlike EM waves, mechanical waves (like sound waves or water waves) require a medium to travel. The medium can be in a solid, gaseous or liquid state.
• Energy around us is transported in two important ways: mechanical waves and electromagnetic waves.
• Examples of mechanical waves are waves in water and sound waves in air.
• Disturbances or vibrations in matter, whether gas, solid, liquid, or plasma, create mechanical waves.
• The matter is also known as the medium through which the waves travel.
• These mechanical waves travel through a medium by causing
the molecules to bump into each other, e.g. sound waves are
formed by vibrations of molecules in air (gaseous) and water
waves are formed by vibrations of molecules or particles in a
liquid
• A mechanical wave requires an initial energy input. Once this
initial energy is applied, the wave travels through the medium
until all its energy is transferred.
• Thus, the dependency on a medium to propagate means that mechanical waves, like sound waves, are unable to travel through the vacuum of space.
How are electromagnetic waves generated?

• Electromagnetic waves are created from the continuous changing of an electrical field and a magnetic field.
• A changing electrical field generates a magnetic field, and a changing magnetic field generates an electrical field.
• They are called transverse waves because the E field and B field are perpendicular to the propagation direction of the wave.
• EM waves do not require a medium to propagate.
• EM waves travel with a constant velocity of about 3.0×10^8 meters per second through a vacuum.
• Two characteristics of electromagnetic radiation are particularly important for understanding remote sensing: wavelength and frequency.
• EM waves are described in RS in terms of velocity (c), wavelength (λ) and frequency (f).
• Velocity refers to the speed of light, where c = λf.
• Wavelength is the distance between crests. The shortest wavelengths can be the size of an atom, and the longest wavelengths can be larger than the diameter of our earth!
• Frequency is the number of crests that pass a given point within one second. One wave, or cycle, per second is called a Hertz (Hz). For instance, a wave with three cycles that pass a point in a second has a frequency of 3 Hz.
• Wavelength and frequency are related by the following formula:

f = c / λ   (equivalently, c = λf)

• Therefore, the two are inversely related to each other. The shorter the wavelength, the higher the frequency; the longer the wavelength, the lower the frequency.
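A tiny Python sketch of this relationship (the speed-of-light value matches the one used in these slides):

```python
c = 3.0e8  # speed of light in m/s

def frequency_hz(wavelength_m):
    """f = c / wavelength: shorter wavelength -> higher frequency."""
    return c / wavelength_m

print(frequency_hz(0.55e-6))  # green light (~0.55 um) -> ~5.5e14 Hz
print(frequency_hz(0.21))     # 21 cm microwave        -> ~1.4e9 Hz
```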
Electromagnetic spectrum

• The electromagnetic (EM) spectrum is the range of all


types of EM radiation.
• A continuum of energy that ranges from nano meters to
meters in wavelengths, travels at the speed of light and
propagate through a vacuum such as outer space.
Mnemonic
Raul's Mother Is Visiting Uncle Xavier's Garden

R- Radio
M- Microwaves
I- Infrared V- Visible (0.4 – 0.7 µm)
U- Ultraviolet
X- X-Rays
G- Gamma Rays
speed of light:
c = λf
Wien's Displacement Law

• Wien's displacement law gives the relationship between the temperature of a blackbody (an ideal substance or object that emits and absorbs all frequencies of light) and the wavelength at which it emits the most light.
• The black-body radiation curves for different temperatures peak at wavelengths that are inversely proportional to the temperature:

λ_max = b / T

where b is the Wien constant, b = 2.898 × 10^-3 m·K.
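A minimal Python sketch of Wien's law; the temperatures are illustrative values, not from the slides:

```python
b = 2.898e-3  # Wien constant, m*K

def peak_wavelength_um(temperature_k):
    """Wavelength of maximum blackbody emission, in micrometres."""
    return b / temperature_k * 1e6

print(peak_wavelength_um(6000))  # Sun-like source -> ~0.48 um (visible)
print(peak_wavelength_um(300))   # Earth's surface -> ~9.7 um (thermal IR)
```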
Some properties of light important in RS

1. γ rays (10^-7 – 10^-5 μm) and X rays (10^-5 – 10^-3 μm)
• Very high energy
• Not much used on earth due to noise in the atmosphere
• Need to fly close to the ground at night
• Some experimental work is being done using γ rays to measure calcium deficiency in trees
• γ rays are used quite a bit in planetary remote sensing since many planets have no atmosphere
Blue
• For water body penetration, making it useful for coastal
water mapping.
• Useful for soil/vegetation discrimination, forest type
mapping, and cultural feature identification.

Green
• To measure green reflectance peak of vegetation for
vegetation discrimination and vigor assessment.
• Useful for cultural feature identification.

Red
• to sense in a chlorophyll absorption region aiding in
plant species differentiation.
• useful for cultural feature identification.
• 4. Near Infra Red - NIR (0.7 to 1.1 μm)
• Just beyond human vision
• Very useful for mapping vegetation
• (will see later why)
5. Shortwave Infrared - SWIR (1.1 to 2.5 μm)
• Suitable for determining the chemical composition of objects
on the ground (e.g., determining amount of oxygen in tree
leaves)
• geology

6. Thermal Infrared a.k.a Mid Infra Red (2.5 to 30 μm)


• Very sensitive to heat differences
• In a laboratory setting very useful for identifying organic and
inorganic materials
7. Microwave (1 mm to 1 m)
• Able to penetrate clouds
• Some wavelength able to penetrate dry soil
and tree canopies
• Very useful for measuring moisture differences
Chapter 1
Concepts and foundations
of Remote Sensing
UITM PAHANG
General definition of Remote Sensing:
The Science and art of obtaining information about an
object, area, or phenomenon through the analysis of data
acquired by a device that is not in contact with the object,
area, or phenomenon under investigation.

• e.g. reading process


•word ➔ eyes ➔ brain ➔ meaning
•data ➔ sensor ➔ processing ➔ information
As you view the screen of your computer monitor, you are actively engaged in remote sensing:

A physical quantity (light) emanates from that screen, which is a source of radiation. The radiated light passes over a distance, and thus is "remote" to some extent, until it encounters and is captured by a sensor (your eyes). Each eye sends a signal to a processor (your brain), which records the data and interprets this into information.
Remote Sensing is more than looking at pictures!!
Remote Sensing = observing with artificial eyes
It is a different way of seeing/looking at and studying objects on earth - with prior knowledge.
RS is the science and the techniques of deriving information about, e.g., the earth's land and water areas from images at a distance (LOOK ONLY - NO TOUCH).

It depends upon measurement of electromagnetic (EM) energy reflected or emitted from the objects of interest at the earth's surface.
REMOTE SENSING HISTORY
Some Historical Notes

• Balloon Photography (1858)

• Pigeon cameras (1903)


REMOTE SENSING HISTORY (cont..)
• Camera systems
were placed on V-2
rockets tested at
White Sands, NM (New Mexico)
after WW II.

• Sputnik (world’s first artificial satellite) in 1957


changed our outlook
toward using outer
space as a place from
which to observe the earth.
Milestones in the history of Remote
Sensing
1800 : Discovery of infrared by Sir William Herschel
1839: Beginning of practice of photography
1847: Infrared spectrum shown by A. H. L. Fizeau & J. B. L. Foucault
1850-1860: Photography from balloons
1873: Theory of electromagnetic energy developed by James Clerk Maxwell
1909: Photography from airplanes
1914-1918: World War 1: aerial reconnaissance
1920-1930: Development & initial application of aerial photography &
photogrammetry
1929-1939: Economic depression generates environmental crises that lead to
governmental application of aerial photography.
Multistage Remote Sensing Concept:
1. Ground observation
2. Low altitude
3. High altitude
Component of Remote Sensing
Component of Remote Sensing (cont…)
1. Energy source or illumination - It is the source of the electromagnetic radiation which is incident on the target of interest. The energy primarily comes from the sun. Sensors can use an external source of illumination (i.e. the Sun) or can have their own source of illumination. Sensors which have their own energy source are called active remote sensors, while sensors which use an external source of energy are called passive remote sensors. In most cases, sensors use the solar radiation reflected from the Earth.
2. Interaction with the atmosphere - The energy emitted from the source reaches the target after passing through the earth's atmosphere, which contains obstructions such as clouds, haze, smog, etc.
3. Interaction with the target - When the electromagnetic radiation interacts with the target, there are various possibilities in the way it behaves: it can get reflected, refracted, absorbed & diffused.
Component of Remote Sensing (cont…)
4. Recording of energy by the sensor - Once the energy has interacted with the target under study, it is recorded by the sensor. Generally, the reflectance values are recorded, which vary with the type of matter the EMR interacts with.
5. Transmission, Reception and Processing - After recording the
reflectance values, these are processed to remove any errors,
converted to raster images and transmitted to ground station.
6. Interpretation & Analysis - These raster images are then visually
interpreted and analysed.
7. Application - These are then used for numerous applications in
various fields.

Source: https://www.quora.com/What-are-the-components-of-a-remote-sensing-satellite
Process of Acquiring data in remote sensing
Types and Classification of Sensors

Passive sensors: detect natural energy (radiation) that is emitted or reflected by the object or scene being observed. Reflected sunlight is the most common source of radiation measured by passive sensors.

Active sensors: provide their own source of energy to illuminate the objects they observe. An active sensor emits radiation in the direction of the target to be investigated. The sensor then detects and measures the radiation that is reflected or backscattered from the target.

Source: https://earthdata.nasa.gov/learn/remote-sensors
Active Sensor
The majority of active sensors operate in the microwave portion of the electromagnetic spectrum, which makes them able to penetrate the atmosphere under most conditions.
The change in apparent view direction (parallax) is related to the absolute distance between the instrument and the target.

*Parallax - the effect by which the position of an object seems to change when it is looked at from different positions.

Active systems perform well in recording information because the system emits a signal (waves, light, sound) at a particular frequency according to the information needed.
Source: https://earthdata.nasa.gov/learn/remote-sensors
Active remote sensors includes:

1. Radar
2. Lidar
3. Sonar
Radar
I. Emits radio waves and then measures what is returned.
II. Radar is frequently used in the atmosphere sciences, including
meteorology.
III. The Shuttle Radar Topography Mission (spaceborne radar) generated
a comprehensive digital elevation database of the planet
IV. The time required for the energy to travel to the target and return back
to the sensor determines the distance or range to the target.
Lidar
I. A light detection and ranging sensor that
uses a laser. Emits beams of light and
measures their return.
II. Often mounted to aircraft and used to
produce detailed topographic datasets.
III. Distance to the object is determined by
recording the time between transmitted and
backscattered pulses and by using the speed
of light to calculate the distance traveled.
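Radar and lidar share this time-of-flight principle; a minimal Python sketch for an EM pulse (the timing value is hypothetical):

```python
c = 3.0e8  # speed of light in m/s

def range_m(round_trip_time_s):
    """Distance to target: the pulse travels out and back, so divide by two."""
    return c * round_trip_time_s / 2.0

# A lidar return arriving 6.67 microseconds after the pulse was emitted
print(range_m(6.67e-6))  # ~1000 m to the target
```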
Sonar
I. emits sound waves and measures
their return through water.
II. Predominantly used in
oceanography.
Passive sensor
Most passive systems used in remote sensing applications operate in the visible,
infrared, thermal infrared, and microwave portions of the electromagnetic
spectrum.

Source: https://earthdata.nasa.gov/learn/remote-sensors
Passive remote sensors includes:
Hyperspectral radiometer—An advanced multispectral sensor that detects hundreds of
very narrow spectral bands throughout the visible, near-infrared, and mid-infrared
portions of the electromagnetic spectrum. This sensor’s very high spectral resolution
facilitates fine discrimination between different targets based on their spectral response in
each of the narrow bands.

Spectrometer—A device that is designed to detect, measure, and analyze the spectral
content of incident electromagnetic radiation. Conventional imaging spectrometers use
gratings or prisms to disperse the radiation for spectral discrimination

Radiometer—An instrument that quantitatively measures the intensity of electromagnetic


radiation in some bands within the spectrum. Usually, a radiometer is further identified by
the portion of the spectrum it covers; for example, visible, infrared, or microwave.

Source: https://earthdata.nasa.gov/learn/remote-sensors
