WT All Modules
"Qualcomm", which developed this technology, applied it to cellular communications that use coded
speech at different rhythms – a technology whereby the cellular device receives simultaneous information
from a number of base stations. This technology ensures the continuity of conversations during movement
from one cell to another.
Adaptive communication
An innovative feature of CDMA technology and other new communication technologies is close power control, which enables adaptive communication. This feature allows the cellular device to vary its transmit power dynamically at any given time. This means that a cellular network using this technology (and others) can conduct dynamic communications adapted to the reception conditions and the quality of the link.
Frequency Reuse
A cellular network is an underlying technology for mobile phones, personal communication systems, wireless networking, etc. The technology was developed for mobile radio telephony to replace high-power transmitter/receiver systems. Cellular networks use lower power, shorter range and more transmitters for data transmission.
Frequency reuse is the concept of using the same radio frequencies within a given area, in cells that are separated by a considerable distance, so that communication can be established with minimal interference.
Frequency reuse offers the following benefits −
● Allows communications within cell on a given frequency
● Limits escaping power to adjacent cells
● Allows re-use of frequencies in nearby cells
● Uses same frequency for multiple conversations
● 10 to 50 frequencies per cell
For example, let N be the number of cells that share the same pool of frequencies and K the total number of frequencies used in the system. Then the number of frequencies per cell is given by the formula K/N.
In the Advanced Mobile Phone Service (AMPS), with K = 395 and N = 7, the average number of frequencies per cell is 395/7 ≈ 56. Here, the cell frequency count is about 56.
Frequency reuse is the scheme by which channels are allocated and reused throughout a coverage region. Each cellular base station is allocated a group of radio channels, or frequency sub-bands, to be used within a small geographic area known as a cell. The shape of the cell is hexagonal. The process of selecting and allocating the frequency sub-bands for all of the cellular base stations within a system is called frequency reuse or frequency planning.
Cells marked with the same letter use the same group of channels or frequency sub-band.
To find the total number of channels allocated to a cell:
S = total number of duplex channels available for use
k = channels allocated to each cell (k < S)
N = total number of cells, or cluster size
Then the total number of channels S is:
S = kN
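As a quick illustration of these relations, the short Python sketch below computes the frequencies per cell and the total system channels; the AMPS-style numbers are taken from the example above.

```python
# Channels per cell and total channels in a frequency-reuse system.
# Values follow the AMPS example above (K = 395 frequencies, N = 7 cells per cluster).

def channels_per_cell(K, N):
    """Average number of frequencies available in each cell of an N-cell cluster."""
    return K / N

def total_channels(k, N):
    """Total duplex channels S = kN when each of the N cells gets k channels."""
    return k * N

K, N = 395, 7
k = channels_per_cell(K, N)          # ~56.4 frequencies per cell
print(f"Frequencies per cell: {k:.1f}")
print(f"Total channels S = kN: {total_channels(round(k), N)}")
```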
That capacity equation assumes one transmitter and one receiver, though multiple antennas can be used in a diversity scheme on the receiving side; the formula is revisited for multi-antenna systems later.
The equation singles out two fundamentally important aspects: bandwidth and SNR. Bandwidth reflects how much spectrum a wireless system uses, and explains why spectrum considerations are so important: they have a direct impact on system capacity. SNR, of course, reflects the quality of the propagation channel, and is dealt with in numerous ways: modulation, coding, error correction, and important design choices such as cell sizes and reuse patterns.
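For reference, the Shannon capacity referred to here is C = B·log2(1 + SNR). The short sketch below (with purely illustrative bandwidth and SNR values) shows how capacity scales with both quantities.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon channel capacity C = B * log2(1 + SNR) in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative values: a 5 MHz channel at a few SNR operating points.
for snr_db in (0, 10, 20):
    c = shannon_capacity(5e6, snr_db)
    print(f"B = 5 MHz, SNR = {snr_db:2d} dB -> C = {c/1e6:.1f} Mbps")
```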
2.2.2 Cellular Capacity
The practical capacity of many wireless systems is far from Shannon's limit (although recent standards are coming close to it), and practical capacity is heavily dependent on implementation and standard choices.
Digital standards deal in their own way with how to deploy and optimize capacity. Most systems are
limited by channel width, time slots, and voice coding characteristics. CDMA systems are interference
limited, and have tradeoffs between capacity, coverage, and other performance metrics (such as dropped
call rates or voice quality).
Cellular analog capacity:
Fairly straight forward, every voice channel uses a 30 kHz frequency channel, these frequencies may be
reused according to a reuse pattern, the system is FDMA. The overall capacity simply comes from the
total amount of spectrum, the channel width and the reuse pattern.
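A back-of-the-envelope version of that calculation is sketched below; the 12.5 MHz of spectrum per direction, the 30 kHz channel width and the N = 7 reuse are illustrative AMPS-like assumptions, not figures quoted in the text.

```python
# Analog FDMA capacity: total spectrum / channel width, shared across a reuse cluster.
total_spectrum_hz = 12.5e6   # assumed spectrum per direction (illustrative)
channel_width_hz = 30e3      # 30 kHz analog voice channel
reuse_factor_n = 7           # frequencies shared across a 7-cell cluster

total_channels = total_spectrum_hz / channel_width_hz
channels_per_cell = total_channels / reuse_factor_n
print(f"Total voice channels: {total_channels:.0f}")
print(f"Voice channels per cell: {channels_per_cell:.0f}")
```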
TDMA/FDMA capacity:
In digital FDMA systems, capacity improvements mainly come from the voice coding and elaborate
schemes (such as frequency hopping) to decrease reuse factor. The frequency reuse factor hides a lot of
complexity; its value depends greatly on the signal to interference levels acceptable to a given cellular
system ([1] ch. 3.2, and 9.7). TDMA systems combine multiple time slots per channel.
CDMA capacity:
A usual capacity equation for CDMA systems may be fairly easily derived as follows (for the reverse link). First examine a base station with N mobiles: its noise and interference power spectral density due to all mobiles in that same cell is ISC = (N − 1)Sα, where S is the received power density for each mobile, and α is the voice activity factor. Other-cell interference IOC is estimated as a reuse fraction β of the same-cell interference level, such that IOC = βISC (usual values of β are around 1⁄2). The total noise and interference at the base is therefore Nt = ISC(1 + β). Next assume the mobile signal power density received at the base station is S = REb⁄W. Eliminating ISC, we derive:
N = 1 + (W⁄R) / (α (1 + β) (Eb⁄Nt))        (2.5)
where
● W is the channel bandwidth (in Hz),
● R is the user data bit rate (in bits per second),
● Eb⁄Nt is the ratio of energy per bit by total noise (usually given in dB Eb⁄Nt ≈ 7dB),
● α is the voice activity factor (for the reverse link), typically 0.5,
● and β is the interference reuse fraction, typically around 0.5, and represents the ratio of
interference level from the cell in consideration by interferences due to other cells. (The number 1
+ β is sometimes called reuse factor, and 1⁄(1 + β) reuse efficiency)
This simple equation (2.5) gives us the number of voice channels in a CDMA frequency channel.
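Plugging illustrative IS-95-style numbers into equation (2.5) (W = 1.2288 MHz, R = 9.6 kbps, Eb/Nt = 7 dB, α = β = 0.5; these values are typical assumptions, not prescribed by the text) gives roughly 35 voice channels per CDMA carrier:

```python
def cdma_reverse_link_capacity(w_hz, r_bps, ebnt_db, alpha, beta):
    """Voice channels N from eq. (2.5): N = 1 + (W/R) / (alpha*(1+beta)*Eb/Nt)."""
    ebnt_linear = 10 ** (ebnt_db / 10)
    return 1 + (w_hz / r_bps) / (alpha * (1 + beta) * ebnt_linear)

n = cdma_reverse_link_capacity(w_hz=1.2288e6, r_bps=9600, ebnt_db=7, alpha=0.5, beta=0.5)
print(f"Approximate voice channels per CDMA carrier: {n:.0f}")   # ~35
```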
We can already see some hints of CDMA optimization and investigate certain possible improvements for a
3G system. In particular: improving α can be achieved with dim and burst capabilities, β with interference
mitigation and antenna downtilt considerations, R with vocoder rate, W with wider band CDMA, Eb⁄Nt
with better coding and interference mitigation techniques.
Some aspects however are omitted in this equation and are required to quantify other capacity
improvements mainly those due to power control, and softer/soft handoff algorithms.
Of course other limitations come into play for wireless systems, such as base station (and mobile)
sensitivity, which may be incorporated into similar formulas; and further considerations come into play
such as: forward power limitations, channel element blocking, backhaul capacity, mobility, and handoff.
2.3 Modulation and Coding
Modulation techniques are a necessary part of any wireless system, without them, no useful information
can be transmitted. Coding techniques are almost as important, and combine two important aspects: first
to transmit information efficiently, and second to deal with error correction (to avoid retransmissions).
2.3.1 Modulation
A continuous wave signal (at a carrier frequency fc) in itself encodes and transmits no information. The
bits of information are encoded in the variations of that signal (in phase, amplitude, or a combination
thereof). These variations cause the occupied spectrum to increase, thus occupying a bandwidth around fc;
and the optimal use of that bandwidth is an important part of a wireless system. Various modulation
schemes and coding schemes are used to maximize the use of that spectrum for different applications
(voice or high speed data), and in various conditions of noise, interference, and RF channel resources in
general.
Classic modulation techniques are well covered in several texts, and we simply recall here a few important aspects of digital modulations (that will be important in link budgets). The main digital modulations used in modern wireless systems are outlined in the table below.
Modulation                                Bits encoded by              Examples
Quadrature Amplitude Modulation (QAM)     Both phase and amplitude     16-QAM, 64-QAM, 256-QAM
Modulation is a powerful and efficient tool used to encode information; a few simple definitions are
commonly used:
Symbol
denotes the physical encoding of information, over a specific symbol time (or period) Ts, during which the
system transmits a modulated signal containing digital information.
Bit
denotes a logical bit (0 or 1) of information; one or more bits are encoded by a modulation scheme in a
symbol.
Higher order modulations can encode multiple bits in a symbol, and require a higher SNR to decode error-free. The figure illustrates how multiple phases and amplitudes are used to combine multiple bits into one symbol transmission. The number of bits encoded per symbol is often expressed as a spectral efficiency in bits per second per Hertz (b/s/Hz); its relation to SNR is bounded by Shannon's theorem seen earlier.
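The Shannon bound can be turned around to give the minimum SNR needed to carry a given number of bits per symbol (ignoring the gap left by practical coding). The sketch below is an illustration rather than a design rule, listing that lower bound for the QAM orders mentioned in the table.

```python
import math

# Minimum SNR implied by Shannon's theorem for a target spectral efficiency
# of k bits/s/Hz: SNR >= 2**k - 1.  Practical modems need a few dB more.
for order in (4, 16, 64, 256):
    bits_per_symbol = math.log2(order)
    min_snr_linear = 2 ** bits_per_symbol - 1
    min_snr_db = 10 * math.log10(min_snr_linear)
    print(f"{order:>3}-QAM: {bits_per_symbol:.0f} bits/symbol, "
          f"Shannon minimum SNR ~ {min_snr_db:.1f} dB")
```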
Explain the coverage and capacity improvement techniques for cellular systems.
The performance criteria of cellular mobile systems include:
a) Voice quality.
b) Service quality, such as coverage and quality of service.
c) Number of dropped calls.
d) Special features such as call forwarding, call diverting and call barring.
As the demand for wireless service increases, the number of channels assigned to a cell becomes insufficient to support the required number of users.
At this point, cellular design techniques are needed to provide more channels per unit coverage area.
There are 3 techniques for improving cell capacity in cellular system, namely:
● Cell Splitting.
● Sectoring.
● Coverage Zone Approach.
A) CELL SPLITTING:
● It is the process of subdividing a congested cell into smaller cells, each with its own base station and a corresponding reduction in antenna height and transmitter power.
● Cell splitting increases the capacity of a cellular system since it increases the number of times that channels are reused, while preserving the frequency reuse plan.
● It defines new cells which have a smaller radius than the original cells; these smaller cells, called microcells, are installed between the existing cells, with a radius typically half that of the original cell.
● Thus capacity increases due to the additional number of channels per unit area, without disturbing the channel allocation scheme required to maintain the minimum co-channel reuse ratio Q between co-channel cells (see the sketch below).
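A minimal sketch of the cell-splitting arithmetic, assuming the radius is halved so roughly four microcells fit in the area of one original cell; the per-cell channel count is an illustrative assumption.

```python
# Cell splitting: halving the cell radius quarters the cell area, so about
# four microcells cover the footprint of one original cell.  Since the reuse
# plan is preserved, each microcell keeps the same number of channels,
# multiplying the channels available per unit area by ~4.
original_radius_km = 2.0
channels_per_cell = 56            # illustrative, e.g. the AMPS figure above

new_radius_km = original_radius_km / 2
area_ratio = (original_radius_km / new_radius_km) ** 2   # = 4
channels_per_original_area = channels_per_cell * area_ratio
print(f"Microcells per original cell footprint: {area_ratio:.0f}")
print(f"Channels per original cell area after splitting: {channels_per_original_area:.0f}")
```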
B) SECTORING:
● This is another method to increase cellular capacity and coverage by keeping the cell radius unchanged and decreasing the D/R ratio.
● In this approach, capacity improvement is achieved by reducing the number of cells in a cluster and thus increasing the frequency reuse.
● The co-channel interference in a cellular system may be decreased by replacing a single omni-directional antenna at the base station with several directional antennas, each radiating within a specified sector.
● The factor by which the co-channel interference is reduced depends on the amount of sectoring used (quantified in the sketch after the lists below).
a) 120° sectoring b) 60° sectoring
Advantages:
● Improvement in system capacity.
● Improvement in signal-to-interference ratio.
● Increased frequency reuse.
Disadvantages:
● Increase in the number of handoffs.
● Increase in the number of antennas at each base station.
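The sketch below quantifies the sectoring gain using the standard first-tier co-channel interference model S/I ≈ (D/R)^n / i0 with D/R = √(3N); the path-loss exponent n = 4 and the interferer counts (6 for an omni-directional cell, 2 for 120° sectors) are textbook assumptions, not values given above.

```python
import math

def signal_to_interference_db(cluster_size_n, path_loss_exp, num_interferers):
    """First-tier co-channel S/I estimate: (D/R)**n / i0, with D/R = sqrt(3N)."""
    q = math.sqrt(3 * cluster_size_n)          # co-channel reuse ratio D/R
    s_over_i = (q ** path_loss_exp) / num_interferers
    return 10 * math.log10(s_over_i)

N, n = 7, 4
print(f"Omni-directional (6 interferers): {signal_to_interference_db(N, n, 6):.1f} dB")
print(f"120-degree sectoring (2 interferers): {signal_to_interference_db(N, n, 2):.1f} dB")
```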
C) COVERAGE ZONE / MICROCELL ZONE CONCEPT:
● This approach was presented by Lee to solve the problem of an increased load on the switching and control link elements of the mobile system due to sectoring.
● It is based on a microcell concept for 7-cell reuse.
● In this scheme, each of the three zone sites is connected to a single base station and they share the same radio equipment.
● Multiple zones and a single base station make up a cell. As a mobile travels within the cell, it is
served by the zone with the strongest signal.
● This approach is superior to sectoring since antennas are placed at the outer edges of the cell, and
any base station channel may be assigned to any zone by the base station.
GSM Architecture:
1. Base Station System (BSS)
2. Network Switching Subsystem (NSS)
3. Public Network Subsystem

The Base Transceiver Station (BTS)
The BTS corresponds to the transceivers and antennas used in each cell of the network. A BTS is usually
placed in the center of a cell. Its transmitting power defines the size of a cell. Each BTS has between 1
and 16 transceivers, depending on the density of users in the cell. Each BTS serves a single cell. It
also includes the following functions:
● Encoding, encrypting, multiplexing, modulating, and feeding the RF signals to the antenna
● Transcoding and rate adaptation
● Time and frequency synchronizing
● Voice through full- or half-rate services
● Decoding, decrypting, and equalizing received signals
● Random access detection
● Timing advances
● Uplink channel measurements
The Base Station Controller (BSC)
The BSC manages the radio resources for one or more BTSs. It handles radio channel setup, frequency
hopping, and handovers. The BSC is the connection between the mobile and the MSC. The BSC also
translates the 13 Kbps voice channel used over the radio link to the standard 64 Kbps channel used by
the Public Switched Telephone Network (PSTN) or ISDN.
It assigns and releases frequencies and time slots for the MS. The BSC also handles intercell
handover. It controls the power transmission of the BSS and MS in its area. The function of the BSC is
to allocate the necessary time slots between the BTS and the MSC. It is a switching device that handles
the radio resources. Additional functions include:
● Control of frequency hopping
● Performing traffic concentration to reduce the number of lines from the MSC
● Providing an interface to the Operations and Maintenance Center for the BSS
● Reallocation of frequencies among BTSs
● Time and frequency synchronization
● Power management
● Time-delay measurements of received signals from the MS
GSM frame structure or frame hierarchy
In GSM, the frequency band of 25 MHz is divided into smaller bands of 200 kHz, each carrying one RF carrier; this gives 125 carriers. As one carrier is used as a guard channel between GSM and other frequency bands, 124 carriers are useful RF channels. This division of the frequency pool is called FDMA. Each RF carrier is further divided into eight time slots; this division in time is called TDMA. Each RF carrier frequency is thus shared between 8 users, so in a GSM system the basic radio resource is a time slot with a duration of 15/26 ms, i.e. about 0.577 ms. This time slot carries 156.25 bits, which leads to a bit rate of 270.833 kbps. This is explained below in the TDMA GSM frame structure. For E-GSM the number of ARFCNs is 174; for DCS1800 the number of ARFCNs is 374.
The GSM frame structure is designated as hyperframe, superframe, multiframe and frame. The minimum unit, the frame (or TDMA frame), is made of 8 time slots.
One GSM hyperframe is composed of 2048 superframes.
Each GSM superframe is composed of multiframes (either 26 or 51, as described below).
Each GSM multiframe is composed of frames (either 51 or 26, based on the multiframe type).
Each frame is composed of 8 time slots.
Hence there are a total of 2,715,648 TDMA frames in GSM, and the same cycle continues.
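The frame-hierarchy numbers quoted above can be checked with a few lines of arithmetic; this is only a sketch whose inputs are the slot duration of 15/26 ms, the 156.25 bits per slot, and the hyperframe hierarchy described in the text.

```python
# GSM timing sanity check from the figures in the text.
slot_duration_s = (15 / 26) / 1000          # 15/26 ms ~ 0.577 ms
bits_per_slot = 156.25

frame_duration_s = 8 * slot_duration_s       # one TDMA frame = 8 slots ~ 4.615 ms
bit_rate_bps = bits_per_slot / slot_duration_s
frames_per_hyperframe = 2048 * 51 * 26       # hyperframe = 2048 superframes, etc.

print(f"TDMA frame duration  : {frame_duration_s*1000:.3f} ms")
print(f"Gross bit rate       : {bit_rate_bps/1000:.3f} kbps")       # ~270.833 kbps
print(f"Frames per hyperframe: {frames_per_hyperframe}")            # 2,715,648
print(f"Hyperframe duration  : {frames_per_hyperframe*frame_duration_s/3600:.2f} h")
```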
Fig. GSM Frame Structure
As shown in the figure below, there are two variants of the multiframe structure.
1. 26-frame multiframe - called the traffic multiframe, composed of 26 bursts in a duration of 120 ms; out of these, 24 are used for traffic, one for the SACCH and one is not used.
2. 51-frame multiframe - called the control multiframe, composed of 51 bursts in a duration of 235.4 ms. This type of multiframe is divided into logical channels. These logical channels are time-scheduled by the BTS. They always occur on the beacon frequency in time slot 0, and may also take up other time slots (for example 2, 4, 6) if required by the system.
As shown in the figure, each ARFCN (i.e. each channel in GSM) has 8 time slots, TS0 to TS7. During network entry each GSM mobile phone is allocated one slot in the downlink and one slot in the uplink. In the figure, the GSM mobile is allocated 890.2 MHz in the uplink and 935.2 MHz in the downlink. As mentioned, TS0 is allocated, and it follows either the 51- or the 26-frame multiframe structure. Hence if 'F' (the FCCH) is depicted at the start, then 4.615 ms later (one TDMA frame) S (SCH) will appear, then after another frame B (BCCH) will appear, and so on until the 51-frame multiframe structure is completed; the cycle continues as long as the connection between the mobile and the base station is active. Similarly, in the uplink the 26-frame multiframe structure is followed, where T is TCH/FS (traffic channel for full-rate speech) and S is the SACCH. The GSM frame structure can best be understood from the figure below, with respect to the downlink (BTS to MS) and uplink (MS to BTS) directions.
GPRS Architecture:
GPRS architecture works on the same principles as the GSM network, but has additional entities that allow packet data transmission. This data network overlays the second-generation GSM network, providing packet data transport at rates from 9.6 to 171 kbps. Along with packet data transport, the GSM network accommodates multiple users sharing the same air interface resources concurrently.
Following is the GPRS Architecture diagram:
GPRS Support Nodes (GSNs): The deployment of GPRS requires the installation of new core network elements called the Serving GPRS Support Node (SGSN) and the Gateway GPRS Support Node (GGSN).
Databases (HLR, VLR, etc.): All the databases involved in the network require software upgrades to handle the new call models and functions introduced by GPRS.
GPRS Mobile Stations
New Mobile Stations (MS) are required to use GPRS services because existing GSM phones do not
handle the enhanced air interface or packet data. A variety of MS can exist, including a high-speed
version of current phones to support high-speed data access, a new PDA device with an embedded GSM
phone, and PC cards for laptop computers. These mobile stations are backward compatible for making
voice calls using GSM.
GPRS Base Station Subsystem
Each BSC requires the installation of one or more Packet Control Units (PCUs) and a software upgrade.
The PCU provides a physical and logical data interface to the Base Station Subsystem (BSS) for packet
data traffic. The BTS can also require a software upgrade but typically does not require hardware
enhancements.
When either voice or data traffic is originated at the subscriber mobile, it is transported over the air
interface to the BTS, and from the BTS to the BSC in the same way as a standard GSM call. However, at
the output of the BSC, the traffic is separated; voice is sent to the Mobile Switching Center (MSC) per
standard GSM, and data is sent to a new device called the SGSN via the PCU over a Frame Relay
interface.
GPRS Support Nodes
Following two new components, called Gateway GPRS Support Nodes (GSNs) and, Serving GPRS
Support Node (SGSN) are added:
Gateway GPRS Support Node (GGSN)
The Gateway GPRS Support Node acts as an interface and a router to external networks. It contains
routing information for GPRS mobiles, which is used to tunnel packets through the IP based internal
backbone to the correct Serving GPRS Support Node. The GGSN also collects charging information
connected to the use of the external data networks and can act as a packet filter for incoming traffic.
Serving GPRS Support Node (SGSN)
The Serving GPRS Support Node is responsible for authentication of GPRS mobiles, registration of
mobiles in the network, mobility management, and collecting information on charging for the use of the
air interface.
Internal Backbone
The internal backbone is an IP based network used to carry packets between different GSNs. Tunnelling
is used between SGSNs and GGSNs, so the internal backbone does not need any information about
domains outside the GPRS network. Signalling from a GSN to a MSC, HLR or EIR is done using SS7.
Routing Area
GPRS introduces the concept of a Routing Area. This concept is similar to Location Area in GSM,
except that it generally contains fewer cells. Because routing areas are smaller than location areas, fewer radio resources are used while broadcasting a paging message.
GSM vs GPRS:
● GSM stands for Global System for Mobile communications; GPRS stands for General Packet Radio Service.
● GSM is a cellular standard for mobile phone communications catering to voice services and data delivery using digital modulation, with SMS having a profound effect on society; GPRS is an upgrade of GSM, adding much higher data speeds and simple wireless access to packet data networks on top of the basic GSM features.
● The frequency bands used in the GSM system are 900 and 1800 MHz; GPRS uses the 850, 900, 1800 and 1900 MHz bands.
● GSM provides data rates of 9.6 kbps; GPRS provides data rates of 14.4 to 115.2 kbps.
● In GSM, billing is based on the duration of the connection; in GPRS, billing is based on the amount of data transferred.
● GSM does not allow a direct connection to the internet; GPRS allows a direct connection to the internet.
● In GSM, a single time slot is allotted to a single user; in GPRS, multiple time slots can be allotted to a single user.
● GSM uses the location area concept; GPRS uses the routing area concept.
● SMS (Short Messaging Service) is one of the popular features of GSM; MMS (Multimedia Messaging Service) is one of the popular features of GPRS.
EDGE:
What is EDGE(Enhanced Data Rate for GSM Evolution)?
EDGE (Enhanced Data Rate for GSM Evolution) provides a higher rate of data transmission than normal GSM. It is a backward-compatible extension of GSM digital mobile technology. EDGE is a pre-3G radio technology and forms part of the ITU's 3G definition. It can work on any network deployed with GPRS (with the necessary upgrades).
In order to increase data transmission speed, EDGE was deployed on the GSM network in 2003 by
Cingular in the USA.
Working
It uses 8PSK modulation in order to achieve a higher data transmission rate. The modulation format is
changed to 8PSK from GMSK. This provides an advantage as it is able to convey 3 bits per symbol, and
increases the maximum data rate. However, this upgrade required a change in the base station.
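The gain is easy to see numerically: GSM/GPRS uses GMSK (1 bit per symbol) while EDGE uses 8PSK (3 bits per symbol) at the same symbol rate, so the raw per-carrier rate roughly triples. A small sketch (the 270.833 ksym/s symbol rate is the standard GSM figure):

```python
# Raw modulation rate comparison between GMSK (GSM/GPRS) and 8PSK (EDGE).
symbol_rate_sps = 270.833e3     # GSM channel symbol rate

gmsk_bits_per_symbol = 1        # GMSK carries 1 bit per symbol
psk8_bits_per_symbol = 3        # 8PSK carries 3 bits per symbol

gprs_gross_bps = symbol_rate_sps * gmsk_bits_per_symbol
edge_gross_bps = symbol_rate_sps * psk8_bits_per_symbol
print(f"GMSK gross rate per carrier : {gprs_gross_bps/1e3:.1f} kbps")
print(f"8PSK gross rate per carrier : {edge_gross_bps/1e3:.1f} kbps")
print(f"Raw rate increase           : {edge_gross_bps/gprs_gross_bps:.0f}x")
```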
Fig. EDGE in GSM
Features
● It provides an evolutionary migration path from GPRS to UMTS.
● It is standardized by 3GPP.
● EDGE is used for any packet-switched application, such as an Internet connection.
● EDGE delivers higher bit-rates per radio channel, increasing capacity and performance.
Advantages
● It has higher speed.
● It is an "always-on" connection.
● It is more reliable and efficient.
● It is cost efficient.
Disadvantages
● It consumes more battery.
● Hardware needs to be upgraded.
Fig. GPRS and EDGE architecture
General Packet Radio Service (GPRS): The first big step in the move to 3G happened through the launch of GPRS. Cellular services combined with GPRS are referred to as 2.5G. GPRS was capable of giving data rates ranging from 56 kbps up to a maximum of 114 kbps. This can be used for services like Wireless Application Protocol access, Multimedia Messaging Service (MMS), Short Message Service (SMS) and internet communication services like World Wide Web access and email. GPRS data transfer is usually charged per megabyte of traffic transferred, while data communication via the usual circuit switching is charged per minute of connection time, regardless of whether the consumer actually uses the capacity or is just in idle mode. GPRS is a best-effort packet-switched service, in contrast to circuit switching, where a certain Quality of Service (QoS) is guaranteed during the connection for non-mobile users. It gives medium-speed data transfer through the use of idle Time Division Multiple Access (TDMA) channels.

Enhanced Data rates for GSM Evolution (EDGE): Further enhancement to the GSM network is provided by EDGE technology, which provides up to three times the data capacity of GPRS. Using EDGE, operators can handle three times more subscribers than GPRS, triple their data rate per subscriber, or add extra capacity to their voice communication. EDGE allows the delivery of advanced mobile services such as the downloading of video and music clips, multimedia messaging, high-speed Internet access and e-mail on the move [14]. EDGE is essentially a GSM/GPRS radio interface with a set of enhancements to support higher peak data rates and better data throughput than GPRS. According to the US operators intending to use EDGE, it is mainly intended to provide 3G-type data services in a combined GSM and TIA/EIA-136 footprint in all of the existing 800/900/1800/1900 MHz frequency bands [16]. GPRS networks evolve to EDGE networks through the introduction of 8PSK encoding. Enhanced Data rates for GSM Evolution (also known as IMT Single Carrier, IMT-SC, or Enhanced GPRS) is a backward-compatible digital mobile phone technology allowing improved data transmission rates as an extension over standard GSM. EDGE can be counted as a 3G radio technology and is included in the ITU's 3G definition, but it is frequently referred to as 2.75G. It was launched on GSM networks, starting in 2003, by Cingular. 3GPP standardized EDGE as part of the GSM family of specifications. The specification achieves higher data rates by switching to a more sophisticated modulation, particularly 8PSK, inside the GSM timeslots [1]. The figure shows the GPRS and EDGE architecture. GPRS is a 2.5G solution that provides a medium-speed packet data service for a wireless network.
General Packet Radio Service / Enhanced Data rates for Global Evolution
GSM is a circuit-switched network; ideal for the delivery of voice but with limitations for sending data.
However, the standard for GSM was designed to evolve and in 2000 the introduction of General Packet
Radio Service (GPRS) added packet-switched functionality, kick-starting the delivery of the Internet on
mobile handsets.
EDGE… almost 3G
The next advance in GSM radio access technology was EDGE (Enhanced Data rates for Global
Evolution), or Enhanced GPRS.
With a new modulation technique yielding a three-fold increase in bit rate (8PSK replacing GMSK) and
new channel coding for spectral efficiency, EDGE was successfully introduced without disrupting the
frequency re-use plans of existing GSM deployments.
The increase in data speeds to 384 Kbps placed EDGE as an early foretaste of 3G, although it was labeled 2.75G by industry watchers.
EDGE+
Ongoing standards work in 3GPP has delivered EDGE Evolution, which is designed to complement
high-speed packet access (HSPA) coverage.
EDGE Evolution has:
● Improved spectral efficiency with reduced latencies down to 100ms
● Increased throughput speeds to 1.3Mbps in the downlink and 653Kbps in the uplink
GPRS (Release 97) and EDGE (Release 98) are largely maintained in the RAN6 Working Group of 3GPP,
which succeeded TSG GERAN when it was closed in 2016.
Reading should start with the 44 series and 45 series of the 3GPP specifications.
UMTS:
UMTS, the Universal Mobile Telecommunications System, is the 3G successor to the GSM family of standards, including GPRS and EDGE. 3G UMTS uses a completely different radio interface based around the use of Direct Sequence Spread Spectrum, i.e. CDMA or Code Division Multiple Access. Although 3G UMTS uses a completely different radio access standard, the core network is the same as that used for GPRS and EDGE, carrying separate circuit-switched voice and packet data.
UMTS uses a wideband version of CDMA occupying a 5 MHz wide channel. Being wider than its competitor CDMA2000, which only used a 1.25 MHz channel, the modulation scheme was known as wideband CDMA, or WCDMA/W-CDMA, and this name was often used to refer to the whole system. It is a form of telecommunication used for wireless reception and transmission. It is an evolution in speed from the older 2G standard and can increase data transmission rates between devices and servers.
UMTS Applications
● Streaming / Download (Video, Audio)
● Videoconferences.
● Fast Internet / Intranet.
● Mobile E-Commerce (M-Commerce)
● Remote Login
● Background Class applications
● Multimedia-Messaging, E-Mail
● FTP Access
● Mobile Entertainment (Games)
Features of UMTS
● UMTS is a component of the IMT-2000 standard of the International Telecommunication Union (ITU), developed by 3GPP.
● It uses the wideband code division multiple access (W-CDMA) air interface.
● It provides transmission of text, digitized voice, video and multimedia.
● It provides high bandwidth to mobile operators.
● It provides a high data rate of 2 Mbps.
● For High-Speed Downlink Packet Access (HSDPA) handsets, the data rate is as high as 7.2 Mbps in the downlink.
● It is also known as Freedom of Mobile Multimedia Access (FOMA).
Advantages of UMTS
● UMTS is a successor to the 2G-based GSM technologies, including GPRS and EDGE, and has gained a third name, 3GSM, since it is a 3G migration path for GSM.
● It supports 2 Mbit/s data rates.
● It offers higher data rates at lower incremental cost.
● It brings the benefits of automatic international roaming plus integral security and billing functions, allowing operators to migrate from 2G to 3G while retaining many of their existing back-office systems.
● It gives operators the flexibility to introduce new multimedia services to business users and consumers.
● This not only gives the user a more useful phone but also translates into higher revenues for the operator.
Disadvantages of UMTS
● It is more expensive than GSM.
● The Universal Mobile Telecommunication System has a poor video experience.
● The Universal Mobile Telecommunication System is still not true broadband.
Fig. UMTS architecture
As shown in the figure, there are three main components in the UMTS network architecture. The User Equipment (UE) is composed of the Mobile Equipment (ME) and the USIM. The Radio Access Network is composed of the NodeB and the RNC. The Core Network is composed of circuit-switched and packet-switched functional modules.
For Circuit switched (CS) operations MSC and GMSC along with database modules such as VLR, HLR
will be available. For packet switched (PS) operations SGSN and GGSN will serve the purpose. GMSC
will be connected with PSTN/ISDN in CS case. GGSN is connected with Packet data Network (PDN) for
PS case. Interfaces between these entities are summarized below.
● Uu: interface between UE and NodeB
● Iub: interface between NodeB and RNC
● Iur: interface between RNC and RNC
● Iu-CS: interface between RNC and MSC
● Iu-PS: interface between RNC and SGSN
The USIM also contains a short message storage area that allows messages to stay with the
user even when the phone is changed. Similarly "phone book" numbers and call information
of the numbers of incoming and outgoing calls are stored.
The UE can take a variety of forms, although the most common format is still a version of a "mobile
phone" although having many data capabilities. Other broadband dongles are also being widely used.
CDMA2000
CDMA2000 is a code division multiple access (CDMA) version of IMT-2000 specifications developed by
International Telecommunication Union (ITU).
It includes a group of standards for voice and data services −
● Voice − CDMA2000 1xRTT, 1X Advanced
● Data − CDMA2000 1xEV-DO (Evolution-Data Optimized)
Features
● CDMA2000 is a family of technology for 3G mobile cellular communications for transmission of
voice, data and signals.
● It supports mobile communications at speeds between 144Kbps and 2Mbps.
● It has packet core network (PCN) for high speed secured delivery of data packets.
● It applies multicarrier modulation techniques to 3G networks. This gives higher data rate, greater
bandwidth and better voice quality. It is also backward compatible with older CDMA versions.
● It has multi-mode, multi-band roaming features.
Fig. CDMA 2000
The Radio Access Network (RAN) consists of multiple base stations, called Base station Transceiver
Systems (BTS). Each BTS is connected to a Base Station Controller (BSC). The Selection and
Distribution Unit (SDU) makes it possible for a BSC to connect to the Core Network (CN). Just like
UMTS, both CS and PS service domains are supported by CDMA2000.
The PS traffic is distributed by the SDU via interfaces A8 and A9 to Packet Control Function
(PCF) and then to Packet Data Serving Node (PDSN). The A8 interface provides data and A9 supports
signaling between PCF and SDU respectively.
Similarly, data and signaling is supported by A10 and A11 interfaces between PCF and PDSN. A
PDSN connects to one or more BSCs, which establishes, maintains and terminates link layer sessions to
MS. PDSN supports compression and packet filtering on the basis of Point to Point Protocol (PPP) whose
parameters can be negotiated between PDSN and Mobile Node (MN).
PDSN is also associated with an Authentication, Authorization and Accounting (AAA) server in
the service provider network.
CDMA2000 also supports four QoS classes (Conversational, Interactive, Streaming and Background) in the same manner as UMTS, conversational being the most delay-sensitive traffic and background the least delay-sensitive.

A DiffServ Domain implementing LLQ
The DiffServ domain that we have selected for our example CDMA2000 network consists of DiffServ routers implemented with Low Latency Queueing (LLQ).
LLQ evolved from Class Based Weighted Fair Queueing (CBWFQ); therefore, we briefly explain CBWFQ before moving on to LLQ. CBWFQ is a combination of Custom Queueing (CQ) and Weighted Fair Queueing (WFQ).
With CBWFQ, as with CQ, we can specify the number of bytes for each queue in order to reserve a minimum bandwidth, and we also have the option of reserving bandwidth as an actual percentage of traffic. CBWFQ behaves like WFQ in that it can use WFQ inside one particular queue (called the class-default queue), but it differs from WFQ in that it does not keep track of flows for all traffic. CBWFQ can classify packets on any marking scheme, e.g. DSCP, MPLS, etc. The drop policy available is tail drop or WRED, configurable per queue. The maximum number of queues available at each output interface is 64 (one is the class-default queue), each queue having a maximum length of 64 packets. However, it is possible to configure the number of queues according to the specific requirement at each output interface. The output scheduler simply serves the configured number of queues and skips the other queues. The scheduling inside each queue is FIFO, except for the class-default queue, where one can select FIFO or WFQ. LLQ is not a separate queueing tool, but rather an option of CBWFQ applied to one or more classes. CBWFQ treats these classes as strict priority queues, and always services packets in these classes if a packet is waiting, just as PQ does for the highest priority queue. However, an important aspect of LLQ is that it always serves high-priority queues within the policed bandwidth. It is possible to have one low-latency queue inside a single policy map, and it is also possible to have more than one LLQ in a single policy map. Queueing does not differ when comparing a single LLQ with multiple low-latency queues in a single policy map: the scheduler always serves low-latency queues first compared to higher-latency queues, but it does not reorder packets between the various low-latency queues, which means it serves them in FIFO order.
C. CDMA2000-to-IP QoS Mapping
In CDMA2000, QoS is based on DiffServ policies from AAA profiles and parameters from the HLR. For a mobile station there can be multiple DiffServ QoS profiles from the PDSN into the IP network. If the mobile station marks its traffic to the PDSN with a DiffServ class indicator, the PDSN can accept this classification or has the option to overwrite the marking with another DiffServ class based on the AAA profile. If a mobile station does not mark its data traffic to the PDSN, the PDSN may optionally classify and mark the traffic with a suitable DiffServ class based on the AAA profile [44]. In our example, we perform the mapping as follows. Because we have selected a DiffServ domain in which routers are implemented with low-latency queueing, we configure 4 queues out of the 64 available queues at the output interface. We use two low-latency queues and two high-latency queues. We mark the traffic according to the marking rules of the DiffServ domain, based on priority. We mark conversational traffic with DSCP EF and assign it to the first low-latency queue (queue no. 1). The interactive traffic is marked with DSCP AF41 and put into queue number 2, which is also a low-latency queue. We mark the streaming traffic with DSCP AF31 and put it into the 3rd queue and, finally, we mark background traffic with DSCP BE and assign it to the 4th queue.
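The class-to-DSCP-to-queue mapping just described can be written down as a simple lookup. This is only a sketch of the mapping chosen in this example, with a hypothetical data structure, not a router configuration.

```python
# Mapping of CDMA2000 traffic classes to DSCP markings and LLQ queues,
# exactly as chosen in the example above (queues 1 and 2 are low-latency).
QOS_MAPPING = {
    "conversational": {"dscp": "EF",   "queue": 1, "low_latency": True},
    "interactive":    {"dscp": "AF41", "queue": 2, "low_latency": True},
    "streaming":      {"dscp": "AF31", "queue": 3, "low_latency": False},
    "background":     {"dscp": "BE",   "queue": 4, "low_latency": False},
}

def classify(traffic_class: str) -> dict:
    """Return the DSCP marking and output queue for a CDMA2000 traffic class."""
    return QOS_MAPPING[traffic_class]

print(classify("conversational"))   # {'dscp': 'EF', 'queue': 1, 'low_latency': True}
```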
Fig. Scheduler logic of Low Latency Queueing
D. Mechanism to Build the Function Matrix for CDMA2000
In this section, we build the function matrix for a CDMA2000 network based on a DiffServ domain with low-latency queueing. We will calculate end-to-end delay and throughput for each traffic class passing through this domain, as in the UMTS case, using the same assumptions.
The high-level network architecture of LTE comprises the following three main components:
● The User Equipment (UE).
● The Evolved UMTS Terrestrial Radio Access Network (E-UTRAN).
● The Evolved Packet Core (EPC).
The evolved packet core communicates with packet data networks in the outside world such as the
internet, private corporate networks or the IP multimedia subsystem. The interfaces between the different
parts of the system are denoted Uu, S1 and SGi as shown below:
Below is a brief description of each of the components shown in the above architecture:
● The Home Subscriber Server (HSS) component has been carried forward from UMTS and GSM
and is a central database that contains information about all the network operator's subscribers.
● The Packet Data Network (PDN) Gateway (P-GW) communicates with the outside world, i.e. packet data networks (PDN), using the SGi interface. Each packet data network is identified by an
access point name (APN). The PDN gateway has the same role as the GPRS support node
(GGSN) and the serving GPRS support node (SGSN) with UMTS and GSM.
● The serving gateway (S-GW) acts as a router, and forwards data between the base station and the
PDN gateway.
● The mobility management entity (MME) controls the high-level operation of the mobile by
means of signalling messages and Home Subscriber Server (HSS).
● The Policy Control and Charging Rules Function (PCRF) is a component which is not shown in
the above diagram but it is responsible for policy control decision-making, as well as for
controlling the flow-based charging functionalities in the Policy Control Enforcement Function
(PCEF), which resides in the P-GW.
The interface between the serving and PDN gateways is known as S5/S8. This has two slightly different
implementations, namely S5 if the two devices are in the same network, and S8 if they are in different
networks.
Functional split between the E-UTRAN and the EPC
Following diagram shows the functional split between the E-UTRAN and the EPC for an LTE network:
Legacy 2G/3G element → LTE/EPC equivalent:
● SGSN / PDSN-FA → S-GW
● GGSN / PDSN-HA → PDN-GW
● HLR / AAA → HSS
● VLR → MME
● SS7-MAP / ANSI-41 / RADIUS → Diameter
● MIP → PMIP
Network Architecture
Many existing deployed networks utilize a mesh network architecture. In a mesh network, the individual
end-nodes forward the information of other nodes to increase the communication range and cell size of
the network. While this increases the range, it also adds complexity, reduces network capacity, and
reduces battery lifetime as nodes receive and forward information from other nodes that is likely
irrelevant for them. Long range star architecture makes the most sense for preserving battery lifetime
when long-range connectivity can be achieved.
In a LoRaWAN™ network nodes are not associated with a specific gateway. Instead, data transmitted by
a node is typically received by multiple gateways. Each gateway will forward the received packet from
the end-node to the cloud-based network server via some backhaul (either cellular, Ethernet, satellite, or
Wi-Fi).
The intelligence and complexity are pushed to the network server, which manages the network and will filter
redundant received packets, perform security checks, schedule acknowledgments through the optimal
gateway, and perform adaptive data rate, etc. If a node is mobile or moving there is no handover needed from
gateway to gateway, which is a critical feature to enable asset tracking applications–a major target
application vertical for IoT.
Battery Lifetime
The nodes in a LoRaWAN™ network are asynchronous and communicate when they have data ready to
send whether event-driven or scheduled. This type of protocol is typically referred to as the Aloha
method. In a mesh network or with a synchronous network, such as cellular, the nodes frequently have to
‘wake up’ to synchronize with the network and check for messages. This synchronization consumes
significant energy and is the number one driver of battery lifetime reduction. In a recent study and
comparison done by GSMA of the various technologies addressing the LPWAN space, LoRaWAN™
showed a 3 to 5 times advantage compared to all other technology options.
Network Capacity
In order to make a long range star network viable, the gateway must have a very high capacity or
capability to receive messages from a very high volume of nodes. High network capacity in a
LoRaWAN™ network is achieved by utilizing adaptive data rate and by using a multichannel
multi-modem transceiver in the gateway so that simultaneous messages on multiple channels can be
received. The critical factors affecting capacity are the number of concurrent channels, data rate (time on
air), the payload length, and how often nodes transmit.
Since LoRa® is a spread spectrum based modulation, the signals are practically orthogonal to each other
when different spreading factors are utilized. As the spreading factor changes, the effective data rate also
changes.
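The dependence of data rate on spreading factor can be illustrated with the commonly quoted LoRa bit-rate approximation Rb = SF · (BW / 2^SF) · CR; the 125 kHz bandwidth and 4/5 coding rate below are typical values assumed for this sketch, not figures from the text.

```python
# LoRa bit rate versus spreading factor (approximation Rb = SF * BW / 2**SF * CR).
bandwidth_hz = 125e3     # typical LoRa channel bandwidth (assumed)
coding_rate = 4 / 5      # typical coding rate (assumed)

for sf in range(7, 13):
    bit_rate = sf * (bandwidth_hz / 2 ** sf) * coding_rate
    print(f"SF{sf}: ~{bit_rate:7.0f} bps")
```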
The gateway takes advantage of this property by being able to receive multiple different data rates on the
same channel at the same time. If a node has a good link and is close to a gateway, there is no reason for it
to always use the lowest data rate and fill up the available spectrum longer than it needs to.
By shifting the data rate higher, the time on air is shortened opening up more potential space for other
nodes to transmit. Adaptive data rate also optimizes the battery lifetime of a node. In order to make
adaptive data rate work, symmetrical up link and down link is required with sufficient downlink capacity.
These features enable a LoRaWAN™
network to have a very high capacity and make the network scalable.
A network can be deployed with a minimal amount of infrastructure, and as capacity is needed, more
gateways can be added, shifting up the data rates, reducing the amount of overhearing to other gateways,
and scaling the capacity by 6-8x.
Other LPWAN alternatives do not have the scalability of LoRaWAN™ due to technology trade-offs,
which limit downlink capacity or make the downlink range asymmetrical to the uplink range.
Device Classes – Not All Nodes Are Created Equal
End-devices serve different applications and have different requirements. In order to optimize a variety of
end application profiles, LoRaWAN™ utilizes different device classes. The device classes trade off
network downlink communication latency versus battery lifetime. In a control or actuator-type
application, the downlink communication latency is an important factor.
Security
It is extremely important for any LPWAN to incorporate security. LoRaWAN™ utilizes two layers of
security: one for the network and one for the application.
The network security ensures authenticity of the node in the network while the application layer of
security ensures the network operator does not have access to the end user’s application data. AES
encryption is used with the key exchange utilizing an IEEE EUI64 identifier.
There are trade-offs in every technology choice but the LoRaWAN™ features in network architecture,
device classes, security, scalability for capacity, and optimization for mobility address the widest variety
of potential IoT applications.
Module No. 3: Wireless Metropolitan and Local Area Networks
Point-to-multipoint bridge
This topology is used to connect three or more LANs that may be located on different floors in a building
or across buildings (as shown in the following image).
Wireless Technologies
Wireless technologies can be classified in different ways depending on their range. Each wireless
technology is designed to serve a specific usage segment. The requirements for each usage segment are
based on a variety of variables, including Bandwidth needs, Distance needs and Power.
Wireless Wide Area Network (WWAN)
This network enables you to access the Internet via a wireless wide area network (WWAN) access card
and a PDA or laptop.
These networks provide a very fast data speed compared with the data rates of mobile
telecommunications technology, and their range is also extensive. Cellular and mobile networks based on
CDMA and GSM are good examples of WWAN.
Wireless Personal Area Network (WPAN)
These networks are very similar to WWAN except their range is very limited.
Wireless Local Area Network (WLAN)
This network enables you to access the Internet in localized hotspots via a wireless local area network
(WLAN) access card and a PDA or laptop.
It is a type of local area network that uses high-frequency radio waves rather than wires to
communicate between nodes.
These networks provide a very fast data speed compared with the data rates of mobile
telecommunications technology, and their range is very limited. Wi-Fi is the most widespread and popular
example of WLAN technology.
Wireless Metropolitan Area Network (WMAN)
This network enables you to access the Internet and multimedia streaming services via a wireless region
area network (WRAN).
These networks provide a very fast data speed compared with the data rates of mobile telecommunication technology as well as other wireless networks, and their range is also extensive.
Issues with Wireless Networks
There are following three major issues with Wireless Networks.
● Quality of Service (QoS) − One of the primary concerns about wireless
data delivery is that, unlike the Internet through wired services,
QoS is inadequate. Lost packets and atmospheric interference are
recurring problems of the wireless protocols.
● Security Risk − This is another major issue with data transfer over a wireless network. Basic network security mechanisms like the service set identifier (SSID) and Wired Equivalent Privacy (WEP) may be adequate for residences and small businesses, but they are inadequate for entities that require stronger security.
● Reachable Range − Normally, a wireless network offers a range of about 100 meters or less. Range is a function of antenna design and power. Nowadays the range of wireless networks can be extended to tens of miles, so this should not be an issue any more.
Wireless Broadband Access (WBA)
Broadband wireless is a technology that promises high-speed connection over the air. It uses radio waves
to transmit and receive data directly to and from the potential users whenever they want it. Technologies
such as 3G, Wi-Fi, WiMAX and UWB work together to meet unique customer needs.
WBA is a point-to-multipoint system which is made up of base station and subscriber equipment.
Instead of using the physical connection between the base station and the subscriber, the base station uses
an outdoor antenna to send and receive high-speed data and voice-to-subscriber equipment.
WBA offers an effective, complementary solution to wireline broadband, which has become
globally recognized by a high percentage of the population.
● 802.11b could transfer data at rates of up to 11 Mbps and uses the 2.4 GHz band
● 802.11i: Security
● Further releases of the standard took place as time progressed, each one providing improved performance or different capabilities, the major ones being: 802.11g (2003), 802.11n (2009), 802.11ac (2013), 802.11ax (2019)
WiFi Hotspots
A WiFi hotspot is created by installing an access point to an internet connection. The access point
transmits a wireless signal over a short distance. It typically covers around 300 feet. When a WiFi enabled
device such as a Pocket PC encounters a hotspot, the device can then connect to that network wirelessly.
Most hotspots are located in places that are readily accessible to the public such as airports, coffee shops,
hotels, book stores, and campus environments. 802.11b is the most common specification for hotspots
worldwide. The 802.11g standard is backwards compatible with .11b but .11a uses a different frequency
range and requires separate hardware such as an a, a/g, or a/b/g adapter. The largest public WiFi networks
are provided by private internet service providers (ISPs); they charge a fee to the users who want to
access the internet.
Fig. WiFi Application
Hotspots are increasingly developing around the world. In fact, T-Mobile USA controls more than 4,100
hotspots located in public locations such as Starbucks, Borders, Kinko's, and the airline clubs of Delta,
United, and US Airways. Even select McDonald's restaurants now feature WiFi hotspot access.
Any notebook computer with integrated wireless, a wireless adapter attached to the motherboard
by the manufacturer, or a wireless adapter such as a PCMCIA card can access a wireless network.
Furthermore, all Pocket PCs or Palm units with Compact Flash, SD I/O support, or built-in WiFi, can
access hotspots.
Some hotspots require a WEP key to connect; these are considered private and secure. As for open connections, anyone with a WiFi card can access that hotspot. So, in order to have internet access under WEP, the user must input the WEP key code.
BSS is the basic building block of WLAN. It is made of wireless mobile stations and an optional central
base station called Access Point.
Stations can form a network without an AP and can agree to be a part of a BSS.
A BSS without an AP cannot send data to other BSSs and defines a standalone network. It is called an ad-hoc network or Independent BSS (IBSS); i.e. a BSS without an AP is an ad-hoc network.
A BSS with AP is infrastructure network.
The figure below depicts an IBSS, BSS with the green coloured box depicting an AP.
ESS is made up of 2 or more BSSs with APs. BSSs are connected to the distribution system via their APs.
The distribution system can be any IEEE LAN such as Ethernet.
The topmost green box represents the distribution system and the other 2 green boxes represent the APs of
2 BSSs.
Wi-Fi wireless connectivity is an established part of everyday life. All smartphones have Wi-Fi
technology incorporated as one of the basic elements of the phone enabling low cost connectivity to be
provided. In addition to this, computers, laptops, tablets, cameras and very many other devices use Wi-Fi.
Wi-Fi access is available in many places via Wi-Fi access points or small DSL / Ethernet routers. Homes,
offices, shopping centres, airports, coffee shops and many more places offer Wi-Fi access.
Wi-Fi is now one of the major forms of communication for many devices and with home automation
increasing, even more devices are using it. Home Wi-Fi is a big area of usage of the technology with most
homes that use broadband connections to the Internet using Wi-Fi access as a key means of
communication.
Local area networks of all forms use Wi-Fi as one of the main forms of communication along with
Ethernet. For the home, office and many other areas, Wi-Fi is a major carrier of data.
To enable different items incorporating wireless technology like this to communicate with each other,
common standards are needed. The standard for Wi-Fi is IEEE 802.11. Variants like 802.11n or 802.11ac are separate standards within the overall 802.11 series, each defining different capabilities. By
releasing updated variants, the overall technology has been able to keep pace with the ever growing
requirements for more data and higher speeds, etc. Technologies including gigabit Wi-Fi are now widely
used.
Fig.How a Wi-Fi Access Point may be connected on an office local area network
Public Wi-Fi access points are typically used to provide local Internet access often on items like
smartphones or other devices without the need for having to use more costly mobile phone data. They are
also often located within buildings where the mobile phone signals are not sufficiently strong.
Home Wi-Fi systems often use an Ethernet router: this provides the Wi-Fi access point as well as Ethernet communications for desktop computers, printers and the like, as well as the all-important link to the Internet via a firewall. Being an Ethernet router, it translates the IP addresses (network address translation), which provides a firewall capability.
Although Wi-Fi links are established on either of the two main bands, 2.4 GHz and 5GHz, many Ethernet
routers and Wi-Fi access points provide dual band Wi-Fi connectivity and they will provide 2.4 GHz and
5 GHz Wi-Fi. This enables the best Wi-Fi links to be made regardless of usage levels and interference on
the bands.
There will typically be a variety of different Wi-Fi channels that can be used. The Wi-Fi access point or
Wi-Fi router will generally select the optimum channel to be used. If the access point or router provides
dual band Wi-Fi capability, a selection of the band will also be made. These days, this selection is
normally undertaken by the Wi-Fi access point or router, without user intervention so there is no need to
select 2.4 GHz or 5 GHz Wi-Fi as on older systems.
Fig. Home wifi
In order to ensure that the local area network to which the Wi-Fi access point is connected remains secure,
a password is normally required to be able to log on to the access point. Even home Wi-Fi networks use a
password to ensure that unwanted users do not access the network.
Many types of device can connect to Wi-Fi networks. Today devices like smartphones, laptops
and the like expect to use Wi-Fi and therefore it is incorporated as part of the product - no need to do
anything apart from connect. A lot of other devices also have Wi-Fi embedded in them: smart TVs,
cameras and many more. Their set up is also very easy.
Occasionally some devices may need a little more attention. These days, most desktop PCs will
come ready to use with Ethernet, and often they have Wi-Fi capability included. Some may not have
Wi-Fi incorporated and therefore that may need additional hardware if they are required to use Wi-Fi
links. An additional card in the PC, or an external dongle should suffice for this.
In general, most devices that need to communicate data electronically will have a Wi-Fi capability.
WiFi network types
Although most people are familiar with the basic way that a home Wi-Fi network might work, it is not the
only format for a WiFi network.
Essentially there are two basic types of Wi-Fi network:
● Local area network based network: This type of network may be loosely termed a LAN based
network. Here a Wi-Fi Access Point, AP is linked onto a local area network to provide wireless as well
as wired connectivity, often with more than one Wi-Fi hotspot.
The infrastructure application is aimed at office areas or to provide a "hotspot". The office may even
work wirelessly only and just have a Wireless Local Area Network, WLAN. A backbone wired network
is still required and is connected to a server. The wireless network is then split up into a number of cells,
each serviced by a base station or Access Point (AP) which acts as a controller for the cell. Each Access
Point may have a range of between 30 and 300 metres dependent upon the environment and the location
of the Access Point.
More normally a LAN based network will provide both wired and wireless access. This is the type of
network that is used in most homes, where a router which has its own firewall is connected to the
Internet, and wireless access is provided by a Wi-Fi access point within the router. Ethernet and often
USB connections are also provided for wired access.
● Ad hoc network: The other type of Wi-Fi network that may be used is termed an Ad-Hoc network.
These are formed when a number of computers and peripherals are brought together. They may be
needed when several people come together and need to share data or if they need to access a printer
without the need for having to use wire connections. In this situation the users only communicate with
each other and not with a larger wired network.
As a result there is no Wi-Fi Access Point and special algorithms within the protocols are used to enable
one of the peripherals to take over the role of master to control the Wi-Fi network with the others acting
as slaves.
This type of network is often used for items like games controllers / consoles to communicate.
WiFi hotspots
One of the advantages of using WiFi IEEE 802.11 is that it is possible to connect to the Internet when out
and about. Public WiFi access is everywhere - in cafes, hotels, airports, and very many other places.
Sometimes all that is required is to select a network and press the connect button. Others require a
password to be entered.
When looking at what Wi-Fi is, there are some key topics to consider. There are both theoretical and
practical issues to look at, depending on what is needed:
● Wi-Fi variants & standards: There are several different forms of Wi-Fi. The first that were widely
available were IEEE 802.11a and 802.11b. These have long been superseded by a variety of variants
offering much higher speeds and generally better levels of connectivity. Many different Wi-Fi standards
have been used, each with a different level of performance: IEEE 802.11a, 802.11b, 802.11g, 802.11n,
802.11ac, 802.11ad (Gigabit Wi-Fi), 802.11af (White-Fi), 802.11ah, 802.11ax, etc.
● Positioning a Wi-Fi router: The performance of a Wi-Fi router can be very dependent upon its
location. Place it badly and it will not be able to perform as well. By locating a router in the best
position, much better performance can be gained.
The location of the Wi-Fi access point or router is key to providing good performance. Locating it in the
right position can enable it to give much better service over more of the intended area.
● Using hotspots securely: Wi-Fi hotspots are everywhere, and they are very convenient to use,
providing cheap access to data services. But public Wi-Fi hotspots are not particularly secure - some are
very open and can expose the unwary user to having credentials and other sensitive details captured, or
their computers hacked.
When using public Wi-Fi, great care must be taken and several rules should be followed to ensure that
malicious users do not take advantage. Wi-Fi security is always a major issue.
When using a Wi-Fi link that could be monitored by someone close by, for example when in a coffee
shop, etc, make sure that the link is secure along with the website being browsed, i.e. only visit https
sites. It is always wise not to expose credit card details or login passwords, etc when on a public Wi-Fi
link, even if the Wi-Fi link is secure. It is all too easy for details to be gathered, and saved for use later.
If using a smartphone, it is far safer to use the mobile network itself. If necessary, when using a
laptop or tablet, link it to the smartphone as a personal hotspot, since this will have a password (remember
to choose a strong one) and is much less likely to be hacked.
Wi-Fi is now an essential part of the connectivity system, working alongside mobile communications,
local area wired connectivity and much more. With the growing use of various forms of wireless
connectivity for devices like smartphones and laptops, as well as connected televisions, security systems and
a host more, the use of Wi-Fi will only grow. In fact, with the Internet of Things now being a reality and
its use increasing, the use of Wi-Fi will continue to grow.
As new standards are developed its performance will improve, for office, local hotspot and home Wi-Fi
use alike. In the future, not only will speeds improve, with the introduction of aspects like Gigabit Wi-Fi,
but so will the methods of use and its flexibility. In this way, Wi-Fi will remain a chosen technology for short
range connectivity.
IEEE 802.11 protocol stack:
802.11g
802.11g uses the OFDM modulation methods of 802.11a, but operates in 2.4GHz ISM band [1, P. 302].
It has the same rates as 802.11a, as well as compatibility with 802.11b devices .
802.11n
802.11n was ratified in 2009. The aim of 802.11n was throughput of 100Mb/s after transmission
overheads were removed.
To meet the goal:
● Channel widths were doubled from 20 MHz to 40 MHz.
● Frame overhead was reduced by allowing a group of frames to be sent together.
● Up to four streams could be transmitted at a time using four antennas.
In 802.11n, the stream signals interfere at the receiver, but they can be separated using MIMO (Multiple
Input Multiple Output) techniques.
The MAC sublayer protocol
The 802.11 MAC sublayer is different from the Ethernet MAC sublayer for two reasons:
● Radios are almost always half duplex
● Transmission ranges of different stations might be different
802.11 uses the CSMA/CA (CSMA with Collision Avoidance) protocol. CSMA/CA is similar
to ethernet CSMA/CD. It uses channel sensing and exponential backoff after collisions, but
instead of entering backoff once a collision has been detected, CSMA/CA uses backoff
immediately (unless the sender has not used the channel recently and the channel is idle) [1, P.
303].
The algorithm backs off for a number of slots, for example 0 to 15 in the case of the OFDM
physical layer. The station waits until the channel is idle by sensing that there is no signal for a short
period of time. It counts down idle slots, pausing when frames are sent. When its counter reaches 0, it
sends its frame [1, P. 303].
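As a rough illustration of the countdown just described, the following Python sketch simulates the backoff behaviour; the random busy/idle channel and the 0-15 slot range (taken from the OFDM example above) are modelling assumptions, not the real 802.11 timing.

import random

def dcf_backoff(channel_busy, max_slots=16):
    """Toy model of the DCF backoff countdown described above."""
    counter = random.randrange(max_slots)   # e.g. 0..15 for the OFDM PHY
    elapsed_slots = 0
    while counter > 0:
        elapsed_slots += 1
        if channel_busy():
            continue                        # countdown pauses while frames are sent
        counter -= 1
    return elapsed_slots                    # the frame is transmitted at this point

random.seed(0)
busy = lambda: random.random() < 0.3        # toy channel: busy about 30% of the time
print("slots waited before sending:", dcf_backoff(busy))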
Acknowledgements “are used to infer collisions because collisions cannot be detected”.
This way of operating is called DCF (Distributed Coordination Function). In DCF, each station acts
independently, without central control.
The other problem facing 802.11 protocols is that transmission ranges differ between stations. It is possible
for transmissions in one part of a cell not to be received in another part of the cell, which can make it
impossible for a sender to sense a busy channel, resulting in collisions.
802.11 defines channel sensing to consist of physical and virtual sensing. Physical sensing “checks the
medium to see if there is a valid signal”.
With virtual sensing, each station keeps a record of what channel is in use. It does this with the NAV
(Network Allocation Vector). Each frame includes a NAV field that contains information on how long
the sequence that the frame is part of will take to complete [1, P. 305].
802.11 is designed to:
● Be reliable.
● Be power-saving.
● Provide quality of service.
The main strategy for reliability is to lower the transmission rate if too many frames are unsuccessful.
Lower transmission rates use more robust modulations. If too many frames are lost, a station can lower its
rate; if frames are successfully delivered, a station can test a higher rate to see if it should upgrade.
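The rate-adaptation idea can be expressed in a few lines of Python. The rate ladder and the loss thresholds below are illustrative assumptions, not values taken from the 802.11 standard.

RATES_MBPS = [6, 12, 24, 36, 48, 54]    # assumed rate ladder for illustration

def adapt_rate(index, lost, sent, drop_threshold=0.25, probe_threshold=0.05):
    """Step down to a more robust rate on heavy loss, probe upward on success."""
    loss = lost / sent if sent else 0.0
    if loss > drop_threshold and index > 0:
        return index - 1                 # fall back to a more robust modulation
    if loss < probe_threshold and index < len(RATES_MBPS) - 1:
        return index + 1                 # test the next higher rate
    return index

idx = adapt_rate(3, lost=12, sent=30)    # heavy loss at 36 Mb/s -> step down
print("new rate:", RATES_MBPS[idx], "Mb/s")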
Another strategy for successful transmissions is to send shorter frames. 802.11 allows frames to be split
into fragments, with their own checksum. The fragment size can be adjusted by the AP. Fragments are
numbered and sent using a stop-and-wait protocol.
802.11 uses beacon frames. Beacon frames are broadcast periodically by the AP. The frames advertise
the presence of the AP to clients and carry system parameters, such as the identifier of the AP, the time,
how long until the next beacon, and security settings.
Clients can set a power-management bit in frames that are sent to the AP to alert it that the client is
entering power-save mode. In power-save mode, the client rests and the AP buffers traffic intended for it.
The client wakes up for every beacon, and checks a traffic map that’s sent with the beacon. The traffic
map tells the client whether there is buffered traffic. If there is, the client sends a poll to the AP, and the
AP sends the buffered traffic.
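A client-side view of that power-save cycle might look like the sketch below. The beacon dictionary and the get_beacon / send_ps_poll helpers are purely hypothetical stand-ins for the buffering behaviour described above.

import time

def power_save_loop(get_beacon, association_id, send_ps_poll, beacons=3):
    """Toy power-save cycle: wake for each beacon, check the traffic map,
    and poll the AP only when traffic has been buffered for this client."""
    for _ in range(beacons):
        beacon = get_beacon()                       # client wakes up for the beacon
        if association_id in beacon["traffic_map"]:
            frames = send_ps_poll()                 # ask the AP for the buffered frames
            print("received", len(frames), "buffered frame(s)")
        time.sleep(beacon["interval"])              # doze until the next beacon

get_beacon = lambda: {"interval": 0.01, "traffic_map": {7}}   # hypothetical AP state
send_ps_poll = lambda: ["frame-1", "frame-2"]
power_save_loop(get_beacon, association_id=7, send_ps_poll=send_ps_poll)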
802.11 provides quality of service by extending CSMA/CA with carefully defined intervals between frames.
Different kinds of frames have different time intervals. The interval between regular data frames is called
the DIFS (DCF InterFrame Spacing). Any station can attempt to acquire the channel after it has
been idle for DIFS.
The shortest interval is SIFS (Short InterFrame Spacing). SIFS is used to send an ACK, other control
frames such as RTS, or another fragment (which prevents another station from transmitting in the middle
of a frame).
Different priorities of traffic are handled with different AIFS (Arbitration InterFrame Space)
intervals. A short AIFS allows the AP to send higher-priority traffic. An AIFS that is longer than DIFS
means the traffic will be sent after regular traffic.
Another quality of service mechanism is the transmission opportunity. Previously, CSMA/CA allowed only
one frame to be sent at a time, which slowed down stations with significantly faster rates. Transmission
opportunities give each station equal airtime, rather than an equal number of sent frames.
802.11 frame structure
There are three different classes of frames used in the air:
● Data
● Control
● Management
The second field in the data frame (after Frame Control) is the Duration field. This describes how long the
frame and its acknowledgements will occupy the channel, measured in microseconds. It is included in all
frames, including control frames.
The addresses to and from an AP follow the standard IEEE 802 format. Address 1 is the receiver,
Address 2 is the transmitter, and Address 3 is the address of the endpoint that originally sent the frame via the
AP.
The 16-bit Sequence field numbers frames so that duplicates can be detected. The first 4 bits identify the
fragment; the last 12 contain a number that is incremented on each transmission.
The Data field contains the payload. It can be up to 2312 bytes. The first bytes of the payload are used by the
LLC layer to identify the higher-layer protocol to which the data belongs.
The final part of the frame is the Frame Check Sequence field, containing a 32-bit CRC for validating the
frame.
“Management frames have the same format as data frames, plus a format for the data portion that varies
with the subtype (e.g. parameters in beacon frames)”
Control frames contain Frame Control, Duration, and Frame Check Sequence fields, but they might only
have one address and no Data section.
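The data-frame fields described in this section can be summarised in code. The sketch below models only the fields mentioned here (Duration, the three addresses, the split Sequence field, the payload and a 32-bit CRC); it is not a parser for real 802.11 frames, and for simplicity the CRC is computed over the payload only.

import zlib
from dataclasses import dataclass

@dataclass
class DataFrame:
    duration_us: int   # how long the frame and its acknowledgement occupy the channel
    addr1: str         # receiver
    addr2: str         # transmitter
    addr3: str         # endpoint that originally sent the frame via the AP
    sequence: int      # 16 bits: 4-bit fragment number + 12-bit sequence number
    payload: bytes     # up to 2312 bytes; the first bytes are used by the LLC layer

    def fragment_number(self):
        return self.sequence & 0x000F            # first 4 bits

    def sequence_number(self):
        return (self.sequence >> 4) & 0x0FFF     # remaining 12 bits

    def frame_check_sequence(self):
        return zlib.crc32(self.payload)          # 32-bit CRC used to validate the frame

frame = DataFrame(314, "aa:aa", "bb:bb", "cc:cc",
                  sequence=(42 << 4) | 1, payload=b"hello")
print(frame.fragment_number(), frame.sequence_number(), hex(frame.frame_check_sequence()))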
Services
802.11 defines a number of services that must be provided by conformant wireless LANs.
Mobile stations use the association service to connect to APs. Usually, the service is used just after a
station has moved within range of an AP. When the station is within range, it learns the identity and
capabilities of the AP through either beacon frames, or by asking the AP directly. The station sends a
request to associate with the AP, which the AP can either accept or reject .
The reassociation service is used to let a station change its preferred AP. If used correctly, there should
be no data loss during the handover. The station or the AP can also disassociate; the station should do
this before shutting down.
Stations should authenticate before sending frames via the AP. Authentication is handled differently
depending on the security scheme. If the network is open, anyone can use it. Otherwise credentials are
required. WPA2 (WiFi Protected Access 2) is the recommended approach that implements security
defined in the 802.11i standard. With WPA2, the AP communicates with an authentication server that
“has a username and password database to determine if the station is allowed to access the network”. A
password can also be configured (known as a pre-shared key)
The distribution service determines how to route frames from the AP. If the destination is local, the
frames are sent over the air. If they are not, they are forwarded over the wired network.
The integration service handles translation for frames to be sent outside the 802.11 LAN.
The data delivery service lets stations transmit and receive data using the protocols outlined in this
section.
A privacy service manages encryption and decryption. The encryption algorithm for WPA2 is based on
AES (Advanced Encryption Standard). The encryption keys are determined during authentication.
QoS traffic scheduling is used to handle traffic with different priorities. It uses the protocols described
in the MAC sublayer protocol section above.
“The transmit power control service gives stations the information they need to meet regulatory limits
on transmit power that vary from region to region.”
“The dynamic frequency selection service gives stations the information they need to avoid transmitting
on frequencies in the 5-GHz band that are being used for radar in the proximity.”
WiMax (Worldwide Interoperability of Microwave Access):
WiMAX is one of the hottest broadband wireless technologies around today. WiMAX systems are
expected to deliver broadband access services to residential and enterprise customers in an economical
way.
Loosely, WiMax is a standardized wireless version of Ethernet intended primarily as an
alternative to wire technologies (such as Cable Modems, DSL and T1/E1 links) to provide broadband
access to customer premises.
More strictly, WiMAX is an industry trade organization formed by leading communications,
component, and equipment companies to promote and certify compatibility and interoperability of
broadband wireless access equipment that conforms to the IEEE 802.16 and ETSI HIPERMAN standards.
WiMAX would operate similar to WiFi, but at higher speeds over greater distances and for a greater
number of users. WiMAX has the ability to provide service even in areas that are difficult for wired
infrastructure to reach and the ability to overcome the physical limitations of traditional wired
infrastructure.
WiMAX was formed in April 2001, in anticipation of the publication of the original 10-66 GHz
IEEE 802.16 specifications. WiMAX is to 802.16 as the WiFi Alliance is to 802.11.
WiMAX is
● Acronym for Worldwide Interoperability for Microwave Access.
● Based on Wireless MAN technology.
● A wireless technology optimized for the delivery of IP centric services over a wide area.
● A scalable wireless platform for constructing alternative and complementary broadband networks.
● A certification that denotes interoperability of equipment built to
the IEEE 802.16 or compatible standard. The IEEE 802.16 Working
Group develops standards that address two types of usage models −
○ A fixed usage model (IEEE 802.16-2004).
○ A portable usage model (IEEE 802.16e).
What is 802.16a ?
WiMAX is such an easy term that people tend to use it for the 802.16 standards and technology
themselves, although strictly it applies only to systems that meet specific conformance criteria laid down
by the WiMAX Forum.
The 802.16a standard for 2-11 GHz is a wireless metropolitan area network (MAN) technology
that will provide broadband wireless connectivity to Fixed, Portable and Nomadic devices.
It can be used to connect 802.11 hot spots to the Internet, provide campus connectivity, and
provide a wireless alternative to cable and DSL for last mile broadband access.
WiMax Speed and Range
WiMAX is expected to offer initially up to about 40 Mbps capacity per wireless channel for both fixed
and portable applications, depending on the particular technical configuration chosen, enough to support
hundreds of businesses with T-1 speed connectivity and thousands of residences with DSL speed
connectivity. WiMAX can support voice and video as well as Internet data.
WiMAX was developed to provide wireless broadband access to buildings, either in competition with
existing wired networks or alone in currently unserved rural or thinly populated areas. It can also be used
to connect WLAN hotspots to the Internet. WiMAX is also intended to provide broadband connectivity to
mobile devices. It would not be as fast as in these fixed applications, but expectations are for about 15
Mbps capacity in a 3 km cell coverage area.
With WiMAX, users could really cut free from today's Internet access arrangements and be able
to go online at broadband speeds, almost wherever they like from within a MetroZone.
WiMAX could potentially be deployed in a variety of spectrum bands: 2.3 GHz, 2.5 GHz, 3.5 GHz, and
5.8 GHz.
Why WiMax ?
● WiMAX can satisfy a variety of access needs. Potential applications include extending broadband
capabilities to bring them closer to subscribers, filling gaps in cable, DSL and T1 services, WiFi,
and cellular backhaul, providing last-100 meter access from fibre to the curb and giving service
providers another cost-effective option for supporting broadband services.
● WiMAX can support very high bandwidth solutions where large spectrum deployments (i.e. >10
MHz) are desired using existing infrastructure keeping costs down while delivering the
bandwidth needed to support a full range of high-value multimedia services.
● WiMAX can help service providers meet many of the challenges they face due to increasing
customer demands without discarding their existing infrastructure investments because it has the
ability to seamlessly interoperate across various network types.
● WiMAX can provide wide area coverage and quality of service capabilities for applications
ranging from real-time delay-sensitive voice-over-IP (VoIP) to real-time streaming video and
non-real-time downloads, ensuring that subscribers obtain the performance they expect for all
types of communications.
● WiMAX, which is an IP-based wireless broadband technology, can be integrated into both
wide-area third-generation (3G) mobile and wireless and wireline networks allowing it to become
part of a seamless anytime, anywhere broadband access solution.
Ultimately, WiMAX is intended to serve as the next step in the evolution of 3G mobile phones, via a
potential combination of WiMAX and CDMA standards called 4G.
Comparison of WiFi and WiMAX
● Definition: Wi-Fi stands for Wireless Fidelity. WiMAX stands for Worldwide Interoperability for
Microwave Access.
● Network Range: A Wi-Fi network ranges up to about 100 metres. A WiMAX network ranges up to
about 90 km.
● Frequency Band: Wi-Fi uses the unlicensed 2.4 GHz ISM band (802.11b/g) and the 5 GHz U-NII band
(802.11a). WiMAX uses licensed and unlicensed bands from 2 GHz to 11 GHz.
IP-based Architecture
The WiMAX Forum has defined a reference network architecture that is based on an all-IP platform. All
end-to-end services are delivered over an IP architecture relying on IP-based protocols for end-to-end
transport, QoS, session management, security, and mobility.
Mesh Mode:
WMNs can be seen as one type of MANET. An ad-hoc network (possibly mobile) is a set of
network devices that want to communicate but have no fixed infrastructure available and no
predetermined pattern of available communication links. The individual nodes of the network are
responsible for dynamically discovering the other nodes they can communicate with directly, i.e. their
neighbours (forming a multi-hop network). Ad-hoc networks are attractive because they can be used
in situations where the infrastructure is unavailable or unreliable, or even in emergency situations. A
mesh network is composed of multiple nodes / routers, which start to behave like a single large network,
enabling a client to connect to any of them. In this way it is possible to transmit messages from one
node to another along different paths. Mesh networks have the advantage of being low cost, easy to
deploy and reasonably fault tolerant.
In another analogy, a wireless mesh network can be regarded as a set of antennas spaced a certain
distance from each other so that each covers a portion of a target area or region. A first antenna covers
one area, the second antenna covers a contiguous area next to the first, and so on, like the cells of a
tissue or a spider web interconnecting various points and wireless clients. Whatever is inside these
cells, within the span of the antennas, can take advantage of the network services, provided that the
client has a wireless card supporting the interface technology.
Mesh networks have a dynamic topology that changes constantly as the network grows or shrinks.
They consist of nodes whose communication at the physical level occurs through variants of the IEEE
802.11 and IEEE 802.16 standards, and whose routing is dynamic. The image below shows an example
of a mesh network. In mesh networks, the access points / base stations are usually fixed.
When half-duplex SSs (subscriber stations) are used, the bandwidth controller does not allocate uplink
bandwidth to a half-duplex SS at the same time as it is expected to receive data on the downlink channel,
including an allowance for the propagation delay and the uplink/downlink transmission shift.
Wimax architecture:
Worldwide Interoperability of Microwave Access (WiMAX) is a fast-emerging wide-area wireless
broadband technology that shows great promise as the last mile solution for delivering high-speed Internet
access to the masses. It represents an inexpensive alternative to digital subscriber lines (DSL) and cable
broadband access, the installation costs for a wireless infrastructure based on IEEE 802.16 being far less
than today’s wired solutions, which often require laying cables and ripping up buildings and streets.
Wireless broadband access is set up like cellular systems, using base stations that service a radius
of several miles/kilometres. Base stations do not necessarily have to reside on a tower. More often than
not, the base station antenna will be located on a rooftop of a tall building or other elevated structure such
as a grain silo or water tower. A customer premise unit, similar to a satellite TV setup, is all it takes to
connect the base station to a customer. The signal is then routed via standard Ethernet cable either directly
to a single computer, or to an 802.11 hot spot or a wired Ethernet LAN.
The original 802.16 standard operates in the 10-66GHz frequency band and requires line-of-sight
towers. The 802.16a extension, ratified in January 2003, uses a lower frequency of 2-11GHz, enabling
nonline-of-sight connections. This constitutes a major breakthrough in wireless broadband access,
allowing operators to connect more customers to a single tower and thereby substantially reduce service
costs.
The IEEE 802.16-2004 standard subsequently revised and replaced the IEEE 802.16a and 802.16REVd
versions. This is designed for fixed-access usage models. This standard may be referred to as fixed
wireless because it uses a mounted antenna at the subscriber’s site. The antenna is mounted to a roof or
mast, similar to a satellite television dish. IEEE 802.16-2004 also addresses indoor installations, in which
case it may not be as robust as in outdoor installations.
The IEEE 802.16e standard is an amendment to the 802.16-2004 base specification and targets
the mobile market by adding portability and the ability for mobile clients with appropriate adapters to
connect directly to a WiMAX network.
WLL configuration:
BTS (Base Transceiver Station)
FSU (Fixed Subscriber Unit)
Loop: In telephony, a loop is the circuit from a subscriber’s phone to the line-terminating equipment at a
central office.
• Implementing a local loop, especially in rural areas, used to be a risk for many operators because of
the small number of users and the increased cost of materials. The loop lines are copper wires, which require
heavy investment.
• However, Wireless Local Loop (WLL) has now been introduced, and it solves most of these
problems.
• As WLL is wireless, the labour charges and time-consuming investments are no longer relevant.
• WLL systems can be based on one of the four below technologies:
1. Satellite-based systems.
2. Cellular-based systems.
3. Microcellular-based Systems
4. Fixed Wireless Access Systems
Deployment Issues:
• To compete with other local loop technologies, WLL needs to provide sufficient coverage and
capacity, high circuit quality and efficient data services.
• Moreover, the cost of WLL should be competitive with its wireline counterpart.
• Various issues are considered in WLL deployment, which include:
1. Spectrum: The implementation of WLL should be flexible enough to accommodate different frequency
bands, including non-contiguous bands. Moreover, these bands are licensed by the government.
2. Service quality: Customer expects that the quality of service should be better than the wireline
counterpart. The quality requirements include link quality, reliability and fraud immunity.
3. Network Planning: Unlike Mobile System, WLL assumes that user is stationary, not moving.
Also the network penetration should be greater than 90%. Therefore WLL should be installed
based on parameters like Population Density etc.
4. Economics: The major cost here is the electronic equipment. In the current scenario, the cost of such
electronic equipment is falling steadily.
• In traditional telephone networks, your phone would be connected to the nearest exchange through a
pair of copper wires.
• Wireless local loop (WLL) technology simply means that the subscriber is connected to the nearest
exchange through a radio link instead of through these copper wires.
Fig. WLL configuration
Advantages of WLL:
○ It eliminates the first mile or last mile construction of the network connection.
○ Low cost due to no use of conventional copper wires.
○ Much more secure due to digital encryption techniques used in wireless communication.
○ Highly scalable as it doesn’t require the installation of more wires for scaling it.
Features of WLL:
○ Internet connection via modem
○ Data service
○ Voice service
○ Fax service
1. Introduction
Wireless communication technologies have transformed the way devices and networks
exchange information, enabling innovative applications across diverse domains—from
consumer electronics to industrial automation and vehicular communications. This chapter
provides a comprehensive overview of four key areas in short-range wireless and ad hoc
networking:
1. IEEE 802.15.1 (Bluetooth): A short-range wireless standard for low-power, low-cost communication
among personal devices, organized into piconets and scatternets.
2. IEEE 802.15.4 (ZigBee): A popular Low-Rate Wireless Personal Area Network (LR-WPAN)
specification optimized for low-power, low-data-rate sensor and control applications.
3. Wireless Sensor Networks (WSN): Distributed networks of sensor nodes that monitor environmental
or system parameters and report to a central sink or base station.
4. Ad Hoc Networks: Focusing on Mobile Ad Hoc Networks (MANETs) and Vehicular Ad Hoc
Networks (VANETs), as well as the emerging Electrical Vehicular Ad Hoc Networks (E-
VANET).
Each section delves into the technical details, operational mechanisms, and contemporary
developments, emphasizing their significance in modern communication systems. By the end of
this chapter, readers will have an in-depth understanding of how these technologies function,
the challenges they address, and the avenues they open for future innovations.
2. IEEE 802.15.1 (Bluetooth)
Bluetooth, standardized under IEEE 802.15.1, is a short-range wireless technology designed for
low-power, low-cost communication among devices. Its core value lies in simplifying the
exchange of information—such as audio, data, and control signals—over short distances,
typically within 10 meters (Class 2 devices) or 100 meters (Class 1 devices). Since its inception,
Bluetooth has evolved through multiple versions (e.g., Bluetooth 5.x) to address demands for
higher throughput and improved energy efficiency.
2.1 Piconet
A piconet is the fundamental network topology in Bluetooth. It comprises one device acting as a
master and up to seven active slave devices. This master-slave relationship is central to how
devices coordinate access to the shared medium.
1. Definition and Structure:
o The master is responsible for timing and control. It defines the frequency-
hopping sequence and timing structure that all slaves must follow.
o Up to eight devices can be actively involved in the piconet (one master plus
seven slaves). Additional devices may be parked or held in low-power states,
waiting for scheduling.
2. Formation:
o Inquiry and Paging: When a device wants to join or create a piconet, it performs
an inquiry to discover nearby devices. The paging procedure follows to establish
a synchronized connection.
o Synchronization: The master sends out signals (frequency hops and timing
beacons), which the slaves use to synchronize their clocks and frequency
hopping.
3. Operational Mechanism:
o Frequency Hopping Spread Spectrum (FHSS): Bluetooth operates in the 2.4 GHz
ISM band and uses adaptive frequency hopping to mitigate interference. The
piconet hops through 79 channels (in most regions) at a rate of 1600 hops per
second.
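A quick numerical check of those figures: hopping 1600 times per second gives a dwell time of 625 microseconds per hop. The pseudo-random hop list in the sketch below is only a stand-in for the real Bluetooth hop-selection kernel, which derives the sequence from the master's clock and address.

import random

CHANNELS = 79            # 1 MHz channels in the 2.4 GHz ISM band (most regions)
HOPS_PER_SECOND = 1600

print("dwell time per hop:", 1_000_000 / HOPS_PER_SECOND, "microseconds")   # 625.0

def toy_hop_sequence(seed, hops=10):
    """Illustrative pseudo-random hop list -- not the real selection kernel."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

print("first hops:", toy_hop_sequence(seed=0xC0FFEE))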
2.2 Scatternet
A scatternet is formed when multiple piconets overlap or interconnect. While the piconet
structure is straightforward, building a scatternet adds complexity and requires devices to
operate in multiple piconets concurrently.
o Multiple Piconets: A device in one piconet may act as a slave in another piconet
or even take on the role of master. This multi-role capability allows Bluetooth
networks to scale beyond eight devices.
o Bridging: Devices that participate in more than one piconet are called bridge
devices. They forward data between piconets, effectively linking them into a
scatternet.
2. Interoperability:
o Common Protocol Stack: All devices adhere to the same Bluetooth protocol
stack. This uniformity enables interoperability as long as roles and timing are
managed correctly.
o Scheduling Complexity: A bridge device must synchronize with two or more sets
of frequency-hopping sequences. This often leads to scheduling challenges that
require careful time-slot allocation to avoid collisions.
3. Challenges:
The Bluetooth protocol stack is typically divided into core protocols, cable replacement and
telephony control protocols, and adopted protocols. This layered architecture provides a
structured way to manage everything from low-level radio frequency operations to high-level
application interactions.
1. Radio Layer:
2. Baseband Layer:
3. RFCOMM (Cable Replacement Protocol):
o A serial port emulation protocol that enables legacy applications to run over
Bluetooth as if they were using a standard serial link.
o Often used for dial-up networking, data transfer between PCs and phones, etc.
3. IEEE 802.15.4 (ZigBee)
Designed for low-power, low-data-rate applications, IEEE 802.15.4 underpins the ZigBee
standard, offering a robust foundation for sensor networks, industrial control, and home
automation. ZigBee extends the baseline physical (PHY) and medium access control (MAC)
specifications in IEEE 802.15.4 with a defined network layer and application framework, making
it a popular choice for wireless monitoring and control solutions.
ZigBee devices operate in Low-Rate Wireless Personal Area Networks (LR-WPANs). Their
architecture is optimized to use minimal power and handle small data bursts typical of sensing
or control signals.
1. Node Types:
o Coordinator: forms the network and manages devices joining it.
o Router: extends the network by relaying traffic through the mesh.
o End Device: a low-power node that communicates through its parent (a coordinator or router).
2. Functional Blocks:
o Radio Transceiver: Compliant with IEEE 802.15.4 PHY, typically using DSSS (Direct
Sequence Spread Spectrum) in the 2.4 GHz ISM band or sub-GHz bands (868 MHz
in Europe, 915 MHz in North America).
o Microcontroller (MCU): Runs the ZigBee stack (network and application layers)
and handles local processing.
o Sensors/Actuators: Interface with the physical environment (e.g., temperature
sensor, LED actuator).
3. Communication Model:
o Star Topology: A coordinator acts as the central node, with end devices
connecting to it directly.
ZigBee’s protocol stack, built on top of IEEE 802.15.4, consists of a Physical layer, MAC layer,
Network layer, and Application layer (including application support sub-layer and the ZigBee
Device Object, ZDO).
1. Physical (PHY) Layer:
o Typical data rates: 250 kbps at 2.4 GHz, lower in sub-GHz bands.
2. MAC Layer:
o Responsible for channel access, beaconing, frame validation, and
acknowledgments.
3. Network Layer:
o Routing Protocol: Often utilizes AODV (Ad hoc On-Demand Distance Vector) or
table-driven variants adapted for low-power mesh topologies.
4. Application Support Sub-layer (APS):
o Acts as an interface between the Network layer and the Application layer.
5. ZigBee Device Object (ZDO):
o Manages device roles (e.g., coordinator, router, end device) and network
functions (e.g., discovery of other devices, initiating or joining a network).
6. Application Framework:
The ZigBee Alliance maintains and updates specifications (e.g., ZigBee Pro, ZigBee 3.0), focusing
on interoperability and backward compatibility. In modern IoT ecosystems, ZigBee competes
with other low-power standards like Thread and BLE (Bluetooth Low Energy), but it remains
widely adopted in large-scale sensor and control networks.
4. Wireless Sensor Networks (WSN)
Wireless Sensor Networks (WSNs) are distributed networks of sensor nodes that autonomously
monitor environmental or system parameters (e.g., temperature, vibration, chemical
concentrations) and communicate the collected data to a central sink or base station. These
networks find applications in critical areas such as industrial automation, agriculture, defense,
and healthcare, where large-scale, real-time monitoring is essential.
1. Energy Efficiency:
o Duty Cycling: Nodes periodically switch between active and sleep modes to
conserve energy. Protocols must coordinate wake-up schedules to ensure data
collection and delivery (a rough energy estimate is sketched after this list).
2. Scalability:
3. Reliability:
o Fault Tolerance: A node or link failure should not collapse the network. Mesh
connectivity and dynamic rerouting help maintain service continuity.
4. Data Aggregation:
o Temporal and Spatial Correlation: Sensors close to each other often produce
correlated data, which can be compressed or filtered before transmission.
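To make the duty-cycling argument concrete, the small estimate below compares the average current draw of an always-on node with a 1% duty-cycled one; the current and battery figures are illustrative assumptions, not measurements of any particular radio.

def average_current_ma(active_ma, sleep_ma, duty_cycle):
    """Average current of a node that is awake for `duty_cycle` of the time."""
    return duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma

ACTIVE_MA, SLEEP_MA, BATTERY_MAH = 20.0, 0.01, 2400   # assumed figures
always_on = average_current_ma(ACTIVE_MA, SLEEP_MA, duty_cycle=1.0)
duty_cycled = average_current_ma(ACTIVE_MA, SLEEP_MA, duty_cycle=0.01)

print("always on : %.1f days of battery life" % (BATTERY_MAH / always_on / 24))
print("1%% awake  : %.1f days of battery life" % (BATTERY_MAH / duty_cycled / 24))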
1. Power Constraints:
o Efficient MAC and Routing Protocols: Protocols like S-MAC, T-MAC, or duty-
cycling mechanisms are specifically designed to reduce collision and idle
listening.
2. Security Concerns:
o Unreliable Links: Wireless links in WSNs can be prone to high bit error rates or
interference. Retransmission and error correction schemes should be optimized
for energy usage.
o Prioritizing Critical Data: Certain data (e.g., alarm signals) may require higher
priority and guaranteed delivery.
4. Coverage Gaps:
o Cluster Heads: Aggregate and forward data from a group of nodes to the base station.
Often equipped with more computational resources and energy reserves than
end devices.
o Gateway / Base Station: Connects WSN data to the backend network, e.g., the internet or a local
server for data storage and analysis.
4. Topology Types:
o Mesh Topology: Nodes dynamically forward data toward the base station via
multiple hops. Offers fault tolerance and scalability.
4.4 Applications
1. Industrial Automation:
o Process Control: Sensors gather real-time data from assembly lines, adjusting
production parameters to optimize yield and quality.
2. Environmental Monitoring:
o Wildlife Tracking: WSNs help track animal migration and habitat conditions with
minimal human intervention.
3. Healthcare:
o Patient Monitoring: Wearable sensors transmit vitals (heart rate, blood pressure)
in real-time to a central system.
4. Military Applications:
WSNs will continue to evolve with the integration of machine learning and edge computing,
enabling more autonomous and intelligent sensor networks capable of local decision-making
while minimizing communication overhead.
5. Ad Hoc Networks
Mobile Ad Hoc Network (MANET) is a network of mobile devices forming a temporary network
without any fixed infrastructure or centralized administration. Each node can function as both
an end device and a router, discovering routes dynamically as topology changes.
Characteristics of MANETs
1. Infrastructure-less Operation: Nodes communicate without any fixed infrastructure or centralized
administration.
2. Dynamic Topology: Frequent node mobility changes link availability and routing paths.
3. Multi-hop Communication: Data travels through multiple nodes before reaching its
destination.
Applications of MANETs
3. Temporary Events: Large gatherings or conferences can use MANETs to provide localized
communication services.
Vehicular Ad Hoc Network (VANET) focuses on communication among vehicles and between
vehicles and roadside infrastructure. It leverages wireless interfaces (commonly IEEE 802.11p or
Cellular V2X) to enable applications such as collision warnings, traffic condition updates, and
infotainment.
Characteristics of VANETs
1. High Mobility: Vehicles move rapidly, causing frequent changes in network topology.
2. Predictable Patterns: Vehicle movement often follows road layouts, offering some
degree of predictability in routing.
3. Low Communication Latency Requirements: Safety applications require fast data
exchange (e.g., braking or hazard alerts).
Applications of VANETs
2. Real-Time Vehicular Safety Systems: Vehicles share speed, brake status, and sensor data
to prevent collisions.
6. Electrical Vehicular Ad Hoc Networks (E-VANET)
The drive toward sustainable transportation has introduced Electric Vehicles (EVs) into the
vehicular ecosystem. As VANET technology evolves, the concept of Electrical Vehicular Ad Hoc
Networks (E-VANET) has emerged, integrating EV-specific needs such as charging infrastructure
communication and range management into the ad hoc network model.
1. Charging Station Awareness: Nodes (EVs) must locate nearby charging stations and
verify availability or waiting times. E-VANET protocols can broadcast or request charging
station status to optimize route planning.
2. Energy Constraints: EVs have limited battery capacity, making efficient route selection
and recharging strategies critical for seamless travel.
o Vehicle-to-Grid (V2G): EVs can potentially feed power back into the grid during
peak demand or store excess renewable energy. E-VANET ensures real-time
coordination for these transactions.
2. EV Route Optimization:
o Multi-Criteria Routing: Routing decisions consider not just traffic but also battery
level, charging station density, and expected queue times.
E-VANET stands at the intersection of smart mobility and sustainable energy. Although it
inherits many technical foundations from conventional VANETs, its specialized focus on energy
management and charging infrastructure communication positions it as a key enabler for the
widespread adoption of EVs.
7. Conclusion
Short-range wireless technologies and ad hoc networks form the backbone of modern
distributed communication systems, enabling an array of applications from simple data
exchange between personal devices to complex, mission-critical functionalities like industrial
process control and vehicular safety.
• IEEE 802.15.1 (Bluetooth) remains a critical standard for consumer electronics, wearable
devices, and personal area networks, employing piconets and scatternets to extend
scalability.
• Wireless Sensor Networks (WSNs) build upon low-power hardware and distributed
intelligence to sense and act upon environments, finding utility in industrial,
environmental, healthcare, and military applications. While they offer significant
benefits in real-time monitoring, they also face design challenges related to energy
constraints, security, and reliability.
• Ad Hoc Networks, particularly MANETs and VANETs, showcase the power of self-
organizing, decentralized communication in scenarios where infrastructure is unavailable
or infeasible. They facilitate rapid deployment and flexible connectivity while grappling
with issues like routing complexity, security, and QoS management.
Future innovations will likely converge these technologies, leveraging the strengths of each
while addressing lingering limitations. Advanced security frameworks, machine learning–driven
route optimization, and enhanced energy management schemes will drive next-generation
wireless systems toward increasingly autonomous, scalable, and resilient networks. Engineers
and researchers in the field must remain vigilant about evolving standards, cross-technology
interoperability, and the overarching goal of efficiency and reliability for both current and future
applications.
Module No. 1
The interconnection of systems, people or things with the help of a communication medium can be
referred to as a network. The type of communication that uses electromagnetic waves as the
communication medium for transmitting and receiving data or voice is called wireless
communication.
Wireless communication is a broad term that incorporates all procedures and forms of
connecting and communicating between two or more devices using a wireless signal through
wireless communication technologies and devices. Wireless communication involves the
transmission of information over a distance without the help of wires, cables or any other
forms of electrical conductors.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
● The transmitted distance can be anywhere between a few meters (for example, a
television's remote control) and thousands of kilometers (for example, radio
communication).
● Wireless communication can be used for cellular telephony, wireless access to the
internet, wireless home networking, and so on.
● Other examples of applications of radio wireless technology include GPS units, garage
door openers, wireless computer mice, keyboards and headsets, headphones, radio
receivers, satellite television, broadcast television and cordless telephones.
Wireless - Advantages
Wireless communication involves transfer of information without any physical connection between two
or more points. Because of this absence of any 'physical infrastructure', wireless communication has
certain advantages. This would often include collapsing distance or space.
Wireless communication has several advantages; the most important ones are discussed below −
● Cost effectiveness
Wired communication entails the use of connection wires. In wireless networks, communication does not
require elaborate physical infrastructure or maintenance practices. Hence the cost is reduced.
Example − Any company providing wireless communication services does not incur a lot of costs, and
as a result it is able to charge its customers relatively low fees.
The cost of installing wires, cables and other infrastructure is eliminated in wireless communication,
hence lowering the overall cost of the system compared to a wired communication system. Installing a
wired network in a building, digging up the earth to lay cables and running those wires across streets is an
extremely difficult, costly and time-consuming job.
In historical buildings, drilling holes for cables is not a good idea as it destroys the integrity and
character of the building. Also, in older buildings with no dedicated lines for communication, wireless
communication like Wi-Fi or Wireless LAN is the only option.
● Flexibility
Wireless communication enables people to communicate regardless of their location. It is not necessary
to be in an office or some telephone booth in order to pass and receive messages.
Miners in the outback can rely on satellite phones to call their loved ones, and thus, help improve their
general welfare by keeping them in touch with the people who mean the most to them.
● Convenience
Wireless communication devices like mobile phones are quite simple and therefore allow anyone to use
them, wherever they may be. There is no need to physically connect anything in order to receive or pass
messages.
Example − Wireless communication services can also be seen in Internet technologies such as
Wi-Fi. With no network cables hampering movement, we can now connect with almost anyone,
anywhere, anytime.
● Mobility
As mentioned earlier, mobility is the main advantage of wireless communication system. It offers the
freedom to move around while still connected to network.
● Speed
Improvements can also be seen in speed. Network connectivity and accessibility have improved
considerably in both accuracy and speed. Example − A wireless remote can operate a system faster than a
wired one. The wireless control of a machine can stop it immediately if something goes wrong, whereas
direct operation cannot act so fast.
● Accessibility
Wireless technology helps easy accessibility, as remote areas where ground lines cannot be properly laid
are easily connected to the network.
Example − In rural regions, online education is now possible. Educators no longer need to travel to
far-flung areas to teach their lessons, thanks to live streaming of their educational modules.
● Constant connectivity
Constant connectivity also ensures that people can respond to emergencies relatively quickly.
Example − A wireless mobile can ensure you a constant connectivity though you move from place to
place or while you travel, whereas a wired land line can’t.
● Reliability
Since there are no cables or wires involved in wireless communication, there is no chance of
communication failure due to damage to such cables, which may be caused by environmental conditions,
cable splices or the natural degradation of metallic conductors.
● Disaster Recovery
In case of accidents due to fire, floods or other disasters, the loss of communication infrastructure in
wireless communication system can be minimal.
● Ease of Installation
The setup and installation of a wireless communication network’s equipment and infrastructure is very easy,
as we need not worry about the hassle of cables. Also, the time required to set up a wireless system, such as a
Wi-Fi network, is much less than that needed to set up a full cabled network.
Disadvantages of Wireless Communication
Even though wireless communication has a number of advantages over wired communication, there are a
few disadvantages as well. The most concerning disadvantages are Interference, Security and Health.
Interference
Wireless Communication systems use open space as the medium for transmitting signals. As a result,
there is a huge chance that radio signals from one wireless communication system or network might
interfere with other signals.
The best example is Bluetooth and Wi-Fi (WLAN). Both these technologies use the 2.4GHz frequency
for communication and when both of these devices are active at the same time, there is a chance of
interference.
Security
One of the main concerns of wireless communication is Security of the data. Since the signals are
transmitted in open space, it is possible that an intruder can intercept the signals and copy sensitive
information.
Health Concerns
Continuous exposure to any type of radiation can be hazardous. Even though the levels of RF energy that
can cause damage are not accurately established, it is advisable to limit RF exposure as much as possible.
Basic Elements of a Wireless Communication System
A typical Wireless Communication System can be divided into three elements: the Transmitter, the
Channel and the Receiver. The following image shows the block diagram of wireless communication
system.
The Channel
The channel in Wireless Communication indicates the medium of transmission of the signal i.e. open
space. A wireless channel is unpredictable and also highly variable and random in nature. A channel
maybe subject to interference, distortion, noise, scattering etc. and the result is that the received signal
may be filled with errors.
The Reception Path
The job of the receiver is to collect the signal from the channel and reproduce the source signal. The
reception path of a wireless communication system comprises Demultiplexing, Demodulation,
Channel Decoding, Decryption and Source Decoding. From the components of the reception path it is
clear that the task of the receiver is just the inverse of that of the transmitter.
The signal from the channel is received by the Demultiplexer and is separated from other signals. The
individual signals are demodulated using appropriate Demodulation Techniques and the original message
signal is recovered. The redundant bits from the message are removed using the Channel Decoder.
Since the message is encrypted, Decryption of the signal removes the security and turns it into simple
sequence of bits. Finally, this signal is given to the Source Decoder to get back the original transmitted
message or signal.
Types of Wireless Communication Systems
Today, people need Mobile Phones for many things like talking, internet, multimedia etc. All these
services must be made available to the user on the go i.e. while the user is mobile. With the help of these
wireless communication services, we can transfer voice, data, videos, images etc.
Wireless Communication Systems also provide different services like video conferencing, cellular
telephone, paging, TV, Radio etc. Due to the need for variety of communication services, different types
of Wireless Communication Systems are developed. Some of the important Wireless Communication
Systems available today are:
● Television and Radio Broadcasting
● Satellite Communication
● Radar
● Mobile Telephone System (Cellular Communication)
● Global Positioning System (GPS)
● Infrared Communication
● WLAN (Wi-Fi)
● Bluetooth
● ZigBee
● Paging
● Cordless Phones
● Radio Frequency Identification (RFID)
There are many other systems, each being useful for different applications. Wireless communication
systems can again be classified as Simplex, Half Duplex and Full Duplex. Simplex communication is
one-way communication; an example is a radio broadcast system.
Half Duplex is two-way communication, but not simultaneous; an example is a walkie-talkie (citizens
band radio). Full Duplex is two-way, simultaneous communication; the best example of full duplex is the
mobile phone.
The devices used for Wireless Communication may vary from one service to other and they may have
different size, shape, data throughput and cost. The area covered by a Wireless Communication system is
also an important factor. The wireless networks may be limited to a building, an office campus, a city, a
small regional area (greater than a city) or might have global coverage.
We will see a brief note about some of the important Wireless Communication Systems.
Television and Radio Broadcasting
Radio is considered to be the first wireless service to be broadcast. It is an example of a Simplex
Communication System, where the information is transmitted in only one direction and all the users
receive the same data.
Satellite Communication
Satellite Communication is an important type of wireless communication. Satellite
communication networks provide worldwide coverage independent of population density.
Satellite communication systems offer telecommunication (satellite phones), positioning and navigation
(GPS), broadcasting, internet access, etc. Other wireless services like mobile telephony, television
broadcasting and other radio systems are dependent on satellite communication systems.
Mobile Telephone Communication System
Perhaps, the most commonly used wireless communication system is the Mobile Phone Technology. The
development of mobile cellular device changed the World like no other technology. Today’s mobile
phones are not limited to just making calls but are integrated with numerous other features like Bluetooth,
Wi-Fi, GPS, and FM Radio.
The latest generation of mobile communication technology is 5G (the successor to the
widely adopted 4G). Apart from increased data transfer rates (technologists claim data rates in the order of
Gbps), 5G networks are also aimed at Internet of Things (IoT) related applications and future
automobiles.
Global Positioning System (GPS)
GPS is a subcategory of satellite communication. GPS provides different wireless services like
navigation, positioning, location and speed with the help of dedicated GPS receivers and satellites.
Bluetooth
Bluetooth is another important low range wireless communication system. It provides data, voice and
audio transmission with a transmission range of 10 meters. Almost all mobile phones, tablets and laptops
are equipped with Bluetooth devices. They can be connected to wireless Bluetooth receivers, audio
equipment, cameras etc.
Paging
Although it is considered an obsolete technology, paging was a major success before the wide spread use
of mobile phones. Paging provides information in the form of messages and it is a simplex system i.e. the
user can only receive the messages.
Wireless Local Area Network (WLAN)
Wireless Local Area Network or WLAN (Wi-Fi) is an internet related wireless service. Using WLAN,
different devices like laptops and mobile phones can connect to an access point (like a Wi-Fi Router) and
access internet.
Wi-Fi is one of the most widely used wireless networks, usually for internet access (but sometimes for data
transfer within the Local Area Network). It is very difficult to imagine the modern world without Wi-Fi.
Infrared Communication
Infrared Communication is another commonly used wireless communication in our daily lives. It uses the
infrared waves of the Electromagnetic (EM) spectrum. Infrared (IR) Communication is used in remote
controls of Televisions, cars, audio equipment etc.
Multiple access protocols are required for sharing data on non-dedicated channels. Multiple access protocols
can be subdivided further as −
1. Random Access Protocol: In this, all stations have the same priority, that is, no station has more
priority than another station. Any station can send data depending on the medium’s state (idle or busy). It has
two features:
1. There is no fixed time for sending data.
2. There is no fixed sequence of stations sending data.
The random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LAN but is also applicable for shared medium. In this,
multiple stations can transmit data at the same time and can hence lead to collision and data being
garbled.
● Pure Aloha:
When a station sends data it waits for an acknowledgement. If the acknowledgement doesn’t
come within the allotted time then the station waits for a random amount of time called
back-off time (Tb) and re-sends the data. Since different stations wait for different amount of
time, the probability of further collision decreases.
Vulnerable time = 2 × frame transmission time
Throughput S = G × e^(-2G)
Maximum throughput = 0.184 at G = 0.5
● Slotted Aloha:
It is similar to pure aloha, except that we divide time into slots and sending of data is allowed
only at the beginning of these slots. If a station misses out the allowed time, it must wait for
the next slot. This reduces the probability of collision.
Vulnerable time = frame transmission time
Throughput S = G × e^(-G)
Maximum throughput = 0.368 at G = 1
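These throughput formulas are easy to verify numerically. The short Python sketch below evaluates S = G·e^(-2G) for pure ALOHA and S = G·e^(-G) for slotted ALOHA and confirms the quoted maxima.

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)     # S = G e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)         # S = G e^(-G)

print("pure ALOHA    at G=0.5:", round(pure_aloha_throughput(0.5), 3))    # ~0.184
print("slotted ALOHA at G=1.0:", round(slotted_aloha_throughput(1.0), 3)) # ~0.368

# crude search confirming where the pure-ALOHA maximum occurs
best_S, best_G = max((pure_aloha_throughput(g / 100), g / 100) for g in range(1, 301))
print("pure ALOHA peaks at G =", best_G, "with S =", round(best_S, 3))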
(b) CSMA – Carrier Sense Multiple Access ensures fewer collisions, as a station is required to first
sense the medium (for idle or busy) before transmitting data. If the medium is idle the station sends its data,
otherwise it waits till the channel becomes idle. However, there is still a chance of collision in CSMA due to
propagation delay. For example, if station A wants to send data, it will first sense the medium. If it finds the
channel idle, it will start sending data. However, by the time the first bit of data from station A arrives
(delayed due to propagation delay), station B may also sense the medium, find it idle, and send data. This
will result in a collision between the data from stations A and B.
CSMA access modes-
● 1-persistent: The node senses the channel; if idle it sends the data, otherwise it continuously
keeps checking the medium and transmits unconditionally (with probability 1) as soon as the
channel becomes idle.
● Non-persistent: The node senses the channel; if idle it sends the data, otherwise it checks the
medium again after a random amount of time (not continuously) and transmits when it is found idle.
● P-persistent: The node senses the medium; if idle, it sends the data with probability p. If the
data is not transmitted (probability 1-p), it waits for some time and checks the medium
again; if it is then found idle, it again sends with probability p. This repeats until the
frame is sent. It is used in Wi-Fi and packet radio systems.
● O-persistent: Superiority of nodes is decided beforehand and transmission occurs in that
order. If the medium is idle, node waits for its time slot to send data.
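The difference between these persistence strategies can be summarised as a small decision function, sketched below in Python; the mode names and the probability value come from the list above, while the channel state is simply a boolean passed in for illustration.

import random

def csma_decision(mode, channel_idle, p=0.3):
    """What a station does on a single sensing attempt, per persistence mode.
    Returns 'transmit', 'keep_sensing' or 'wait_random_time'."""
    if mode == "1-persistent":
        return "transmit" if channel_idle else "keep_sensing"
    if mode == "non-persistent":
        return "transmit" if channel_idle else "wait_random_time"
    if mode == "p-persistent":
        if not channel_idle:
            return "keep_sensing"
        # channel idle: send with probability p, otherwise defer and re-check
        return "transmit" if random.random() < p else "wait_random_time"
    raise ValueError("unknown persistence mode: " + mode)

random.seed(0)
for mode in ("1-persistent", "non-persistent", "p-persistent"):
    print(mode, "->", csma_decision(mode, channel_idle=True))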
(c) CSMA/CD – Carrier sense multiple access with collision detection. Stations can terminate
transmission of data if collision is detected.
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. The process of collision
detection involves the sender receiving acknowledgement signals. If there is just one signal (its own), then
the data was sent successfully; but if there are two signals (its own and the one with which it collided),
it means a collision has occurred. For these two cases to be distinguished, the collision must have a significant
impact on the received signal. However, this is not so in wireless networks, so CSMA/CA is used in this case.
CSMA/CA avoids collision by:
1. Interframe space – Station waits for medium to become idle and if found idle it does not
immediately send data (to avoid collision due to propagation delay) rather it waits for a
period of time called Interframe space or IFS. After this time it again checks the medium for
being idle. The IFS duration depends on the priority of station.
2. Contention Window – This is an amount of time divided into slots. If the sender is ready to send
data, it chooses a random number of slots as its wait time, and this window doubles every time the
medium is not found idle. If the medium is found busy, the station does not restart the entire process;
rather, it resumes the countdown when the channel is found idle again (a short sketch of this doubling
follows the list).
3. Acknowledgement – The sender re-transmits the data if acknowledgement is not received
before time-out.
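The following is a minimal Python sketch of the contention-window idea, simplified so that the window
doubles after each unsuccessful attempt; the window sizes, retry limit, and the toy success probability are
illustrative assumptions, not values from any standard.

import random

def backoff_attempts(transmission_ok, cw_min=4, cw_max=64, max_tries=8):
    # Binary exponential backoff over a contention window.
    # transmission_ok: callable returning True if the attempt succeeds
    # (e.g., an acknowledgement arrives before the time-out).
    # Returns (attempts_used, total_slots_waited), or None if every attempt failed.
    cw = cw_min
    total_wait = 0
    for attempt in range(1, max_tries + 1):
        total_wait += random.randint(0, cw - 1)  # random wait inside the current window
        # (in a real station this countdown pauses whenever the medium turns busy)
        if transmission_ok():
            return attempt, total_wait
        cw = min(cw * 2, cw_max)                 # failure: double the contention window
    return None

# Toy medium in which each attempt succeeds with probability 0.7 (an assumption).
print(backoff_attempts(lambda: random.random() < 0.7))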
2. Controlled Access:
In this method, a station sends data only after it has been approved by all the other stations. For further
details, refer to Controlled Access Protocols.
3. Channelization:
In this method, the available bandwidth of the link is shared in time, frequency, or code among multiple
stations so that they can access the channel simultaneously.
● Frequency Division Multiple Access (FDMA) – The available bandwidth is divided into
equal bands so that each station can be allocated its own band. Guard bands are added so that
no two bands overlap, avoiding crosstalk and noise.
● Time Division Multiple Access (TDMA) – The bandwidth is shared among multiple stations.
To avoid collision, time is divided into slots and stations are allotted these slots to transmit
their data. However, there is a synchronization overhead, since each station needs to know
the start of its time slot; this is resolved by adding synchronization bits to each slot. Another
issue with TDMA is propagation delay, which is resolved by the addition of guard times.
● Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously; there is neither division of bandwidth nor division of time. By analogy, if
many people in a room all speak at the same time, data can still be received perfectly as long
as only the two people concerned speak the same language. Similarly, data from different
stations can be transmitted simultaneously using different code sequences.
A satellite's service may be available at one earth station at a particular time and not at another. That is, a
satellite may have a number of its own service stations located at different places on the earth, each of
which sends a carrier signal to the satellite.
In this situation, multiple access is used to enable the satellite to receive or send signals from and to
different stations at the same time without any interference between them. Following are the three types
of multiple access techniques.
● FDMA (Frequency Division Multiple Access)
● TDMA (Time Division Multiple Access)
● CDMA (Code Division Multiple Access)
FDMA
In this type of multiple access, each signal is assigned a different frequency band (range). No two signals
share the same frequency range, so there is no interference between them even when they are sent over
the same channel.
One familiar example of this type of access is radio broadcasting: each station is given a different
frequency band in which to operate.
Let us take three stations A, B and C and access them through the FDMA technique by assigning them
different frequency bands. For example, satellite station A may be kept in the frequency range of 0 to
20 Hz, while stations B and C are assigned the frequency ranges 30–60 Hz and 70–90 Hz respectively.
There is no interference between them.
The main disadvantage of this scheme is that it handles bursty traffic poorly. This type of multiple access
is not recommended for channels whose traffic is dynamic and uneven, because the fixed band allocation
makes such transmission inflexible and inefficient.
Advantages of FDMA
As FDMA systems use low bit rates (large symbol time) compared to the average delay spread, they offer
the following advantages −
● A reduced information bit rate and the use of efficient numerical codes increase capacity.
● It reduces the cost and lowers inter-symbol interference (ISI).
● Equalization is not necessary.
● An FDMA system can be easily implemented. A system can be configured so that
improvements in terms of speech encoder and bit-rate reduction may be easily incorporated.
● Since the transmission is continuous, fewer bits are required for synchronization and
framing.
Disadvantages of FDMA
Although FDMA offers several advantages, it has a few drawbacks as well, which are listed below −
● It does not differ significantly from analog systems; improving the capacity depends on
reducing the signal-to-interference level, i.e., on the signal-to-noise ratio (SNR).
● The maximum flow rate per channel is fixed and small.
● Guard bands lead to a waste of capacity.
● The hardware implies narrowband filters, which cannot be realized in VLSI and therefore
increase the cost.
TDMA
As the name suggests, TDMA is a time-based access scheme. Each channel is given a certain time frame,
and within that time frame the channel can use the entire spectrum bandwidth.
Each station gets a slot of fixed length; slots that are unused remain idle.
Suppose we want to send five packets of data on a particular channel using the TDMA technique. We
assign them certain time slots (a time frame) within which each can access the entire bandwidth. If, say,
packets 1, 3 and 4 are active, they transmit data, whereas packets 2 and 5 are idle because they are not
participating. This format repeats every time bandwidth is assigned to that particular channel.
Although certain time slots are assigned to a particular channel, the assignment can be changed depending
on load: a channel transmitting heavier loads can be given a bigger time slot than a channel transmitting
lighter loads. This is the biggest advantage of TDMA over FDMA. Another advantage of TDMA is that
power consumption is very low.
Note − In some applications, a combination of both TDMA and FDMA techniques is used. In this case,
each channel operates in a particular frequency band for a particular time frame, which makes the
frequency selection more robust and gives greater capacity than time compression alone. A small sketch
of such a combined frequency/time-slot assignment follows below.
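Below is a minimal Python sketch of the combined TDMA/FDMA idea mentioned in the note: each
logical channel is mapped to a (frequency band, time slot) pair. The band and slot counts and the channel
names are arbitrary illustrative values.

from itertools import product

def assign_channels(channels, n_bands=3, n_slots=4):
    # Map each logical channel to a (frequency band, time slot) pair,
    # combining FDMA (bands) with TDMA (slots).
    grid = list(product(range(n_bands), range(n_slots)))  # every band/slot combination
    if len(channels) > len(grid):
        raise ValueError("not enough band/slot combinations for all channels")
    return {ch: grid[i] for i, ch in enumerate(channels)}

# Example: eight channels share 3 frequency bands x 4 time slots.
for ch, (band, slot) in assign_channels([f"CH{i}" for i in range(8)]).items():
    print(f"{ch}: band {band}, slot {slot}")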
Time Division Multiple Access (TDMA): a digital wireless telephony transmission technique. TDMA
allocates each user a different time slot on a given frequency. TDMA divides each cellular channel into
three time slots in order to increase the amount of data that can be carried.
TDMA technology was more popular in Europe, Japan and other Asian countries, whereas CDMA was
widely used in North and South America. Nowadays, both technologies are popular throughout the
world.
Advantages of TDMA:
● TDMA can easily adapt to transmission of data as well as voice communication.
● TDMA can carry data rates from 64 kbps to 120 Mbps.
● TDMA allows the operator to offer services like fax, voice-band data, and SMS, as well as
bandwidth-intensive applications such as multimedia and video conferencing.
● Since TDMA technology separates users in time, it ensures that there will be no interference
from simultaneous transmissions.
● TDMA provides users with an extended battery life, since the handset transmits during only a
portion of the time in a conversation.
● TDMA is the most cost-effective technology for converting an analog system to digital.
Disadvantages of TDMA
● A disadvantage of TDMA is that each user has a predefined time slot. When moving from one
cell site to another, if all the time slots in the new cell are full, the user might be disconnected.
● Another problem with TDMA is that it is subject to multipath distortion. To overcome this
distortion, a time limit can be imposed on the system; once the time limit has expired, the
signal is ignored.
CDMA
In the CDMA technique, a unique code is assigned to each channel to distinguish it from the others. A
familiar analogy is the cellular numbering plan: no two subscribers of the same mobile service provider
have the same mobile number, even though they all use the same bandwidth.
In the CDMA process, decoding is performed by taking the inner product of the encoded signal and the
chipping sequence. Mathematically, encoding can therefore be written as
Encoded signal = Original data × Chipping sequence
The basic advantage of this type of multiple access is that it allows all users to coexist and use the entire
bandwidth at the same time. Since each user has a different code, there is no interference.
In this technique, a number of stations can share a number of channels, unlike FDMA and TDMA. The
best part of this technique is that each station can use the entire spectrum at all times.
Suppose there are four stations M, N, O, and P transmitting the bits 1, 0, 1, 1 respectively, and each has a
unique code sequence (C1, C2, C3, C4), where the codes are orthogonal in nature.
To represent data bits and code bits we use polar signalling, thus:
● binary 0 is represented as −1, and
● binary 1 is represented as +1.
Thus the data vector (M, N, O, P) becomes (1, −1, 1, 1).
The complete sequence to be transmitted is produced by multiplying each station's data bit by its code
sequence and adding the results position by position.
The sequence transmitted over the channel will be: 2, 2, 2, −2.
Reception: The receiver gets the above sequence. To retrieve the actual information from this received
(coded) data, each receiving station must have the code sequence of its respective transmitting station.
Each receiver recovers the original data by multiplying the received sequence, element by element, by its
respective code stream:
R1 = (2, 2, -2, 2)
R2 = (2, -2, -2, -2)
R3 = (2, 2, 2, -2)
R4 = (2, -2, 2, 2)
Then, by summing every element of the resulting sequence and dividing by the total number of
transmitting stations, each station recovers the originally transmitted data bit. Calculating for each
receiving station, we get:
R1 = [2 + 2 + (-2) + 2]/Number of stations = 4/4 = 1
R2 = [2 + (-2) + (-2) + (-2)]/Number of stations = -4/4 = -1
R3 = [2 + 2 + 2 + (-2)]/Number of stations = 4/4 = 1
R4 = [2 + (-2) + 2 + 2]/Number of stations = 4/4 = 1
According to polar signalling, 1 denotes binary 1 and −1 denotes binary 0. Therefore, the data bits received
at the four receiving stations are 1, 0, 1, 1.
The received bits are exactly the same as those transmitted by the transmitting stations. Hence, in this way
CDMA can be implemented.
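The whole worked example can be reproduced in a few lines of Python. The four orthogonal 4-chip codes
below are an assumption chosen so that they reproduce the numbers above, since the text does not list
C1–C4 explicitly.

# Orthogonal (Walsh-style) codes assumed for stations M, N, O and P.
CODES = {
    "M": (1,  1, -1, -1),
    "N": (1, -1, -1,  1),
    "O": (1,  1,  1,  1),
    "P": (1, -1,  1, -1),
}
DATA = {"M": 1, "N": 0, "O": 1, "P": 1}   # bits to transmit

def polar(bit):                            # 1 -> +1, 0 -> -1
    return 1 if bit == 1 else -1

# Encoding: multiply each data bit by its code and add position by position.
channel = [sum(polar(DATA[s]) * CODES[s][i] for s in CODES) for i in range(4)]
print("on the channel:", channel)          # expected: [2, 2, 2, -2]

# Decoding: inner product with each code, divided by the number of stations.
for s in CODES:
    value = sum(c * x for c, x in zip(CODES[s], channel)) / len(CODES)
    print(f"station {s} recovers bit {1 if value > 0 else 0}")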
Code Division Multiple Access (CDMA): a digital wireless technology that uses spread-spectrum
techniques. CDMA does not assign a specific frequency to each user; instead, every channel uses the full
available spectrum, and individual conversations are encoded with a pseudo-random digital sequence.
CDMA consistently provides better capacity for voice and data communications than other commercial
mobile technologies, allowing more subscribers to connect at any given time, and it is the common
platform on which 3G technologies are built.
Advantages of CDMA
● One of the main advantages of CDMA is that dropouts occur only when the phone is at least
twice as far from the base station; it is therefore used in rural areas that GSM cannot cover.
● Another advantage is its capacity: CDMA has such a high spectral capacity that it can
accommodate more users per MHz of bandwidth.
Disadvantages of CDMA
● Channel pollution, where signals from too many cell sites are present in the subscriber's phone
but none of them is dominant. When this situation arises, the audio quality degrades.
● Compared to GSM, CDMA lacks international roaming capabilities.
● Upgrading or changing to another handset is not easy with this technology, because the
network service information is stored in the phone itself, unlike GSM, which uses a SIM card
for this purpose.
● Limited variety of handsets, because at present most major mobile companies use GSM
technology.
After the signal is created by the source, the spreading process uses a spreading code to spread the
bandwidth, expanding the original bandwidth B to the spread bandwidth Bss. The spreading code is a
series of numbers that looks random but is actually a deterministic pattern.
There are two techniques to spread the bandwidth: frequency hopping spread spectrum (FHSS) and direct
sequence spread spectrum (DSSS).
The Frequency Hopping Spread Spectrum (FHSS) technique uses M different carrier frequencies that are
modulated by the source signal. At one moment, the signal modulates one carrier frequency; at the next
moment, the signal modulates another carrier frequency. Although the modulation is done using one
carrier frequency at a time, M frequencies are used in the long run. The bandwidth occupied by a source
after spreading is BFHSS >> B.
The general layout of FHSS is as follows. A pseudorandom code generator, called pseudorandom noise
(PN), creates a k-bit pattern for every hopping period Th. The frequency table uses the pattern to find the
frequency to be used for this hopping period and passes it to the frequency synthesizer. The frequency
synthesizer creates a carrier signal of that frequency, and the source signal modulates the carrier signal.
For example, let M (the number of hopping frequencies) be 8 and k (the number of bits) be 3. The
pseudorandom code generator will create eight different 3-bit patterns, which are mapped to eight
different frequencies in the frequency table.
Suppose the pattern for this station is 101, 111, 001, 000, 010, 011, 100. Because the pattern is
pseudorandom, it is repeated after eight hoppings. At hopping period 1 the pattern is 101, so the frequency
selected is 700 kHz and the source signal modulates this carrier frequency. The second k-bit pattern
selected is 111, which selects the 900-kHz carrier; the eighth pattern is 100, for which the frequency is
600 kHz. After eight hoppings, the pattern repeats, starting from 101 again.
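As a small illustration of the pattern-to-frequency mapping just described, the following Python sketch
walks the hopping pattern through a frequency table. Only the entries for 101 (700 kHz), 111 (900 kHz)
and 100 (600 kHz) come from the text; the remaining table values are placeholders.

# Frequency table for the k = 3, M = 8 example (kHz).
FREQ_TABLE_KHZ = {
    "000": 200, "001": 300, "010": 400, "011": 500,
    "100": 600, "101": 700, "110": 800, "111": 900,
}

HOP_PATTERN = ["101", "111", "001", "000", "010", "011", "100"]  # repeats cyclically

def carrier_for_hop(hop_index):
    # Return the carrier frequency (kHz) used during a given hopping period.
    pattern = HOP_PATTERN[hop_index % len(HOP_PATTERN)]
    return FREQ_TABLE_KHZ[pattern]

for t in range(10):
    print(f"hop {t}: pattern {HOP_PATTERN[t % len(HOP_PATTERN)]} -> {carrier_for_hop(t)} kHz")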
If there are many k-bit patterns and the hopping period is short, a sender and receiver can have privacy. An
intruder who intercepts the transmitted signal can only access a small piece of data, because she does not
know the spreading sequence and so cannot quickly adapt to the next hop. The scheme also has an
anti-jamming effect: a malicious sender may be able to send noise to jam the signal for one hopping
period (randomly), but not for the whole period.
Bandwidth Sharing
If the number of hopping frequencies is M, we can multiplex M channels into one by using the same Bss
bandwidth. This is possible because a station uses just one frequency in each hopping period; M - 1 other
frequencies can be used by other M - 1 stations. In other words, M different stations can use the same Bss
if an appropriate modulation technique such as multiple FSK (MFSK) is used.
As an example of direct sequence spread spectrum (DSSS), let us consider the sequence used in wireless
LANs, the famous Barker sequence, where n is 11. We assume that the original signal and the chips in the
chip generator use polar NRZ encoding; the spread signal is obtained by multiplying the original data by
the chips.
The spreading code is 11 chips with the pattern 10110111000 (in this case). If the original signal rate is N,
the rate of the spread signal is 11N. This means that the required bandwidth for the spread signal is 11
times larger than the bandwidth of the original signal. The spread signal can provide privacy if the
intruder does not know the code. It can also provide immunity against interference if each station uses
a different code.
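A minimal Python sketch of this direct-sequence spreading is shown below. The 11-chip Barker pattern is
the one given in the text; the example data bits and the polar mapping are assumptions for illustration.

BARKER_11 = "10110111000"                            # 11-chip spreading code from the text
CHIPS = [1 if c == "1" else -1 for c in BARKER_11]   # polar NRZ: 1 -> +1, 0 -> -1

def spread(bits):
    # Spread each data bit over 11 chips: the spread rate is 11x the bit rate.
    out = []
    for b in bits:
        level = 1 if b == 1 else -1
        out.extend(level * c for c in CHIPS)         # multiply the bit by every chip
    return out

def despread(chips_rx):
    # Recover the original bits by correlating each 11-chip block with the code.
    bits = []
    for i in range(0, len(chips_rx), 11):
        corr = sum(c * r for c, r in zip(CHIPS, chips_rx[i:i + 11]))
        bits.append(1 if corr > 0 else 0)
    return bits

data = [1, 0, 1]                                     # example data bits (an assumption)
tx = spread(data)
print("spread length:", len(tx), "-> 11x the number of data bits")
print("recovered bits:", despread(tx))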
Evolution of wireless generations –
1G to 5G
Mobile wireless communication systems have gone through several evolution stages in the past few
decades since the introduction of the first-generation mobile network in the early 1980s. Due to the huge
demand for more connections worldwide, mobile communication standards advanced rapidly to support
more users. Let us take a look at the evolution stages of wireless technologies for mobile communication.
History of wireless technology
Marconi, an Italian inventor, transmitted Morse code signals wirelessly using radio waves over a distance
of 3.2 km in 1895. It was the first wireless transmission in the history of science. Since then, engineers
and scientists have been working on efficient ways to communicate using RF waves.
The telephone became popular in the late 19th century. Because of its wired connection and restricted
mobility, engineers started developing a device that did not require a wired connection and could transmit
voice using radio waves.
Every successive generation of wireless standards – abbreviated to "G" – has introduced dizzying
advances in data-carrying capacity and decreases in latency, and 5G will be no exception. Although formal
5G standards are yet to be set, 5G is expected to be at least three times faster than current 4G standards.
To truly understand how we got here, it is useful to chart the unstoppable rise of wireless standards from
the first generation (1G) to where we are today, on the cusp of a global 5G rollout.
1G: Where it all began
With 4G coverage so low in some areas, why has the focus shifted to 5G already? 5G has actually been
years in the making.
During an interview with TechRepublic, Kevin Ashton described how he coined the term "the Internet of
Things" – or IoT for short – during a PowerPoint presentation he gave in the 1990s to convince Procter &
Gamble to start using RFID tag technology.
The phrase caught on and IoT was soon touted as the next big digital revolution that would see billions of
connected devices seamlessly share data across the globe. According to Ashton, a mobile phone isn't a
phone, it's the IoT in your pocket: a number of network-connected sensors that help you accomplish
everything from navigation to photography to communication and more. The IoT will see data move out
of server centers and into what are known as 'edge devices' such as Wi-Fi-enabled appliances like fridges,
washing machines, and cars.
By the early 2000s, developers knew that 3G and even 4G networks wouldn’t be able to support such a
network. As 4G’s latency of between 40ms and 60ms is too slow for real-time responses, a number of
researchers started developing the next generation of mobile networks.
In 2008, NASA helped launch the Machine-to-Machine Intelligence (M2Mi) Corp to develop IoT and M2M
technology, as well as the 5G technology needed to support it. In the same year, South Korea developed a 5G
R&D program, while New York University founded the 5G-focused NYU WIRELESS in 2012.
The superior connectivity offered by 5G promised to transform everything from banking to healthcare. 5G
offers the possibility of innovations such as remote surgeries, telemedicine and even remote vital sign
monitoring that could save lives.
Three South Korean carriers – KT, LG Uplus and SK Telecom – rolled out live commercial 5G services in
December 2018 and promised a simultaneous launch of 5G across the country in March 2019.
5G: Future –
The race for 5G deployment is led by companies like Qualcomm, Huawei, and Intel. Worldwide
commercial launch is expected in 2020. Initial launches and testing have been done by companies like
AT&T and Verizon in four U.S. cities.
5G's range is shorter than that supported by 4G LTE or 3G networks, because its higher frequencies make
the waves travel a shorter distance. Hence more base stations (signal towers) need to be installed for good
connectivity, which may be considered a disadvantage. The rollout of 5G will therefore take time, and this
revolution should not be expected overnight.
Duplexing mode: conventional MIMO is designed to work with both TDD and FDD operation, whereas
massive MIMO is designed for TDD operation to exploit channel reciprocity.
Since massive MIMO uses many more antennas than the number of UEs in the cell, the beam is much
narrower, enabling the base station to deliver RF energy to the UE more precisely and efficiently. The
antenna's phase and gain are controlled individually, with the channel information remaining with the base
station, simplifying UE without adding multiple receiver antennas. Installation of a large number of base
station antennas will increase the signal-to-noise ratio in the cell, which leads to higher cell site capacity
and throughput. Since 5G massive MIMO implementation is on mmWave frequencies, the antennas
required are small and easy to install and maintain.
Still, for device designers, MIMO and beamforming at mmWave frequencies introduce many new
challenges. 5G NR standards provide the physical-layer frame structure, new reference signal, and new
transmission modes to support 5G enhanced mobile broadband (eMBB) data rates. Designers must
understand the 3D beam patterns and ensure the beams can connect to the base station and deliver the
desired performance, reliability, and user experience. Because massive MIMO, beamforming, and beam
steering represent such significant changes in how 5G NR devices connect across sub-6 GHz and
mmWave operating bands, validating the device quality of experience and performance on the network
becomes even more critical.
To implement MIMO and beamforming on 5G base stations, designers must carefully select hardware and
software tools to simulate, design, and test highly complex systems containing tens or even hundreds of
antenna elements.
Engineers will use active phased array antennas to implement MIMO and beamforming in base stations
and devices. Not only are active antennas essential to overcome signal propagation issues such as higher
path loss at mmWave frequencies, they also provide the ability to dynamically shape and steer beams to
specific users. Active antennas offer more flexibility and improve the performance of 5G
communications.
But deploying active phased array antennas in commercial wireless communications represents a major
change from the passive antennas used in previous generations. MIMO and beamforming technologies
increase capacity and coverage in a cell. For 5G devices and base stations, multi-antenna techniques
require support across multiple frequency bands — from sub-6 GHz to mmWave frequencies — and
across many scenarios, including massive IoT connections and extreme data throughput.
Aerospace and defense radar and satellite communications have long used active phased array antennas,
but these antenna arrays tend to be large and very expensive. Applying this technology to commercial
wireless — where the antenna arrays will need to be much smaller and less costly — introduces many
new challenges. There is a long list of 3GPP required tests for base stations, including radiated transmitter
tests and radiated receiver tests. Depending on the base station configuration, some FR1 tests require
radiated tests, and all FR2 tests require radiated tests.
Nearly all 5G MIMO testing requires over-the-air (OTA) testing. Early in development, OTA test
solutions need to characterize the 3D beam performance across the range of the antenna, including
aspects such as antenna gain, sidelobe, and null depth for the full range of 5G frequencies and
bandwidths.
Several research estimates suggest that about 5% of service providers would start offering 5G wireless
service, representing big progress from the 5G proofs of concept (POCs) in 2018. 5G, as the
next-generation cellular standard after 4G (LTE), has been defined across several global standards bodies:
the ITU (International Telecommunication Union), 3GPP (Third Generation Partnership Project), and
ETSI (European Telecommunications Standards Institute). The official ITU specification, International
Mobile Telecommunications-2020 (IMT-2020), targets maximum downlink and uplink throughputs of
20 Gbps and 10 Gbps respectively, latency below 5 ms (milliseconds), and massive scalability.
5G will not be able to achieve IMT-2020 requirements, such as 20 Gbps, without some major
breakthroughs. At this moment, it is not yet clear which technologies will do the most for 5G in the long
run, but a few early favorites have emerged. The front runners include millimeter waves, small cells, full
duplex, beamforming and, of course, massive MIMO.
Telecoms have already been adopting massive MIMO on existing 4G LTE networks, especially TD-LTE
(Time-Division LTE) networks (for example, SoftBank in 2016 and China Mobile in 2017). However,
FDD-LTE (Frequency-Division-Duplex LTE) massive MIMO came later, because TD-LTE has the
advantage of using the same frequency for both downlink and uplink, so the uplink channel quality
information can be used for the downlink as well. FDD-LTE, on the other hand, requires additional radio
resources to obtain the feedback information that is necessary to implement beamforming for the
downlink communication. This indicates that FDD massive MIMO requires bigger overhead and is not as
efficient as TD-LTE massive MIMO. It was not until 2018 that Verizon started massive MIMO trials with
96 antenna elements.
With 5G up and coming, commercial networks almost certainly have to adopt massive MIMO, and
typical 5G massive MIMO plans call for 64- or 128-element arrays at 3.5 GHz and more than 128
elements at 28 GHz or above.
What Are the Key Factors in Driving 5G’s Massive MIMO Adoption?
● Coverage:
In general, 5G will use higher radio spectrum than 2G/3G/4G, including centimeter waves
and millimeter waves such as 3.5 GHz and 28 GHz. Its radio propagation loss is much bigger
than at the previous sub-1 GHz and around-2 GHz bands. Also, 5G radio propagation can be
strongly affected by the surrounding environment, such as building shadowing, reflection
from walls, human bodies and rain attenuation. This sensitivity makes massive MIMO's
coverage-enhancement ability stand out.
● Capacity:
As we have mentioned, both beamforming and MU-MIMO can increase single-user
throughput and total network capacity per base station. Massive MIMO becomes far more
practical at higher frequencies, such as those planned for many 5G deployments.
● Early Differentiation:
Both 4G and 5G are mainly based on 3GPP standardization, so 5G service providers would
have trouble differentiating their networks from competitors' (much as in today's 4G
landscape). However, at this juncture of transition, adopting massive MIMO first could
potentially offer a better 5G service than others to consumers. The better user experience
could lead to user migration, and with a proper marketing campaign, the migration could
eventually hold steady through the 5G period.
However, there are some obstacles on 5G’s path of adopting massive MIMO.
● Huge Antenna:
General LTE base stations used to adopt a 2x2 MIMO architecture, and antenna elements
have to be separated by at least half a wavelength to decrease mutual coupling between
antenna elements and the spatial correlation of the multipath channel. At 2.6 GHz, the
wavelength is about 11 centimeters, so a spacing of only about 5.5 centimeters (in other
words, half the wavelength) between antenna elements is desired. However, the more
antenna elements are deployed, the bigger the antenna becomes. For example, the
maximum length of a 128x128 massive MIMO array on 2.6 GHz could reach 1 meter, which
obviously cannot fit existing sites. Also, the weight could equal tens of kilograms or more,
and a normal pole might not be able to handle it. Of course, high-functioning and heavy
massive MIMO antennas can cost much more than existing ones, which could be another
factor postponing deployment. (A short spacing and array-size calculation appears after this list.)
● Device Capability:
4x4 MIMO is the current mainstream technology, with chipset support since 2016
(Qualcomm, Huawei's HiSilicon, etc.). However, for massive MIMO, including flexible
precoding and MU-MIMO, to prove its merit, device capability must go beyond the existing
Transmission Modes (TM) 3 and 4, as current TM3/TM4 support is not enough. TM3/TM4
devices cannot decode the Channel State Information Reference Signal (CSI-RS) or the
UE-specific Reference Signal, and so they cannot feed back the channel state information
(based on CSI-RS measurements) that massive MIMO beamforming requires. TM9 and
TM10, specified in 3GPP Releases 10 and 11, solve this issue, but only a few devices have
activated TM9 commercially so far. Without device support, service providers are likely to
delay adoption, and device suppliers are likely to delay their support until massive MIMO is
heading mainstream, creating a vicious spiral that could potentially kill 5G's massive MIMO
adoption in its infancy.
● Trade War:
The adoption of massive MIMO requires close collaboration between network equipment
makers and device suppliers. The largest network equipment makers happen to come from
the target of the current iteration of the trade war: China. Huawei is trying to solve the
bottleneck of macro base stations and is generally considered the leader in massive MIMO
technology. The ongoing trade war and concerns about national security could drive major
telecoms away from Huawei, forcing them to wait for other, non-Chinese equipment makers
to provide mature massive MIMO solutions.
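As referenced in the Huge Antenna item above, the half-wavelength spacing and the resulting panel size
can be checked with a short calculation. This Python sketch is only illustrative; the square-grid geometry
with half-wavelength pitch and the 16 x 16 panel size are assumptions used to show the order of magnitude.

C = 299_792_458.0   # speed of light in m/s

def half_wavelength_m(freq_hz):
    # Half of the carrier wavelength: the usual minimum element spacing.
    return (C / freq_hz) / 2

def array_side_m(elements_per_side, freq_hz):
    # Side length of a square array with half-wavelength element pitch.
    return (elements_per_side - 1) * half_wavelength_m(freq_hz)

for f in (2.6e9, 3.5e9, 28e9):
    spacing_cm = half_wavelength_m(f) * 100
    side_m = array_side_m(16, f)   # e.g. a 16 x 16 panel (assumed geometry)
    print(f"{f/1e9:>5.1f} GHz: spacing ~{spacing_cm:4.1f} cm, 16x16 panel side ~{side_m:4.2f} m")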
Introduction to Cisco Unified Wireless Network
Historically, wireless networks evolved from simple setups to complex architectures supporting
mobility, quality of service (QoS), and strong security. Early designs relied heavily on
autonomous access points that managed their own configurations independently, leading to
inconsistent policy enforcement and a lack of centralized visibility. As enterprises grew in size
and complexity, the need arose for a more streamlined solution. Cisco’s Unified Wireless
Network addressed these challenges by offering a centralized point of control (the WLC) to
manage a fleet of lightweight access points.
1. Centralized Management: Administrators can configure, manage, and monitor all access
points from a single interface.
2. Mobility and Roaming: CUWN supports advanced roaming features that allow client
devices to seamlessly transition between access points and locations.
3. Visibility and Analytics: Integrated platforms such as Cisco Prime Infrastructure and
Cisco DNA Center provide deep visibility into client behavior, traffic patterns, and
network health.
Cisco’s journey into unified wireless networking can be traced back to its acquisition of Aironet
in 1999, which brought significant expertise in wireless LAN technologies. Over the years, Cisco’s
WLAN solutions evolved from standalone autonomous access points to more integrated
platforms.
• Autonomous Access Points: Early Cisco Aironet devices had to be individually configured
with SSIDs, security parameters, and channels. This model, while sufficient for small
deployments, became unwieldy in larger networks.
• Unified Access and Converged Access: Cisco integrated wireless functions into other
platforms such as switches and routers, supporting embedded controllers that simplified
deployments in branch offices or distributed environments.
• Cisco DNA (Digital Network Architecture): With the introduction of DNA Center, Cisco
moved toward a software-defined approach, providing advanced analytics, automation,
and assurance capabilities across both wired and wireless networks.
The evolution of Cisco’s Unified Wireless Network has been driven by the pressing demands of
scalability, security, and simplicity. This framework offers network engineers a robust platform
for delivering reliable wireless access in enterprise environments of all sizes.
A primary goal of the Cisco Unified Wireless Network is to unify multiple aspects of wireless
infrastructure—radio management, security, mobility, and troubleshooting—under a single
umbrella. By separating the data plane from the control plane, Cisco’s solution allows access
points to handle user traffic efficiently while relying on the WLC for centralized intelligence. This
distributed model fosters both operational simplicity and high performance.
• Centralized Visibility: Network managers gain insights into performance, client behavior,
and potential interference sources, speeding up troubleshooting and capacity planning.
A Lightweight Access Point (LAP) is a device that provides wireless connectivity to end clients
but relies on a centralized Wireless LAN Controller for management and control functions. In
other words, rather than storing network and security configurations locally, the LAP downloads
these settings from the controller during the boot process. The AP maintains a secure tunnel to
the controller for continual updates, real-time configuration changes, and monitoring.
Key functionalities:
1. Forwarding Traffic: LAPs forward client data traffic, often encapsulating it in CAPWAP
tunnels to send back to the WLC (depending on the chosen deployment model).
2. Beaconing and Probe Responses: The LAP broadcasts SSIDs and responds to probe
requests, but the management intelligence behind these actions originates from the
controller.
Compared to the older model of autonomous APs, using lightweight APs confers several
advantages:
1. Simplified Provisioning: Rolling out new APs is straightforward; the device automatically
discovers and registers with the WLC, downloads its configuration, and becomes
operational with minimal manual intervention.
2. Consistent Security Policies: Security settings remain uniform across the entire WLAN,
reducing the risk of misalignment in encryption standards or authentication methods.
3. Scalability: Large networks can include hundreds or even thousands of APs under the
same management umbrella, streamlining updates and maintenance.
Control and Provisioning of Wireless Access Points (CAPWAP) is a standards-based (IETF RFC
5415) protocol that enables communication between LAPs and WLCs. CAPWAP was born out of
Cisco’s earlier Lightweight Access Point Protocol (LWAPP) and is designed to address the needs
of large-scale WLAN deployments.
1. Control Tunnel: Encrypted via DTLS (Datagram Transport Layer Security), it carries
management traffic such as AP configuration, status, and control messages.
2. Data Tunnel: Also encrypted, it carries end-user traffic (depending on the forwarding
mode configured). In some cases, local switching (or FlexConnect mode) may be used,
where data traffic from the AP is bridged onto a local network instead of being tunneled
to the WLC.
• Discovery and Join Process: LAPs discover WLCs through methods like DHCP Option 43,
DNS lookup (cisco-capwap-controller.localdomain), or broadcast. Once discovered, the
AP joins the controller, establishes a CAPWAP tunnel, and downloads the relevant
configuration and firmware if needed. (A sketch of the Option 43 encoding appears after
this list.)
• Heartbeat and Keepalives: The AP and WLC exchange periodic keepalive messages to
maintain tunnel health.
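As mentioned in the discovery bullet above, APs can learn controller addresses through DHCP Option 43.
On Cisco WLCs this option is conventionally a TLV with sub-option type 0xf1 and a length of 4 bytes per
controller IP address; the small Python helper below (a hedged sketch, not an official tool) builds the hex
string that a DHCP server would be configured with. The addresses are placeholders.

import ipaddress

def capwap_option43_hex(controller_ips):
    # Build the DHCP Option 43 hex TLV used for Cisco WLC discovery:
    # sub-option type 0xf1, length = 4 bytes per controller, then the IPs in hex.
    value = b"".join(ipaddress.IPv4Address(ip).packed for ip in controller_ips)
    tlv = bytes([0xF1, len(value)]) + value
    return tlv.hex()

# Placeholder controller addresses, purely for illustration.
print(capwap_option43_hex(["192.168.10.5"]))                  # f104c0a80a05
print(capwap_option43_hex(["192.168.10.5", "192.168.10.6"]))  # f108c0a80a05c0a80a06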
1. AP Placement: Conduct a thorough site survey to place APs optimally for coverage and
capacity. Avoid physical obstructions, external interference sources, and channel
overlaps.
2. Power and Cabling: In many deployments, APs draw power via Power over Ethernet
(PoE). Make sure switches support the required PoE standard (e.g., 802.3af, 802.3at, or
802.3bt) and have enough power budget.
3. Network Segmentation: Place APs on dedicated VLANs or subnets, and configure DHCP
option 43 or DNS to help APs discover controllers quickly.
4. Security Configurations: Always use secure management protocols and encryption
standards (e.g., WPA2/WPA3 Enterprise) to mitigate wireless threats.
A Wireless LAN Controller (WLC) is the cornerstone of the Cisco Unified Wireless Network,
responsible for the centralized management and control of all connected lightweight APs.
Rather than distributing WLAN configuration across multiple devices, the WLC consolidates
these tasks into a single platform, simplifying operations and ensuring network-wide
consistency.
1. Manage Configurations: Administrators can create or modify WLAN profiles, SSIDs, and
security parameters at one centralized console.
2. Enforce Policies: Security policies, QoS rules, and VLAN mappings are all controlled from
one location.
3. Monitor Health and Performance: The WLC provides real-time visibility into radio
metrics, client connectivity statistics, and bandwidth usage across the wireless network.
4. Handle Radio Optimization: Leveraging features such as RRM, the WLC constantly
evaluates environmental factors and adapts power and channel assignments for optimal
performance.
5. Streamline Troubleshooting: Because the WLC is aware of all connected APs and clients,
it can expedite troubleshooting, log collection, and fault isolation.
Cisco WLCs offer a range of sophisticated features that address the demands of enterprise
WLANs:
1. Centralized Policy Enforcement: Administrators can define Access Control Lists (ACLs),
firewall rules, and role-based access policies that are universally enforced.
2. High Availability (HA): WLCs support redundancy models, allowing multiple controllers
to back each other up. In active-standby scenarios, APs can failover seamlessly.
3. Advanced Security: Integration with the Cisco Identity Services Engine (ISE) for 802.1X
authentication, posture assessment, and guest portal capabilities.
4. Mobility Management: Coordinated roaming policies that ensure minimal disruption
when wireless clients move across subnets or geographic locations.
5. QoS and Traffic Prioritization: The WLC can mark or prioritize traffic for latency-sensitive
applications such as voice and video.
6. Mesh Networking Support: Some WLC platforms support outdoor mesh deployments,
where certain APs communicate wirelessly with each other to extend coverage to hard-
to-reach areas.
7. Application Visibility and Control (AVC): Deep packet inspection to classify and control
application traffic, aiding in bandwidth management and security enforcement.
Cisco offers various Wireless LAN Controller models and deployment options to suit different
organizational sizes and use cases. Common categories include:
1. Physical Appliances: Purpose-built hardware devices such as the Cisco 3504, 5508, 5520,
8540, and the newer Catalyst 9800 series. These appliances often vary in capacity
(number of supported APs and client devices) and performance metrics (throughput,
CPU, memory).
2. Virtual Controllers: Cisco offers virtualized WLCs (vWLC or Catalyst 9800-CL) that can run
on hypervisors like VMware ESXi or in cloud environments. This approach helps
organizations leverage existing virtualization infrastructure.
3. Embedded Solutions: Some Cisco switches and routers (e.g., Catalyst 3850/9300 series,
5760 Wireless Controller) have built-in controller capabilities. This converged approach
can simplify deployments in branch offices or distributed environments.
1. Initial Setup: Assign an IP address, default gateway, and management VLAN to the WLC.
This can be done through the console port or a web-based setup wizard.
2. Management Interfaces: Configure interfaces and VLANs on the WLC to map to specific
traffic types (e.g., management, AP manager, and dynamic interfaces for WLANs).
3. WLAN Creation: Define SSIDs and associate them with specific security settings and
VLANs. This includes selecting encryption modes (WPA2, WPA3), key management (PSK,
802.1X), and setting advanced parameters.
4. RF Profiles and RRM Settings: Adjust parameters for channels, power, and coverage
thresholds if needed, or rely on Cisco’s default RRM configuration for automated
adjustments.
5. AP Registration and Grouping: APs discover the WLC, join it, and automatically
download the relevant configuration. Administrators can then group APs (AP groups,
FlexConnect groups) for more granular control.
1. Centralized (Local Mode): All AP traffic is tunneled back to the controller. This model
simplifies policy enforcement but can increase WAN bandwidth utilization in remote site
scenarios.
2. FlexConnect (Local Switching): APs switch data traffic locally at the remote site but still
rely on the WLC for control and management. This reduces WAN load and offers
resiliency if the WLC connection is temporarily lost.
3. Converged Access: The WLC function resides on a Catalyst switch (e.g., 3850, 9300) or
router, integrating wireless and wired policies under a single infrastructure device.
4. Cloud-Managed (Meraki): While not strictly part of the CUWN, this model pushes most
control-plane functionality into the cloud, with simple on-site APs.
Designing Wireless Networks with LAPs and WLCs
Proper planning and site surveys are foundational for a robust wireless network design. The goal
is to determine the appropriate quantity, placement, and configuration of APs to meet coverage
and capacity requirements.
1. Identify Coverage Areas and Requirements: Define the physical space, the number of
users, types of devices, and typical throughput needs.
2. Predictive Modeling: Utilize software tools (e.g., Ekahau, AirMagnet) to model coverage
based on floor plans, wall attenuation values, and antenna patterns.
3. On-Site Survey: Validate predictive models with actual measurements, testing signal
strength (RSSI), signal-to-noise ratio (SNR), and interference sources.
Radio Frequency (RF) design aims to balance coverage with capacity while minimizing
interference:
1. Coverage vs. Capacity: Overlapping coverage ensures seamless roaming, but excessive
overlap can cause co-channel interference (CCI). Striking the right balance is crucial.
2. Channel Planning: In the 2.4 GHz band, only three non-overlapping channels exist (1, 6,
11 in most regions). The 5 GHz band offers many more channels, enabling better channel
reuse. Automated channel assignment from Cisco’s RRM can significantly simplify
management.
3. Transmit Power Control (TPC): TPC algorithms adjust AP transmit power based on
feedback from neighboring APs and client devices, balancing coverage with interference
mitigation.
4. Load Balancing: Controllers can steer clients from congested APs/channels to less-
utilized ones. Band steering can encourage dual-band clients to use 5 GHz for improved
performance.
5. Antenna Selection and Orientation: Depending on the environment, use directional
antennas to target specific coverage areas or omnidirectional antennas for broad
coverage. Antenna orientation and gain must match the site survey plan.
Integrating LAPs and WLCs into an existing enterprise network involves both physical and logical
considerations:
1. Physical Network Integration: Ensure each AP can reach the WLC via Layer 2 or Layer 3.
PoE-capable switches simplify AP deployment.
2. VLAN and IP Subnetting: Map SSIDs to VLANs to segregate traffic types (e.g., employee
vs. guest). The WLC itself may have multiple interfaces for management, AP traffic, and
guest services.
3. Security Integration: Deploy 802.1X for corporate WLANs in conjunction with Cisco ISE
or other RADIUS servers. Use WPA2-Enterprise or WPA3-Enterprise encryption for
maximum security.
4. Guest Access Design: Isolate guest traffic either via tunneling to a DMZ or by using a
dedicated VLAN and captive portal.
5. Policy Enforcement: Leverage ACLs on the WLC or the downstream firewall to restrict
unauthorized traffic.
• Minimize SSIDs: Each SSID beacon adds overhead. Best practice typically recommends
three to four SSIDs per band to avoid excessive management traffic.
• SSID to VLAN Mappings: Associate each SSID with a unique VLAN to segment traffic
(e.g., corporate, guest, BYOD).
• Security Mechanisms:
o Pre-Shared Keys (PSK): Suitable for small or less critical networks, but not
advisable for large enterprises where unique user or device credentials are
preferred.
o Captive Portals: Commonly used for guest access, requiring users to accept
terms of service or authenticate via a web page.
• Integration with NAC (Network Access Control): Tools like Cisco ISE can perform posture
assessments and dynamically assign VLANs or ACLs based on user/device profiles.
1. N+1 Redundancy: Have an extra WLC on standby. In case the active WLC fails, APs
automatically rejoin the standby, preserving wireless service.
2. SSO (Stateful Switchover): In certain controller pairs (e.g., Catalyst 9800 series), stateful
switchover allows client sessions to persist without reauthentication in the event of a
controller failover.
Cisco CleanAir technology leverages specialized silicon within APs to continuously monitor the
RF environment for potential interference sources—such as microwave ovens, Bluetooth
devices, or rogue APs. When interference is detected, CleanAir identifies the source, quantifies
its severity, and relays this data to the WLC. Based on this intelligence, the WLC can trigger
automated responses such as channel changes or power adjustments through RRM. This helps
maintain optimal performance and reduces the manual workload for network administrators.
RRM is the umbrella feature in Cisco’s WLC software that optimizes the use of RF resources. It
encompasses several sub-features:
1. Dynamic Channel Assignment (DCA): Automatically selects the best channel for each AP
by analyzing noise levels, interference, and AP density.
2. Transmit Power Control (TPC): Adjusts AP transmit power to balance coverage and
mitigate interference.
3. Coverage Hole Detection and Correction (CHDC): Identifies areas where clients receive
weak signals and increases power (or flags administrators) to remedy coverage gaps.
4. 802.11 Band Steering: Encourages dual-band clients to associate with 5 GHz for less
congestion.
Seamless roaming is essential for latency-sensitive applications like voice over Wi-Fi (VoWLAN)
and real-time video:
1. 802.11r (Fast BSS Transition): Speeds up roaming by allowing clients and APs to cache
and reuse security credentials, reducing the time needed for reauthentication.
2. 802.11k (Radio Resource Measurement): Enables clients to query APs for optimized
roaming decisions, providing details like neighbor AP channels and signal strength.
3. 802.11v (Wireless Network Management): Allows the network to guide clients towards
better APs based on load, signal, or application type.
Together, these protocols substantially reduce roaming latency, resulting in smoother voice calls
and video sessions as users move around.
Application Visibility and Control (AVC) provides deep packet inspection (DPI) at the WLC level,
identifying and classifying application traffic. By recognizing applications such as Skype,
YouTube, Office 365, or custom business apps, administrators can enforce granular policies. For
instance, video streaming traffic can be throttled during peak hours, or business-critical
applications can receive priority. AVC is instrumental in optimizing bandwidth usage and
enforcing corporate compliance requirements.
Cisco Prime Infrastructure and Cisco DNA Center are powerful management solutions that
extend visibility and automation across the entire enterprise network:
1. Cisco Prime Infrastructure: Offers unified management for wired and wireless networks.
It provides features like performance reporting, topology maps, and automated
configuration templates for APs and WLCs.
2. Cisco DNA Center: Represents Cisco's next-generation approach to software-defined
networking. DNA Center supports intelligent automation, assurance, and analytics for
both wired and wireless networks, along with other advanced features.
By integrating the WLC and APs with these platforms, organizations can leverage a holistic,
policy-driven approach to network management and streamline troubleshooting processes.
Even the most thorough design can encounter issues during deployment or ongoing operations.
Common pitfalls include:
1. Misaligned VLANs or Subnets: Incorrect interface mappings on the WLC can lead to
clients receiving IP addresses from the wrong subnet or failing to obtain addresses
altogether.
2. Overlapping SSIDs: Deploying too many SSIDs or overlapping configurations can cause
management overhead and confusion for users.
3. Insufficient Power at APs: If switches do not supply adequate PoE, APs may shut down
radios or fail to operate at full capacity (e.g., 4x4:4 radio might revert to 3x3:3).
4. Controller Discovery Failures: Improper DHCP or DNS settings can lead to APs failing to
discover the WLC.
When CAPWAP tunnels fail to establish, APs cannot join the WLC. Common steps to
troubleshoot include:
1. Check Network Connectivity: Ensure Layer 3 reachability between APs and the WLC.
Verify IP addresses, default gateways, and routing paths.
2. Validate Discovery Options: Confirm that DHCP option 43 or DNS (for cisco-capwap-
controller) is configured correctly.
3. Firewall Filters: CAPWAP uses UDP ports 5246 (control) and 5247 (data). Any firewalls on
the path must allow these ports.
4. Check Certificates: CAPWAP encryption may fail if the AP’s certificate is invalid or
expired, or if the WLC’s trust settings are not updated.
1. Image Mismatch: The WLC may attempt to push a new firmware image to the AP. If the
AP fails to download or install it, registration stalls.
2. Rogue AP Policy: The WLC might detect an AP as rogue if it’s not listed in its MAC filter.
Administrators should verify the AP is approved to join.
3. Exceeded License Count: If the WLC license capacity is full, additional APs cannot
register.
Ensuring peak WLAN performance involves a combination of best practices and continuous
optimization:
1. Regular RRM Reviews: While RRM automates many tasks, periodic reviews of channel
assignments and power levels can reveal anomalies in high-density or dynamic RF
environments.
2. Load Balancing and Band Steering: Encourage dual-band clients to connect at 5 GHz.
Monitor client distribution and usage to avoid overloading any single channel or band.
3. Use of Latest Firmware: Ensure the WLC and APs are on recommended software
versions, taking advantage of performance enhancements and bug fixes.
4. Monitoring Tools: Leverage Cisco Prime or DNA Center to track performance metrics like
throughput, latency, and coverage holes. Use these insights to adjust AP placement or
configuration.
5. QoS Configuration: Prioritize real-time applications such as voice and video to maintain
quality under heavy load.
Conclusion
Designing a robust Cisco Unified Wireless Network requires a careful balance of coverage,
capacity, security, and manageability. By leveraging Lightweight Access Points (LAPs) and
Wireless LAN Controllers (WLCs), organizations gain centralized control, intelligent radio
management, and consistent policy enforcement.
As wireless technology advances, Cisco continues to evolve its portfolio to meet the demands of
future networks:
1. Wi-Fi 6 (802.11ax) and Wi-Fi 6E: Offering improved efficiency, higher throughput, and
access to 6 GHz spectrum (in Wi-Fi 6E), these standards promise better performance in
dense environments.
2. Cisco Catalyst 9800 Series: Represents the next generation of wireless controllers,
offering powerful hardware and advanced software features, including full integration
with Cisco DNA Center.
By staying abreast of these trends and continually refining designs, network engineers can
ensure their Cisco Unified Wireless Networks are positioned to support emerging applications,
devices, and business needs.
1. Introduction
Wireless communication has revolutionized the way individuals and organizations connect and
exchange information. From the early days of analog cellular networks to today’s sophisticated,
high-speed broadband cellular and short-range wireless connections, the demand for mobility,
convenience, and rapid data transfer has consistently risen. However, this accelerated progress
has also introduced numerous security challenges. Cyber threats to wireless communications
have become more pervasive, demanding robust protocols, stronger encryption methods, and
standardized best practices to protect data integrity, confidentiality, and user privacy.
This chapter provides an in-depth exploration of key wireless security protocols and standards,
tracing their evolution over time and highlighting the measures implemented to guard against
emerging threats. We begin with the foundational cellular network standard, GSM, examining
its architecture, encryption, and known vulnerabilities. We then move to UMTS, illustrating how
it improved upon GSM, particularly in areas of mutual authentication and confidentiality. Next,
we turn our attention to Bluetooth—a short-range wireless technology—discussing various
versions, pairing mechanisms, common attack vectors, and mitigation strategies. We then delve
into Wi-Fi security standards, beginning with WEP (Wired Equivalent Privacy) and moving on to
WPA2 (Wi-Fi Protected Access 2), outlining encryption methods, known attacks, and best
practices for deployment.
Through detailed technical explanations, real-world case studies, and references to industry
standards, this chapter aims to provide readers with a thorough understanding of both historical
and modern approaches to wireless security. By the end, readers should have a clear
perspective on the evolution of wireless security protocols—from early vulnerabilities to today’s
multifaceted defenses—and gain insight into future considerations as wireless technologies
continue to advance.
The GSM (Global System for Mobile Communications) standard was developed by the European
Telecommunications Standards Institute (ETSI) in the late 1980s and became the dominant 2G
cellular network system worldwide (ETSI, 1992). GSM comprises several key components:
1. Mobile Station (MS): The end-user device, commonly a mobile phone or other cellular-
enabled device. Each MS houses a Subscriber Identity Module (SIM) that stores user
credentials, such as the International Mobile Subscriber Identity (IMSI) and the
authentication key Ki.
2. Base Transceiver Station (BTS): The radio access point that communicates directly with
the MS. The BTS handles the radio link protocols, transmitting and receiving data over
the air interface (commonly known as Um interface).
3. Base Station Controller (BSC): Manages multiple BTSs, handling tasks such as radio
resource allocation, frequency management, and handovers between BTSs.
4. Mobile Switching Center (MSC): Acts as the core switching node for voice calls, SMS,
and other services. It performs functions such as routing calls, managing mobility, and
interfacing with external networks (e.g., PSTN).
5. Home Location Register (HLR): A central database that contains details about each
subscriber, including their IMSI, phone number (MSISDN), subscribed services, and
location information.
6. Visitor Location Register (VLR): A regional database that temporarily stores subscriber
data for MSs currently roaming in its coverage area. It reduces the need for frequent
queries to the HLR.
7. Equipment Identity Register (EIR): Stores the International Mobile Equipment Identity
(IMEI) of mobile equipment and classifies devices as whitelisted, blacklisted, or
graylisted based on their status.
Together, these components create a robust, hierarchical system enabling seamless voice and
data services. However, as with many early standards, GSM’s security mechanisms were
designed under certain assumptions that did not foresee modern threat landscapes.
2.2 GSM Encryption Mechanisms
The following provides a concise but complete view of how GSM (Global System for Mobile
Communications) authentication and encryption work at a high level. Below is a step-by-step
breakdown of each component and how they interact:
1. Ki (Subscriber Authentication Key)
o A unique secret key permanently stored on the SIM (Subscriber Identity Module).
o The same secret key is also stored in the GSM operator's Authentication Center
(AuC) database.
2. A3 Algorithm (Authentication)
o Combines RAND and Ki to produce the signed response (SRES) used to
authenticate the subscriber.
3. A8 Algorithm (Cipher Key Generation)
o Combines RAND and Ki to produce the session cipher key Kc.
4. A5 Algorithm (Encryption/Decryption)
o A stream cipher used to encrypt and decrypt user voice/data over the air
interface in GSM.
o The SIM (in the mobile phone) and the base station (representing the GSM
network's radio base station and backend authentication system) work together
in the exchange described below.
The procedure then works as follows:
1. Authentication (A3) and Key Generation (A8)
o The GSM network (via its Base Station) sends a random number RAND to the
mobile station.
o Inside the phone, the SIM uses the A3 algorithm with inputs RAND and Ki to
compute the signed response SRES, which it returns to the network; the network
computes its own SRES from the same RAND and its stored copy of Ki.
o If the two match, the network concludes that the subscriber is genuine.
o In parallel (or immediately after computing SRES), the mobile station's SIM and
the GSM network both use the A8 algorithm to generate the cipher key Kc.
2. Outcome: Both the mobile SIM and the network now hold the same session key Kc.
3. Encryption/Decryption (A5)
o Once both the mobile SIM and the network have derived Kc, they have a shared
secret key for that session.
o Using A5:
▪ The phone's A5 algorithm takes Kc and the user's data (voice/data traffic)
and encrypts it before transmission.
▪ The Base Station's A5 algorithm decrypts the incoming data using the
same Kc.
o Purpose: encrypted communication – protecting user data (voice, SMS, and basic
data services in 2G GSM) with A5 encryption on both ends.
In practice, the operator's Home Location Register (HLR) and Authentication Center (AuC) store
each subscriber's secret key Ki. Whenever a mobile device requests service, the network issues RAND,
calculates SRES and Kc, and checks the device's response. If it is correct, the network and phone can then
encrypt traffic, ensuring confidentiality and integrity over the radio link.
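The challenge-response flow just described can be sketched in a few lines of Python. The real A3 and A8
algorithms are operator-specific (e.g., COMP128 variants) and are not reproduced here; HMAC-SHA-256
is used below purely as a stand-in keyed function, so this illustrates the shape of the protocol rather than
GSM's actual cryptography.

import hmac, hashlib, os

def a3_sres(ki, rand):
    # Stand-in for A3: derive a 4-byte signed response from Ki and RAND.
    return hmac.new(ki, b"A3" + rand, hashlib.sha256).digest()[:4]

def a8_kc(ki, rand):
    # Stand-in for A8: derive an 8-byte cipher key Kc from Ki and RAND.
    return hmac.new(ki, b"A8" + rand, hashlib.sha256).digest()[:8]

# Shared secret Ki, provisioned on the SIM and stored in the AuC (illustrative value).
ki = os.urandom(16)

# Network side: issue a fresh random challenge and compute the expected response.
rand = os.urandom(16)
expected_sres = a3_sres(ki, rand)

# SIM side: compute SRES and Kc from the received RAND and its own Ki.
sres = a3_sres(ki, rand)
kc = a8_kc(ki, rand)

print("subscriber authenticated:", hmac.compare_digest(sres, expected_sres))
print("session cipher key Kc:", kc.hex())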
• Privacy: By encrypting the over-the-air traffic, GSM ensures that casual eavesdroppers
can’t simply tune in to phone conversations.
• Simplicity: Using symmetric keys (stored on the SIM and in the AuC) and relatively
straightforward algorithms (A3, A8, A5) made early GSM networks practical.
Notably, GSM encryption only protects the air interface. Once data reaches the base station, it
may traverse network segments in plaintext, depending on the operator’s infrastructure. This
lack of end-to-end encryption beyond the radio link is a known shortcoming in GSM’s design.
GSM’s authentication process is one-sided: the network authenticates the subscriber, but the
subscriber does not authenticate the network (3GPP TS 03.20). This is achieved through a
challenge-response mechanism:
1. Challenge: The network generates a random number (RAND) and sends it to the mobile station.
2. Response Calculation: The SIM computes the Signed Response (SRES) by applying the A3
algorithm (often operator-specific) to the RAND and the subscriber’s secret key (Ki).
3. Verification: The network checks the received SRES against the value computed in the
AuC. If they match, the subscriber is granted access.
While effective at preventing unauthorized access to the network, the lack of mutual
authentication exposes GSM to rogue base station or IMSI-catcher (commonly called “Stingray”)
attacks, where attackers mimic a legitimate network to intercept or manipulate user traffic.
Beyond the lack of mutual authentication, GSM suffers from several other well-known weaknesses:
1. Lack of End-to-End Encryption: Data is typically unencrypted once it leaves the BTS, making it susceptible to eavesdropping within the operator's infrastructure if additional security measures are not in place.
2. Replay Attacks: Because GSM authentication relies on a single challenge-response exchange, an attacker who captures the exchange could replay certain messages in specific scenarios. The use of fresh RAND values typically mitigates simple replay attacks, unless operators reuse RAND or fail to maintain robust random-number generation.
3. Downgrade Attacks: In some implementations, devices can be forced or tricked into using weaker encryption algorithms (e.g., from A5/3 down to A5/1 or A5/2), simplifying cryptanalysis.
GSM has undergone enhancements with the introduction of 3G (UMTS) and later 4G (LTE) standards; the key security improvements introduced by UMTS are discussed in the next section.
While GSM remains in use, the gradual global shift to UMTS (3G), LTE (4G), and now 5G
networks has reduced the window of opportunity for exploiting older GSM vulnerabilities.
Nonetheless, many developing regions still rely on GSM extensively, and the risks outlined
remain pertinent in those contexts.
3. UMTS (3G) Security
UMTS, often referred to as 3G, introduced significant security enhancements over GSM. Standardized by the 3rd Generation Partnership Project (3GPP), UMTS aimed to address the known weaknesses of GSM, particularly the lack of mutual authentication and the vulnerabilities in the A5 encryption family (3GPP TS 33.102). Key enhancements include:
1. Mutual Authentication: Both the network and the user authenticate each other,
mitigating rogue base station attacks.
2. Longer Cryptographic Keys: UMTS uses 128-bit keys, offering stronger protection against
brute-force or time-memory trade-off attacks.
3. Integrity Protection: Integrity checks on signaling messages ensure they are neither
tampered with nor replayed.
The UMTS Authentication and Key Agreement (AKA) process is a cornerstone of 3G security
(3GPP TS 33.102). It involves the following steps:
1. Authentication Vector Generation: The Home Environment (HE), which may still be
referred to as the HLR/AUC in some architectures, generates an authentication vector
containing five elements: RAND (random challenge), XRES (expected response), CK
(ciphering key), IK (integrity key), and AUTN (authentication token).
2. Device Verification: The device (USIM) checks the AUTN to verify the network’s
authenticity. If valid, the device generates a response (RES) using the RAND and the
shared secret key (Ki) through the MILENAGE algorithms.
3. Network Verification: The network compares the RES with XRES. If they match, the
device is authenticated.
4. Session Key Establishment: Both the device and the network derive encryption and
integrity keys (CK and IK) to protect subsequent communication.
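The sketch below mirrors these four steps under simplifying assumptions: HMAC-SHA-256 stands in for the MILENAGE f1-f5 functions, the sequence number is not masked with an anonymity key, and the AMF field is omitted. The field widths (RAND, XRES, CK, IK) follow the sizes used in 3GPP TS 33.102.

```python
# Minimal sketch of UMTS AKA, with HMAC-SHA-256 standing in for the MILENAGE
# f1-f5 functions. Simplified: AUTN carries SQN || MAC, without AK masking or AMF.
import hmac, hashlib, os

def f(k: bytes, label: bytes, *parts: bytes, n: int) -> bytes:
    """Generic keyed derivation used as a stand-in for MILENAGE f1-f5."""
    return hmac.new(k, label + b"".join(parts), hashlib.sha256).digest()[:n]

K = os.urandom(16)                       # long-term key shared by USIM and AuC
SQN = (42).to_bytes(6, "big")            # sequence number tracked by both sides

# --- Home Environment (HLR/AuC): build the authentication vector ---
RAND = os.urandom(16)
XRES = f(K, b"f2", RAND, n=8)            # expected response
CK   = f(K, b"f3", RAND, n=16)           # ciphering key
IK   = f(K, b"f4", RAND, n=16)           # integrity key
MAC  = f(K, b"f1", SQN, RAND, n=8)       # network authentication code
AUTN = SQN + MAC                         # simplified authentication token

# --- USIM: verify the network, then answer the challenge ---
sqn, mac = AUTN[:6], AUTN[6:]
assert mac == f(K, b"f1", sqn, RAND, n=8)   # mutual auth: network is genuine
RES = f(K, b"f2", RAND, n=8)

# --- Serving network: verify the subscriber ---
assert RES == XRES                          # subscriber is genuine
# Both sides now hold CK (encryption) and IK (integrity) for the session.
```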
UMTS introduces separate keys for encryption and integrity. Signaling data from the mobile
device to the network is protected by message authentication codes, ensuring the messages
cannot be altered in transit. Once integrity is verified, ciphering is applied to protect
confidentiality. The standard ciphers used in UMTS include:
• UEA1 (KASUMI-based): Derived from the MISTY1 block cipher, optimized for use in
UMTS.
• UEA2 (SNOW 3G-based): A stream cipher offering improved performance and security
compared to KASUMI in certain implementations.
By separating the integrity key (IK) from the ciphering key (CK), UMTS ensures that the
compromise of one does not necessarily lead to the compromise of the other. This layered
approach significantly improves security over GSM.
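As a rough illustration of this layering, the sketch below authenticates a signaling message with IK and then ciphers it with CK. HMAC-SHA-256 and AES-CTR (via the third-party cryptography package) are stand-ins for the standardized f9 integrity and f8 ciphering functions, which in reality also take COUNT, BEARER, and DIRECTION inputs.

```python
# Sketch of UMTS-style layered protection with separate keys: integrity first
# (HMAC-SHA-256 with IK as a stand-in for f9), then ciphering (AES-CTR with CK
# as a stand-in for f8), using the `cryptography` package.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CK, IK = os.urandom(16), os.urandom(16)      # ciphering and integrity keys from AKA

signaling = b"RRC: measurement report"
mac_i = hmac.new(IK, signaling, hashlib.sha256).digest()[:4]   # 32-bit MAC-I

nonce = os.urandom(16)                        # stands in for COUNT/BEARER/DIRECTION
enc = Cipher(algorithms.AES(CK), modes.CTR(nonce)).encryptor()
ciphertext = enc.update(signaling + mac_i) + enc.finalize()

# Receiver: decrypt with CK, then verify the MAC with IK before accepting.
dec = Cipher(algorithms.AES(CK), modes.CTR(nonce)).decryptor()
plain = dec.update(ciphertext) + dec.finalize()
msg, received_mac = plain[:-4], plain[-4:]
assert hmac.compare_digest(received_mac, hmac.new(IK, msg, hashlib.sha256).digest()[:4])
```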
Layered Architecture
The UMTS security architecture is usually drawn as a layered diagram containing the following entities:
o USIM (Universal Subscriber Identity Module): The smart-card application that holds the subscriber's secret key and performs security computations.
o ME (Mobile Equipment): The handset itself, which communicates with the USIM and sends traffic over the radio interface.
o SN (Serving Network): The network currently providing service to the user, which may differ from the home network when roaming.
o AN (Access Network): The radio access network responsible for getting user data from the mobile device into the core network.
o HE (Home Environment): The user's home network (the operator with which the user has a subscription).
o User Application: Any software or service the subscriber uses on the device (e.g., an app that the user directly interacts with).
o Provider Application: The counterpart on the provider's side (e.g., the service logic and backend).
The diagram labels several communication flows with roman numerals (I), (II), (III), and (IV).
While exact labeling may differ among references, typical UMTS security interfaces work along
these lines:
1. (I) ME ↔ AN
o The device sends encrypted data over the radio interface to the access network.
2. (II) SN ↔ HE
o This path is where subscriber profile data or authentication vectors are sent from
the home network to the serving network.
3. (III) ME ↔ USIM
o This is the interface between the physical handset and the SIM/USIM card inside
it.
o The USIM holds the subscriber’s secret key and is in charge of secure operations
such as generating response tokens during authentication.
4. (IV) User Application ↔ Provider Application
o High-level application traffic (e.g., user data sessions, IP-based services) can be protected independently at this layer (for instance, end-to-end encryption over TLS).
o This sits above the standard UMTS authentication and encryption in the lower layers.
The roles of the main entities can be summarized as follows:
• USIM:
o Stores the subscriber's secret key and performs the authentication and key-generation computations on the user's behalf.
• ME (Mobile Equipment):
o The physical handset; it communicates with the USIM over interface (III) and with the network over the radio interface.
• AN (Access Network):
o Manages radio resources (the base stations, RNC in older 3G systems, etc.).
• SN (Serving Network):
o Applies the security context (keys, algorithms) for the ongoing session.
• HE (Home Environment):
o The user’s home operator, which contains the AuC (Authentication Center) and
HLR/HSS (subscriber databases).
o Generates authentication vectors for each user request and sends them to the
serving network.
• Provider Application:
o May apply end-to-end security measures (e.g., SSL/TLS) in addition to the UMTS security below.
Across these layers, UMTS provides the following security services:
• Mutual Authentication:
o Unlike older GSM systems, UMTS introduced mutual authentication. The network checks the user's credentials, and the user also verifies the network is legitimate.
o The device and USIM produce a response, and if the network receives the expected token, the authentication is successful.
• Encryption:
o Session keys are derived to encrypt traffic over the radio link (using the ciphering key CK).
• Integrity Protection:
o Signaling messages carry message authentication codes derived from the integrity key (IK), so they cannot be altered or replayed undetected.
A typical session proceeds as follows:
1. Authentication Vector Retrieval
o When the user attaches, the SN requests authentication vectors from the HE over interface (II).
2. Authentication Exchange
o The SN sends a challenge to the ME, which passes it to the USIM over (III).
o The USIM calculates a response with a secret key and returns it to the SN.
3. Key Agreement and Protection
o Once authentication succeeds, the agreed CK and IK are used to cipher and integrity-protect traffic over interface (I).
4. Roaming Scenarios
o The user sees a seamless experience, but behind the scenes, the serving network and home network cooperate to authenticate the device securely.
Despite these improvements, some challenges remain:
1. Downgrade Attacks: In areas where both GSM and UMTS coexist, malicious base
stations can force devices to switch to GSM, exposing them to older vulnerabilities.
2. Implementation Flaws: Real-world security often depends on proper implementation.
For example, weak random number generation or poor handling of key material can
undermine UMTS’s inherent strengths.
3. Cipher Cryptanalysis: The KASUMI block cipher used for UEA1 has been the subject of extensive academic analysis (see Babbage & Maximov, 2008), which is one reason UMTS also standardized the SNOW 3G-based UEA2 as an alternative.
4. Roaming Interfaces: When subscribers roam between networks, the handover process must ensure consistent security policies. Complex roaming relationships create potential areas for misconfiguration or incomplete security.
Overall, UMTS represents a substantial leap forward in wireless security architecture compared
to GSM. Its security model laid much of the groundwork for 4G (LTE) and 5G, both of which
extend and refine the mutual authentication and confidentiality concepts.
4. Bluetooth Security
Bluetooth is a short-range wireless technology standard used for a wide range of personal area
network (PAN) applications—ranging from wireless headsets and keyboards to Internet of
Things (IoT) devices and medical sensors. Because of its ubiquity, Bluetooth security has drawn
significant scrutiny. Various protocol versions have been released to address evolving security
concerns and performance requirements (Bluetooth SIG, 2021).
1. Bluetooth Classic (BR/EDR): The original Bluetooth design (Core Specification versions
1.x to 3.x), which focuses on continuous, high-throughput connections for voice and
data.
2. Bluetooth Low Energy (BLE): Introduced in Bluetooth 4.0, BLE is optimized for low-
power applications, making it ideal for IoT and wearable devices.
Both families employ frequency-hopping spread spectrum in the 2.4 GHz ISM band, shifting
among 79 channels (Classic) or 40 channels (BLE) to reduce interference.
• Bluetooth v2.1 + EDR: Introduced Secure Simple Pairing (SSP), which improved the
pairing process and provided protection against passive eavesdropping and man-in-the-
middle attacks when correctly implemented.
• Bluetooth v4.0: Added Bluetooth Low Energy (BLE). Early BLE implementations faced
security challenges, such as limited support for strong encryption due to hardware
constraints on low-power devices.
• Bluetooth v4.2 and v5.x: Increased data rates, extended range (in Bluetooth 5), and
introduced features like LE Secure Connections with Elliptic Curve Diffie-Hellman (ECDH)
for more robust key exchange.
Pairing is the process through which two Bluetooth devices establish shared keys for secure
communication. Common pairing methods include:
1. Just Works: Simplified pairing with no user confirmation, but susceptible to man-in-the-
middle (MITM) if an attacker is within range.
2. PIN Code Entry: One device displays or contains a PIN, which the user inputs on the
other device. Vulnerable to eavesdropping if the PIN is short.
3. Numeric Comparison (Bluetooth 2.1+): Each device displays a six-digit number. Users
confirm the numbers match, significantly reducing MITM risk when users follow correct
procedures.
4. Out of Band (OOB) Pairing: Uses an external channel (e.g., NFC) to securely transmit
cryptographic parameters, often considered the most secure if the OOB channel itself is
secure.
The choice of pairing mechanism often depends on device capabilities and user convenience
requirements.
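To illustrate why numeric comparison frustrates a man-in-the-middle, the toy sketch below derives the six-digit check value both devices display from the exchanged public keys and per-pairing nonces. HMAC-SHA-256 stands in for the specification's g2 function, and the key material here is random placeholder bytes.

```python
# Sketch of a numeric-comparison confirmation value in the spirit of Secure
# Simple Pairing: both devices hash the exchanged public keys and nonces and
# display the result modulo 10^6. HMAC-SHA-256 stands in for the spec's g2.
import hashlib, hmac, os

def numeric_check(pk_a: bytes, pk_b: bytes, na: bytes, nb: bytes) -> int:
    digest = hmac.new(nb, pk_a + pk_b + na, hashlib.sha256).digest()
    return int.from_bytes(digest[-4:], "big") % 1_000_000

pk_a, pk_b = os.urandom(32), os.urandom(32)   # exchanged public keys (placeholder bytes)
na, nb = os.urandom(16), os.urandom(16)       # per-pairing random nonces

# Each device computes the value locally; the user confirms both displays match.
print(f"{numeric_check(pk_a, pk_b, na, nb):06d}")
```

Because the displayed digits depend on both public keys and both nonces, an attacker who substitutes their own keys in each direction cannot make both devices show the same value except with roughly one-in-a-million probability.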
Once paired, Bluetooth devices use link keys derived from the pairing process to establish an
encrypted channel. Early versions used the E0 stream cipher, which had known weaknesses
under certain conditions. Modern implementations (especially in BLE Secure Connections
mode) rely on AES-CCM (Counter with CBC-MAC) with 128-bit keys, providing strong encryption
and data integrity when properly configured (Bluetooth SIG, 2021).
In BLE Secure Connections, ECDH is used for key agreement, which significantly increases
security by making it computationally infeasible to derive the link key from a passive eavesdrop
or to mount an active MITM without detection.
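The sketch below, which assumes the third-party cryptography package, shows the shape of this exchange: an ephemeral P-256 ECDH agreement, derivation of a 128-bit key, and AES-CCM protection of link traffic. HKDF stands in for the specification's AES-CMAC-based f5 key-derivation function, and real pairing additionally mixes in device addresses, nonces, and the chosen association model.

```python
# Sketch of BLE-style LE Secure Connections key agreement and link encryption,
# using the `cryptography` package. HKDF is a stand-in for the spec's f5 KDF.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# Each device generates an ephemeral P-256 key pair and exchanges public keys.
initiator_priv = ec.generate_private_key(ec.SECP256R1())
responder_priv = ec.generate_private_key(ec.SECP256R1())

# ECDH: both sides compute the same shared secret (DHKey).
dhkey_i = initiator_priv.exchange(ec.ECDH(), responder_priv.public_key())
dhkey_r = responder_priv.exchange(ec.ECDH(), initiator_priv.public_key())
assert dhkey_i == dhkey_r

# Derive a 128-bit Long Term Key (LTK) from the shared secret.
ltk = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
           info=b"ble-ltk-demo").derive(dhkey_i)

# Link-layer traffic is protected with AES-CCM (BLE uses a 4-byte MIC).
aesccm = AESCCM(ltk, tag_length=4)
nonce = os.urandom(13)
ciphertext = aesccm.encrypt(nonce, b"heart-rate: 72 bpm", None)
assert aesccm.decrypt(nonce, ciphertext, None) == b"heart-rate: 72 bpm"
```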
Despite these improvements, several classes of attacks remain relevant:
1. Bluejacking: Sending unsolicited messages to nearby discoverable devices; more a nuisance than a data breach, but it exploits overly permissive discoverability.
2. Bluesnarfing: Unauthorized retrieval of data (contacts, messages, files) from a vulnerable device.
3. Bluebugging: Exploiting implementation flaws to take remote control of a device's functions.
4. Man-in-the-Middle Attacks: If devices use the "Just Works" pairing method or have no user interaction, an attacker could intercept or alter data in transit, particularly if they can trick users into pairing with a rogue device.
5. Battery Drain Attacks: Especially in low-energy devices, attackers can keep forcing
connections or sending requests to drain battery life.
Mitigation Techniques
• Use Secure Pairing Methods: Prefer numeric comparison or OOB pairing over “Just
Works.”
• Regularly Update Firmware: Many Bluetooth security vulnerabilities stem from
outdated implementations in device firmware.
• Monitor for Unusual Activity: Especially in enterprise or medical contexts, logging and
anomaly detection can identify rogue connections.
By adhering to best practices and using updated hardware that supports modern cryptographic
standards, Bluetooth devices can significantly mitigate common attacks.
5. WEP (Wired Equivalent Privacy)
Wired Equivalent Privacy (WEP) was introduced as part of the original IEEE 802.11 wireless LAN
standard, aiming to provide data confidentiality comparable to traditional wired networks.
Despite its intentions, WEP is now widely recognized as fundamentally flawed (IEEE, 1999). Its
cryptographic weaknesses led to widespread real-world exploits, resulting in its deprecation in
favor of more secure standards like WPA and WPA2.
1. Initialization Vector (IV)
1. For every frame, the sender generates a 24-bit Initialization Vector (IV).
2. This 24-bit IV is sent (often in the clear) along with the encrypted data so that the receiver can decrypt properly.
Key Points
• Because it is only 24 bits, the IV space is not very large, which leads to one of WEP’s
well-known weaknesses: frequent IV reuse.
• Each frame (packet) in WEP uses a new IV, but it is trivially small, so collisions (repeated
IVs) are likely in busy networks.
2. Per-Packet Key Construction
1. The 24-bit IV is concatenated (i.e., appended) with a shared secret key (sometimes
called the WEP key or shared key).
o The shared secret key is typically 40 bits (in older legacy WEP) or 104 bits (in
“128-bit WEP,” which actually has a 104-bit key plus 24-bit IV).
2. This concatenation of [IV || shared key] produces the Per Packet Key.
Key Points
• The same shared secret key is used for many packets, but the IV is supposed to change
with each packet.
• Because the IV is short, it may be reused over time in a busy network, exposing
vulnerabilities.
3. RC4 Algorithm
1. The per packet key—which is the concatenation of IV and shared key—is fed into the
RC4 keystream generator.
2. RC4 outputs a keystream of pseudo-random bytes.
3. This keystream will be the same length as the payload plus integrity check (IC)
combined.
Key Points
• Any weaknesses in the way WEP uses RC4 (such as predictable IV) expose the encryption
to statistical attacks.
4. Integrity Check Value (ICV) Generation
1. Separately, the plaintext payload (the actual user data) is passed to the CRC Generation
Algorithm.
2. The result is an Integrity Check field—often referred to as the ICV (Integrity Check
Value)—which is appended to the plaintext payload.
Key Points
• In modern Wi-Fi security (WPA, WPA2), much stronger integrity checks (Michael, CCMP,
etc.) are employed.
5. Encryption (XOR with the Keystream)
1. The keystream from the RC4 algorithm is XORed with the combined data (payload + ICV).
2. The result of this XOR is the ciphertext.
3. This ciphertext is sent along with the IV (in cleartext) to the receiver.
Key Points
• XOR with an RC4 keystream provides confidentiality only; if the same IV and key are ever reused, the identical keystream is produced, and captured frames can be combined to recover plaintext.
6. Transmission and Decryption
1. Once the ciphertext is formed, the 24-bit IV is prepended to it (sometimes placed in the
header) and transmitted.
2. On the receiver’s side, the IV is used (along with the shared key) to re-generate the same
RC4 keystream.
3. The receiver XORs the received ciphertext with the keystream to recover (payload + ICV).
4. The receiver verifies the ICV to check integrity—though this check can be bypassed in
known attacks.
1. Limited IV Space: With only 24 bits, IV values repeat quickly on a busy network, so the same per-packet key (and hence the same RC4 keystream) is eventually reused, enabling statistical recovery of plaintext (a toy sketch of the construction and its consequences follows this list).
2. RC4 Key Scheduling Vulnerabilities: The combination of the IV and the static key in RC4's Key Scheduling Algorithm (KSA) is susceptible to known statistical attacks (Fluhrer, Mantin, & Shamir, 2001). By analyzing patterns in how RC4 initializes for different IV values, attackers can reconstruct the key.
3. Weak Integrity Mechanism: WEP’s ICV is a simple CRC-32, which is linear and does not
provide cryptographic integrity. Attackers can flip bits in ciphertext and then recalculate
a new ICV without knowing the key, leading to message injection or forgery attacks.
4. Static Keys: Many older network configurations used a single WEP key shared among
multiple users. If any user or device is compromised, the entire network is at risk.
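The toy model below, using only the Python standard library, ties the construction steps together and then demonstrates weakness 3: because CRC-32 is linear, an attacker who knows only the difference between the transmitted plaintext and a desired forgery can patch the ciphertext and its ICV without ever learning the key. The frame layout and payload are simplified placeholders.

```python
# Toy model of WEP framing: RC4(IV || key) keystream XORed over (payload || CRC-32 ICV).
# Sizes follow "104-bit" WEP (104-bit key + 24-bit IV). Educational sketch only.
import os, zlib, struct

def rc4(key: bytes, length: int) -> bytes:
    """Plain RC4 KSA + PRGA, returning `length` keystream bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(length):                   # keystream generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key: bytes, iv: bytes, payload: bytes) -> bytes:
    icv = struct.pack("<I", zlib.crc32(payload))          # CRC-32 "integrity" value
    plaintext = payload + icv
    keystream = rc4(iv + shared_key, len(plaintext))      # per-packet key = IV || key
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

shared_key = os.urandom(13)    # 104-bit shared secret
iv = os.urandom(3)             # 24-bit IV, sent in the clear
payload = b"PAY 010 USD TO A"
frame = wep_encrypt(shared_key, iv, payload)

# Bit-flip forgery: CRC-32 is linear, so an attacker who knows only the plaintext
# difference `delta` can patch the ciphertext and the encrypted ICV without the key.
delta = bytes(a ^ b for a, b in zip(payload, b"PAY 999 USD TO A"))
icv_fix = zlib.crc32(delta) ^ zlib.crc32(bytes(len(delta)))
forged = bytes(c ^ d for c, d in zip(frame, delta + struct.pack("<I", icv_fix)))

# The receiver decrypts the forged frame and its CRC still verifies.
keystream = rc4(iv + shared_key, len(frame))
recovered = bytes(c ^ k for c, k in zip(forged, keystream))
assert recovered[:-4] == b"PAY 999 USD TO A"
assert struct.unpack("<I", recovered[-4:])[0] == zlib.crc32(recovered[:-4])
```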
Shortly after WEP’s adoption, researchers and hobbyists demonstrated practical tools to crack
WEP keys in mere minutes using readily available hardware:
1. AirSnort and WEPCrack: Early open-source tools that automated the process of
capturing IV collisions and performing cryptanalysis.
2. Fragmentation Attacks: Exploited how 802.11 fragmentation interacts with WEP,
allowing partial decryption and eventual key recovery.
3. ARP Injection Attacks: Leveraged the predictable nature of ARP requests to rapidly
increase IV collection, speeding up key recovery efforts.
These tools underscored that WEP, once thought to provide “wired equivalent” security, could
be quickly and systematically broken.
By the mid-2000s, IEEE had formally deprecated WEP in favor of WPA/WPA2. Key reasons
include:
• Insufficient Key Length and IV Size: 40-bit and 104-bit WEP keys with a 24-bit IV proved
inadequate against modern computing power.
• Lack of Robust Integrity Checks: WEP offers no real protection against tampering and
injection.
In modern networks, WEP is considered obsolete. Regulatory bodies and industry best practices
strongly discourage its use (Wi-Fi Alliance, 2004). Devices supporting only WEP pose a security
liability and often need upgrading to support WPA2 or higher.
6. WPA2 (Wi-Fi Protected Access 2)
WPA2, standardized under IEEE 802.11i, is widely recognized as the benchmark for securing Wi-
Fi networks (IEEE, 2004). It addressed many of the shortcomings of WEP and introduced robust
encryption and authentication mechanisms. Though WPA3 has since emerged, WPA2 remains in
broad use worldwide, making it a focal point for wireless security.
6.1 Improvements Over WEP
1. Strong Encryption (AES-CCMP): WPA2 mandates the use of the Advanced Encryption
Standard (AES) with the Counter Mode with Cipher Block Chaining Message
Authentication Code Protocol (CCMP). This offers 128-bit keys and robust cryptographic
integrity checks.
2. Dynamic Key Management: Fresh pairwise and group keys are negotiated for each association via the four-way handshake (Section 6.2), rather than relying on a single static key.
3. Cryptographic Integrity Protection: CCMP provides a strong message integrity code, preventing the bit-flipping and injection attacks that plagued WEP's CRC-32 ICV.
4. Backward Compatibility with TKIP: While not recommended for new deployments, WPA2 can support the Temporal Key Integrity Protocol (TKIP) for older hardware, allowing a gradual transition away from WEP-era equipment.
6.2 Key Management and Encryption Methods
WPA2 uses a four-way handshake to establish fresh session keys, also referred to as Pairwise
Transient Keys (PTKs), each time a device joins the network:
1. AP sends ANonce: The Access Point (AP) generates a random number (ANonce).
2. Client sends SNonce and MIC: The client (STA) generates its own random number (SNonce), derives the Pairwise Transient Key (PTK) from the Pairwise Master Key (PMK) and both nonces, and includes a Message Integrity Check (MIC) computed with the PTK (see the key-derivation sketch below).
3. AP sends Group Key (GTK): The AP securely delivers the Group Temporal Key (GTK), used
for broadcast and multicast traffic, encrypted with the PTK.
4. Client Confirms Key Installation: The client sends a final message indicating it has
installed the keys.
AES-CCMP provides both confidentiality and integrity using a block cipher mode that counters
replay attacks. Each packet has a unique packet number (PN) used in the AES counter,
preventing reuse of the same keystream.
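The sketch below shows where those session keys come from, following the commonly documented WPA2-Personal construction: PBKDF2 turns the passphrase and SSID into the PMK, and an HMAC-SHA-1 PRF expands the PMK, both MAC addresses, and both nonces into the PTK. The passphrase, SSID, addresses, and nonces are arbitrary placeholders, and real implementations add MIC verification, GTK delivery, and replay counters.

```python
# Simplified sketch of WPA2-PSK key derivation: PBKDF2 for the PMK and the
# IEEE 802.11 HMAC-SHA1 PRF for the PTK. Real CCMP handling adds much more.
import hashlib, hmac, os

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 256 bits)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def prf(key: bytes, label: bytes, data: bytes, length: int) -> bytes:
    # 802.11 PRF: concatenated HMAC-SHA1 blocks over label || 0x00 || data || counter
    out, i = b"", 0
    while len(out) < length:
        out += hmac.new(key, label + b"\x00" + data + bytes([i]), hashlib.sha1).digest()
        i += 1
    return out[:length]

def derive_ptk(pmk, aa, spa, anonce, snonce, length=48):
    # PTK = PRF(PMK, "Pairwise key expansion", min/max of MAC addresses and nonces)
    data = min(aa, spa) + max(aa, spa) + min(anonce, snonce) + max(anonce, snonce)
    return prf(pmk, b"Pairwise key expansion", data, length)

pmk = derive_pmk("correct horse battery staple", "ExampleSSID")
ptk = derive_ptk(pmk, os.urandom(6), os.urandom(6), os.urandom(32), os.urandom(32))
kck, kek, tk = ptk[:16], ptk[16:32], ptk[32:48]   # MIC key, key-encryption key, temporal key
```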
Despite its strengths, several attacks against WPA2 have been demonstrated:
1. KRACK (Key Reinstallation Attack): Discovered by Vanhoef and Piessens (2017), KRACK
targets the four-way handshake. By manipulating and replaying handshake messages, an
attacker can trick a client into reinstalling an already-in-use key with a reset packet
number, effectively decrypting or injecting data. Patches to client devices are critical in
mitigating KRACK.
2. Offline Dictionary Attacks: In WPA2-Personal deployments, a captured four-way handshake can be attacked offline, so the network's security ultimately rests on passphrase strength.
To reduce these risks, the following practices are recommended:
1. Use Strong, Unique Passphrases: Long, random passphrases resist offline dictionary and brute-force attacks against captured handshakes.
2. Regularly Update Firmware: Patch vulnerabilities like KRACK and keep devices current with security updates.
3. Disable WPS (Wi-Fi Protected Setup): WPS PIN-based setups are vulnerable to brute-force attacks. If required, ensure only push-button or NFC-based pairing is allowed.
4. Monitor and Audit Networks: Conduct regular wireless security assessments (e.g., using WPA2 handshake capture and offline analysis to verify passphrase strength).
By following these best practices, organizations and individuals can significantly reduce the risk
of wireless compromise under WPA2 networks.
7. Conclusion
The progression from GSM to UMTS, from Bluetooth 1.0 to newer versions, and from WEP to
WPA2 reflects a broader narrative in wireless security: as technologies mature and threats
become more sophisticated, protocols must evolve to maintain confidentiality, integrity, and
availability. Early implementations like GSM and WEP focused on basic encryption and
authentication but failed to anticipate large-scale surveillance, advanced cryptanalysis, and the
explosion of connected devices we see today. UMTS introduced mutual authentication,
improving resilience against rogue base stations. Similarly, newer Bluetooth versions leveraged
stronger pairing methods and encryption to address vulnerabilities like Bluejacking,
Bluebugging, and Bluesnarfing. On the Wi-Fi front, WEP’s fundamental flaws gave way to
WPA2’s robust AES-based encryption and dynamic key management.
Key Takeaways
• GSM to UMTS: This transition showcased the shift from unilateral authentication to
mutual authentication, demonstrating the necessity of verifying both network and
subscriber to combat devices that impersonate legitimate network elements.
• Bluetooth Security Evolution: Pairing methods became more sophisticated (e.g., Secure
Simple Pairing, LE Secure Connections), acknowledging that user interaction is often a
critical element in preventing MITM attacks.
• WEP to WPA2: The rapid demise of WEP underscored the importance of cryptographic
robustness and proper key management. WPA2’s AES-CCMP and four-way handshake
significantly raised the bar.
Despite these advances, wireless security remains a moving target. Emerging standards like LTE,
5G, and Wi-Fi 6 (802.11ax) continue to refine authentication procedures, encryption algorithms,
and frequency utilization. The proliferation of IoT devices adds complexity: not all devices can
handle the computational overhead of robust encryption, leaving low-powered sensors and
consumer gadgets exposed if not thoughtfully designed.
Future Considerations
• Device Identity Management: With billions of IoT devices joining networks, robust
methods to identify and authenticate devices at scale—beyond traditional SIM-based
models—are critical.
• User Education and Policy: Even the strongest protocols falter with weak passphrases,
outdated firmware, or user ignorance. Effective training and clear security policies
remain essential.
In conclusion, wireless security is in a constant state of evolution. The lessons learned from the
vulnerabilities and subsequent enhancements in GSM, UMTS, Bluetooth, WEP, and WPA2
continue to inform modern standards and best practices. As technological innovation
accelerates—driven by the demands for higher data rates, lower latency, and massive device
connectivity—the security community must remain vigilant. Establishing robust, future-proof
encryption and authentication schemes, along with user awareness and policy enforcement, will
ensure that wireless communications remain both accessible and secure in the years to come.
8. References
• Babbage, S., & Maximov, A. (2008). An Analysis of the KASUMI Block Cipher. Selected
Areas in Cryptography.
• IEEE. (1999). IEEE 802.11 Standard for Wireless LAN Medium Access Control (MAC) and
Physical Layer (PHY) Specifications.
• IEEE. (2004). IEEE 802.11i-2004: Medium Access Control (MAC) Security Enhancements.
• 3GPP TS 03.20. (n.d.). Security Related Network Functions. 3rd Generation Partnership
Project.
• Nohl, K., & Paget, C. (2010). GSM: SRLabs Security Research Presentations. CCC
Conference.
• Vanhoef, M., & Piessens, F. (2017). Key Reinstallation Attacks: Forcing Nonce Reuse in
WPA2. ACM Conference on Computer and Communications Security (CCS).