WT All Modules

Module 2:

Wide Area Wireless Networks


Principle of Cellular Communication – Frequency Reuse concept, cluster size and system capacity, co-channel interference and signal quality; GSM – System Architecture, GSM Radio Subsystem, Frame Structure; GPRS and EDGE – System Architecture; UMTS – Network Architecture; CDMA 2000 – Network Architecture; LTE – Network Architecture; Overview of LoRa & LoRaWAN.

Principle of Cellular Communication:

Cellular communication is a form of communication technology that enables the use of mobile phones. A mobile phone is a bidirectional radio that enables simultaneous transmission and reception. Cellular communication is based on the geographic division of the communication coverage area into cells. Each cell is allocated a given number of frequencies (or channels) that allow a large number of subscribers to conduct conversations simultaneously.

Features of Cellular Systems


Wireless cellular systems solve the problem of spectral congestion and increase user capacity. The features of cellular systems are as follows −
● Offer very high capacity in a limited spectrum.
● Reuse of radio channels in different cells.
● Enable a fixed number of channels to serve an arbitrarily large number of users by reusing the channels throughout the coverage region.
● Communication is always between the mobile and the base station (not directly between mobiles).
● Each cellular base station is allocated a group of radio channels within a small geographic area called a cell.
● Neighboring cells are assigned different channel groups.
● By limiting the coverage area to within the boundary of the cell, the channel groups may be reused to cover different cells.
● Keep interference levels within tolerable limits.
● Frequency reuse or frequency planning.
Organization of a Wireless Cellular Network
A cellular network is organized around multiple low-power transmitters, each of 100 W or less.
Shape of Cells
The coverage area of a cellular network is divided into cells, each cell having its own antenna for transmitting the signals. Each cell has its own frequencies. Data communication in a cellular network is served by its base station transmitter, receiver and control unit.
The shape of cells can be either square or hexagonal −
Square
A square cell has four neighbors at distance d and four at distance √2·d
●​ Better if all adjacent antennas equidistant
●​ Simplifies choosing and switching to new antenna
Hexagon
A hexagon cell shape is highly recommended for its easy coverage and calculations. It offers the following
advantages −
●​ Provides equidistant antennas
●​ Distance from center to vertex equals length of side

The common element of all generations of cellular communication technologies is the use of defined radio frequencies (RF), as well as frequency reuse. This enables the provision of service to a large number of subscribers while reducing the number of channels (bandwidth) required. It also enables the creation of wide communication networks by fully integrating the advanced capabilities of the mobile phone. The increase in demand and consumption, as well as the development of different types of services, accelerated the rapid technological development of advanced cellular communication networks, together with unceasing improvement of the cellular devices themselves.

Most common types of communication technology

● Global System for Mobile (GSM) Communications
● Code Division Multiple Access (CDMA)
● Universal Mobile Telecommunications System (UMTS)
● Long Term Evolution (LTE), using the Orthogonal Frequency Division Multiplexing (OFDM) method
● Adaptive communication
Global System for Mobile (GSM) Communications
GSM communication technology is based on the GSM standard – the first to use a cellular protocol that replaced the earlier first-generation communication standard. This standard was developed by the European Telecommunications Standards Institute (ETSI), starting from 1982, for the second generation (2G) of digital cellular communication. The standard, defined as digital, was based on optimal switching of a communication network to full-duplex voice telephony, and was subsequently expanded to include packet data transfer. From 1989 the GSM standard was enhanced to become an international standard, and it came to cover up to 90% of second-generation phone activity, with SIM cards in use in 219 countries and territories. GSM (second generation) technology replaced restrictive analog communication and was a technological turning point, which was followed by the development of innovative cellular communication technologies. The second generation of GSM thus constituted the foundation for subsequent generations of cellular communication. In Israel a GSM cellular system was set up in 1999 by "Partner", operating under the trade name of "Orange". In 2001 "Cellcom" joined the providers of GSM in Israel, after installing a GSM network parallel to the time division multiple access (TDMA) network it operated previously. In 2009 "Pelephone" also began using GSM technology.

Code Division Multiple Access (CDMA) technology


CDMA technology was originally developed for the US Army during the Vietnam War, as a way of
disguising conversations intended for military purposes. This method separates different conversations by
coding rather than by time sharing (as in the TDMA/GSM technologies) or by frequency sharing (FDMA)
as with the NAMPS technology. The method of separation by coding enables conducting a large number
of conversations simultaneously over the same range of frequencies, with no interference between them.

"Qualcomm", which developed this technology, applied it to cellular communications that use coded
speech at different rhythms – a technology whereby the cellular device receives simultaneous information
from a number of base stations. This technology ensures the continuity of conversations during movement
from one cell to another.

Universal Mobile Telecommunication Systems (UMTS) technology


UMTS technology, based on Wideband Code Division Multiple Access (W-CDMA) technology, is one of
the third-generation (3G) technologies of mobile phone telephony. This technology was designed by the
Third Generation Partnership Project (3GPP), a collaboration between groups of telecommunications
associations to create a globally applicable third-generation mobile phone system, and represents the
European-Japanese counterpart to the International Mobile Telecommunications for the year 2000
(IMT-2000) International Telecommunications Union (ITU) specifications for cellular communication
systems.
Long Term Evolution (LTE) technology, operating according to the method of Orthogonal
Frequency Division Multiplexing (OFDM)
LTE is not only an additional generation in the evolution of cellular technology, but rather one that is
being developed while considering the future requirements of wireless data communication and the
scientific and technological developments in this field. This is due to its ability to transmit data at a rate
of hundreds of megabits per second, up to a gigabit per second, at low cost. The rise of LTE today
and in the near future may resemble the revolution caused by the introduction of mobile phone
technology in the 1980s, and even the appearance of Wi-Fi.

Adaptive communication
An innovative feature of CDMA technology and other new communication technologies is the close
monitoring of power which enables adaptive communication. This feature allows the cellular device to
vary its power dynamically at any given time. This means that a cellular communication network using
this technology and others may conduct dynamic communications adapted to the conditions of reception
and the quality of communication.

Frequency Reuse
A cellular network is an underlying technology for mobile phones, personal communication systems, wireless networking, etc. The technology was developed for mobile radio telephony to replace high-power transmitter/receiver systems. Cellular networks use lower power, shorter range and more transmitters for data transmission.
Frequency reuse is the concept of using the same radio frequencies within a given area, in cells that are separated by a considerable distance, with minimal interference, to establish communication.
Frequency reuse offers the following benefits −
●​ Allows communications within cell on a given frequency
●​ Limits escaping power to adjacent cells
●​ Allows re-use of frequencies in nearby cells
●​ Uses same frequency for multiple conversations
●​ 10 to 50 frequencies per cell
For example, if K is the total number of frequencies used in the system and the N cells of a cluster share them equally, then the number of frequencies per cell is K/N.
In the Advanced Mobile Phone System (AMPS), with K = 395 and N = 7, the average number of frequencies per cell is 395/7 ≈ 56.
Frequency Reuse is the scheme in which allocation and reuse of channels throughout a coverage region
is done. Each cellular base station is allocated a group of radio channels or Frequency sub-bands to be
used within a small geographic area known as a cell. The shape of the cell is Hexagonal. The process of
selecting and allocating the frequency sub-bands for all of the cellular base station within a system is
called Frequency reuse or Frequency Planning.

Salient Features of using Frequency Reuse:

● Frequency reuse improves spectral efficiency and signal quality (QoS).
● The classical frequency reuse scheme proposed for GSM systems offers protection against interference.
● The number of times a frequency can be reused depends on the tolerance of the radio channel to interference from nearby transmitters using the same frequencies.
● In a frequency reuse scheme, the total bandwidth is divided into different sub-bands that are used by cells.
● Frequency reuse schemes allow WiMAX system operators to reuse the same frequencies at different cell sites.

Cells with the same letter use the same group of channels or frequency sub-band.
To find the total number of channels allocated to a cell:
S = total number of duplex channels available for use
k = number of channels allocated to each cell (k < S)
N = total number of cells in the cluster (cluster size)
Then the total number of channels, S, will be
S = kN

Frequency Reuse Factor = 1/N

In the diagram above the cluster size is 7 (A, B, C, D, E, F, G); thus the frequency reuse factor is 1/7.
N, the number of cells which collectively use the complete set of available frequencies, is called a cluster.
The value of N is calculated by the following formula:
N = i² + i·j + j², where i, j = 0, 1, 2, 3, …
Hence, possible values of N are 1, 3, 4, 7, 9, 12, 13, 16, 19 and so on.
If a cluster is replicated or repeated M times within the cellular system, then the capacity, C, will be
C = MkN = MS
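To tie these relationships together, here is a minimal Python sketch that checks the valid cluster sizes from N = i² + i·j + j² and evaluates k = S/N and C = MkN using the AMPS-style numbers from the example above; the cluster replication count M is an assumed illustrative value.

```python
def valid_cluster_sizes(max_ij=4):
    """Cluster sizes N = i^2 + i*j + j^2 for non-negative integers i, j."""
    sizes = {i*i + i*j + j*j for i in range(max_ij + 1) for j in range(max_ij + 1)}
    return sorted(n for n in sizes if n > 0)

S = 395          # total duplex channels (AMPS example above)
N = 7            # cluster size
M = 10           # number of times the cluster is replicated (assumed value)

k = S // N       # channels per cell; 395/7 is not exact, so k is floored to 56
C = M * k * N    # system capacity C = M*k*N (= M*S when S = k*N exactly)

print(valid_cluster_sizes())   # [1, 3, 4, 7, 9, 12, 13, 16, 19, ...]
print(f"k = {k} channels/cell, reuse factor = 1/{N}, capacity C = {C} channels")
```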
Co-channel Interference:
Co-channel interference (CCI) is crosstalk from two different radio transmitters using the same channel. Co-channel interference can be caused by many factors, from weather conditions to administrative and design issues, and may be controlled by various radio resource management schemes.
In frequency reuse there are several cells that use the same set of frequencies. These cells are called co-channel cells, and they cause interference to one another. To keep this interference tolerable, cells that use the same set of channels or frequencies are separated from one another by a sufficiently large distance. The distance between any two co-channel cells can be calculated by the following formula:
D = R · √(3N)
Where,
R = radius of a cell
N = number of cells in the cluster (cluster size)
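A small sketch of the co-channel reuse distance formula D = R·√(3N); the cell radius used below is an assumed example value, not taken from the text.

```python
import math

def cochannel_distance(radius_km, cluster_size):
    """Minimum distance between co-channel cell centres: D = R * sqrt(3 * N)."""
    return radius_km * math.sqrt(3 * cluster_size)

# Assumed example: 2 km cell radius, cluster size N = 7.
D = cochannel_distance(2.0, 7)
print(f"D = {D:.2f} km, co-channel reuse ratio Q = D/R = {D/2.0:.2f}")  # Q ≈ 4.58
```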
System Capacity
Wireless communications deal with at least two main concerns: coverage and capacity.
1. Channel Capacity
One fundamental concept of information theory is that of channel capacity, or how much information can be transmitted over a communication channel. In the 1940s Claude Shannon formalized information theory and derived the well-known Shannon capacity theorem (Theorem 17 in [15], p. 628). That theorem applies to wireless communications. A good presentation of this equation can be found in [8], p. 82; it gives a concise derivation of the equation, and includes a good introduction to important information theory concepts such as information and entropy.
The Shannon capacity equation gives an upper bound for the capacity in a non-faded channel with added white Gaussian noise:

C = W log2(1 + S/N)    (2.4)

where C = capacity (bits/s), W = bandwidth (Hz), and S/N = signal to noise (and interference) ratio.

That capacity equation assumes one transmitter and one receiver, though multiple antennas can be used in a diversity scheme on the receiving side; the formula is revisited for multi-antenna systems later. The equation singles out two fundamentally important aspects: bandwidth and SNR. Bandwidth reflects how much spectrum a wireless system uses, and explains why spectrum considerations are so important: they have a direct impact on system capacity. SNR reflects the quality of the propagation channel, and is dealt with in numerous ways: modulation, coding, error correction, and important design choices such as cell sizes and reuse patterns.
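To make equation (2.4) concrete, the following sketch evaluates the Shannon bound for an AWGN channel; the 5 MHz bandwidth and 10 dB SNR are assumed illustrative inputs, not values from the text.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Upper bound on capacity (bits/s) for an AWGN channel, equation (2.4)."""
    snr_linear = 10 ** (snr_db / 10.0)       # convert SNR from dB to a ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example (assumed values): a 5 MHz channel at 10 dB SNR.
c = shannon_capacity(5e6, 10.0)
print(f"Capacity bound: {c/1e6:.1f} Mbit/s")   # about 17.3 Mbit/s
```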
2.2.2 Cellular Capacity
The practical capacities of many wireless systems are far from the Shannon limit (although recent standards are coming close to it), and practical capacity is heavily dependent on implementation and standard choices.
Digital standards deal in their own way with how to deploy and optimize capacity. Most systems are
limited by channel width, time slots, and voice coding characteristics. CDMA systems are interference
limited, and have tradeoffs between capacity, coverage, and other performance metrics (such as dropped
call rates or voice quality).
Cellular analog capacity:
Fairly straightforward: every voice channel uses a 30 kHz frequency channel, these frequencies may be reused according to a reuse pattern, and the system is FDMA. The overall capacity simply comes from the total amount of spectrum, the channel width and the reuse pattern.
TDMA/FDMA capacity:
In digital FDMA systems, capacity improvements mainly come from the voice coding and elaborate
schemes (such as frequency hopping) to decrease reuse factor. The frequency reuse factor hides a lot of
complexity; its value depends greatly on the signal to interference levels acceptable to a given cellular
system ([1] ch. 3.2 and 9.7). TDMA systems combine multiple time slots per channel.
CDMA capacity:
A usual capacity equation for CDMA systems may be fairly easily derived as follows (for the reverse link): first examine a base station with N mobiles; its noise and interference power spectral density due to all mobiles in that same cell is ISC = (N - 1)Sα, where S is the received power density for each mobile, and α is the voice activity factor. Other-cell interference IOC is estimated as a reuse fraction β of the same-cell interference level, such that IOC = βISC (usual values of β are around 1/2). The total noise and interference at the base is therefore Nt = ISC(1 + β). Next assume the mobile signal power density received at the base station is S = R·Eb/W. Eliminating ISC, we derive:

N = 1 + (W/R) / ((Eb/Nt) · α · (1 + β))    (2.5)

where
● W is the channel bandwidth (in Hz),
● R is the user data bit rate (in bits per second),
● Eb/Nt is the ratio of energy per bit to total noise density (usually given in dB; Eb/Nt ≈ 7 dB),
● α is the voice activity factor (for the reverse link), typically 0.5,
● β is the interference reuse fraction, typically around 0.5, representing the ratio of interference due to other cells to the interference from the cell under consideration. (The number 1 + β is sometimes called the reuse factor, and 1/(1 + β) the reuse efficiency.)
This simple equation (2.5) gives us the number of voice channels in a CDMA frequency channel.
We can already see some hints of CDMA optimization and investigate certain possible improvement for a
3G system. In particular: improving α can be achieved with dim and burst capabilities, β with interference
mitigation and antenna downtilt considerations, R with vocoder rate, W with wider band CDMA, Eb⁄Nt
with better coding and interference mitigation techniques.
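As a rough sanity check on equation (2.5), the sketch below computes the number of reverse-link voice channels per CDMA carrier using the typical parameter values listed above; the 1.25 MHz carrier width and 9.6 kbps vocoder rate are assumed IS-95-style figures, not taken from the text.

```python
def cdma_reverse_link_capacity(W_hz, R_bps, eb_nt_db, alpha=0.5, beta=0.5):
    """Voice channels per CDMA carrier from equation (2.5):
    N = 1 + (W/R) / ((Eb/Nt) * alpha * (1 + beta))."""
    eb_nt_linear = 10 ** (eb_nt_db / 10.0)          # convert dB to a linear ratio
    return 1 + (W_hz / R_bps) / (eb_nt_linear * alpha * (1 + beta))

# Illustrative (assumed) values: 1.25 MHz carrier, 9.6 kbps vocoder,
# Eb/Nt = 7 dB, voice activity 0.5, other-cell interference fraction 0.5.
N = cdma_reverse_link_capacity(1.25e6, 9600, 7.0)
print(f"~{N:.0f} voice channels per carrier")        # roughly 35-36
```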
Some aspects however are omitted in this equation and are required to quantify other capacity
improvements mainly those due to power control, and softer/soft handoff algorithms.
Of course other limitations come into play for wireless systems, such as base station (and mobile)
sensitivity, which may be incorporated into similar formulas; and further considerations come into play
such as: forward power limitations, channel element blocking, backhaul capacity, mobility, and handoff.
2.3​Modulation and Coding
Modulation techniques are a necessary part of any wireless system; without them, no useful information can be transmitted. Coding techniques are almost as important, and combine two important aspects: first, to transmit information efficiently, and second, to deal with error correction (to avoid retransmissions).
2.3.1​Modulation
A continuous wave signal (at a carrier frequency fc) in itself encodes and transmits no information. The
bits of information are encoded in the variations of that signal (in phase, amplitude, or a combination
thereof). These variations cause the occupied spectrum to increase, thus occupying a bandwidth around fc;
and the optimal use of that bandwidth is an important part of a wireless system. Various modulation
schemes and coding schemes are used to maximize the use of that spectrum for different applications
(voice or high speed data), and in various conditions of noise, interference, and RF channel resources in
general.
Classic modulation techniques are well covered in several texts and we simply recall here a few important aspects of digital modulations (that will be important in link budgets). The main digital modulations used in modern wireless systems are outlined in Table 2.1.
Modulation                | Bits encoded by                | Examples
Amplitude Shift Keying    | Discrete amplitude levels      | On/off keying
Frequency Shift Keying    | Multiple discrete frequencies  |
Phase Shift Keying        | Multiple discrete phases       | BPSK, QPSK, 8-PSK
Quadrature Amplitude Mod. | Both phase and amplitude       | 16-, 64-, 256-QAM

Table 2.1: Digital modulations

Modulation is a powerful and efficient tool used to encode information; a few simple definitions are
commonly used:
Symbol
denotes the physical encoding of information, over a specific symbol time (or period) Ts, during which the
system transmits a modulated signal containing digital information.
Bit
denotes a logical bit (0 or 1) of information; one or more bits are encoded by a modulation scheme in a
symbol.
Higher order modulations can encode multiple bits in a symbol, and require higher SNR to decode error-free. The figure illustrates how multiple phases and amplitudes are used to combine multiple bits into one symbol transmission. The number of bits encoded per symbol is often expressed as a spectral efficiency in bits per second per Hertz (b/s/Hz); its relation to SNR is bounded by Shannon's theorem seen earlier.
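The bits-per-symbol relation, and the minimum SNR that Shannon's bound implies for each spectral efficiency, can be sketched as follows; treating one symbol per second per Hertz of bandwidth is a simplifying assumption made only for this illustration.

```python
import math

def bits_per_symbol(modulation_order):
    """A modulation with M distinct symbols encodes log2(M) bits per symbol."""
    return math.log2(modulation_order)

def min_snr_db(bits_per_hz):
    """Minimum SNR from Shannon's bound C/W = log2(1 + S/N), i.e. S/N = 2^(C/W) - 1."""
    return 10 * math.log10(2 ** bits_per_hz - 1)

for name, M in [("BPSK", 2), ("QPSK", 4), ("8-PSK", 8), ("16-QAM", 16), ("64-QAM", 64)]:
    b = bits_per_symbol(M)
    print(f"{name:7s}: {b:.0f} bits/symbol, Shannon minimum SNR ~ {min_snr_db(b):.1f} dB")
```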

Explain the coverage and capacity improvement techniques for cellular systems.
The performance criteria of cellular mobile systems include:
a) Voice quality.
b) Service quality, such as coverage and quality of service.
c) Number of dropped calls.
d) Special features like call forwarding, call diverting, call barring.
As the demand for wireless service increases, the number of channels assigned to a cell becomes insufficient to support the required number of users.
At this point, cellular design techniques are needed to provide more channels per unit coverage area.
There are three techniques for improving cell capacity in a cellular system, namely:
●​ Cell Splitting.
●​ Sectoring.
●​ Coverage Zone Approach.
A) CELL SPLITTING:
● It is the process of subdividing a congested cell into smaller cells, each with its own base station and a corresponding reduction in antenna height and transmitter power.
● Cell splitting increases the capacity of a cellular system since it increases the number of times that channels are reused, while preserving the frequency reuse plan.
● It defines new cells, called microcells, which have a smaller radius than the original cells and are installed between the existing cells; typically the new radius is half of the original cell radius (see the sketch after this list).
● Capacity thus increases due to the additional number of channels per unit area, without disturbing the channel allocation scheme required to maintain the minimum co-channel reuse ratio Q between co-channel cells.
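A minimal sketch of the capacity gain from cell splitting, assuming (as stated above) that the microcell radius is half the original radius and that the number of channels per cell stays the same:

```python
def cell_splitting_gain(radius_scale=0.5):
    """Halving the cell radius quarters the cell area, so roughly 1/scale^2
    microcells fit in the footprint of one original cell."""
    return 1.0 / (radius_scale ** 2)

channels_per_cell = 56                      # from the AMPS example earlier
gain = cell_splitting_gain(0.5)             # radius halved -> 4x more cells
print(f"~{gain:.0f}x more cells, ~{gain * channels_per_cell:.0f} channels "
      f"in the area previously served by one cell")
```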

B) SECTORING:
● This is another method to increase cellular capacity and coverage by keeping the cell radius unchanged and decreasing the D/R ratio.
● In this approach, capacity improvement is achieved by reducing the number of cells in a cluster and thus increasing the frequency reuse.
● The co-channel interference in a cellular system may be decreased by replacing a single omni-directional antenna at the base station with several directional antennas, each radiating within a specified sector.
● The factor by which the co-channel interference is reduced depends on the amount of sectoring used, as in the sketch below.
a) 120° sectoring  b) 60° sectoring
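A hedged sketch of why sectoring helps: with an omnidirectional antenna and cluster size N = 7 a mobile sees roughly six first-tier co-channel interferers, while 120° sectors cut this to about two and 60° sectors to about one. These interferer counts and the path-loss exponent n = 4 are standard textbook approximations assumed here, not values given in the text.

```python
import math

def s_to_i_db(cluster_size, interferers, path_loss_exp=4):
    """Approximate S/I = (sqrt(3N))^n / i0 for i0 equidistant first-tier interferers."""
    q = math.sqrt(3 * cluster_size)               # co-channel reuse ratio D/R
    return 10 * math.log10((q ** path_loss_exp) / interferers)

N = 7
for label, i0 in [("omni", 6), ("120-degree sectors", 2), ("60-degree sectors", 1)]:
    print(f"{label:20s}: S/I ~ {s_to_i_db(N, i0):.1f} dB")
```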
Advantages:
● Improvement in signal capacity.
● Improvement in signal-to-interference ratio.
● Increased frequency reuse.
Disadvantages:
● Increase in the number of handoffs.
● Increase in the number of antennas at each base station.
C) COVERAGE ZONE / MICROCELL ZONE CONCEPT:
● This approach was presented by Lee to solve the problem of an increased load on the switching and control link elements of the mobile system due to sectoring.
● It is based on a microcell concept for 7-cell reuse.
● In this scheme, each of the three zone sites is connected to a single base station and they share the same radio equipment.
● Multiple zones and a single base station make up a cell. As a mobile travels within the cell, it is served by the zone with the strongest signal.
● This approach is superior to sectoring since antennas are placed at the outer edges of the cell, and any base-station channel may be assigned to any zone by the base station.
GSM Architecture:
1.​ Base Station System (BSS)-
2.​ Network Switching Subsystem (NSS)-
3.​ Public Network

Fig. GSM architecture


1. Base Station System (BSS)-
● The Base Station System (BSS) is the radio section of a traditional cellular telephone network.
● It is responsible for handling traffic and signaling between a mobile phone and the network switching subsystem.
● Base Station Controller (BSC)
● Base Transceiver Station (BTS)
2. Base Station Controller (BSC):
● A Base Station Controller (BSC) is a critical mobile network component that controls one or more base transceiver stations (BTS), also known as base stations or cell sites.
3. Base Transceiver Station (BTS):
● The base transceiver station (BTS) is the term used to denote a base station in GSM terminology.
● A BTS consists of an antenna and the radio equipment necessary to communicate by radio with a mobile station (MS). Each BTS covers a defined area, called a cell.
4. Network Switching Subsystem (NSS)-
● Mobile Switching Centre (MSC), with the following associated parts:
● The Home Location Register (HLR)
● The Visitor Location Register (VLR)
● The Authentication Centre (AUC)
● The Equipment Identity Register (EIR)
● Operation and Maintenance Center (OMC)
5. Mobile Switching Centre (MSC) –
It is the main part of the GSM and CDMA network system, acting as the control center of the Network Switching Subsystem (NSS). It connects calls between subscribers by switching the digital voice packets between network paths.
6. The Home Location Register (HLR) –
● Stores permanent data about subscribers, such as profile, location information and status.
● Subscription information of registered users is stored here.
7. The Visitor Location Register (VLR) –
● Stores temporary information, is integrated with the MSC and works in coordination with the HLR.
8. The Authentication Centre (AUC) –
● A protected database.
● Stores a copy of the secret key.
● Used for authentication.
● Protects against different types of fraud.
9. The Equipment Identity Register (EIR) –
● A database that contains a list of all valid mobiles on the network.
● The IMEI is used to identify each MS.
● The IMEI is marked as invalid in case the equipment is stolen.
10. Operation and Maintenance Center (OMC) –
● This department maintains all telecommunications hardware and network operations within a particular market.
● It also manages all charging and billing procedures.
● It also manages all mobile equipment in the system.
11. Public Network
● The Public Switched Telephone Network (PSTN)
● Integrated Services Digital Network (ISDN)
● Data Networks
12. The Public Switched Telephone Network (PSTN) –
● The public switched telephone network (PSTN) is the worldwide collection of interconnected public telephone networks that was designed primarily for analog calls.
● The PSTN was originally an analog system, but it is now almost entirely digital.
● The PSTN uses Signaling System No. 7 (SS7) as its signaling protocol.
● SS7 is used to set up and terminate a telephone call.
13. Integrated Services Digital Network (ISDN) –
● ISDN is a set of international communication standards designed in the 1980s and improved in the 1990s.
● It is a digital network used to transmit voice, image, video and text over the existing circuit-switched PSTN telephone network.
Advantages of GSM:
● There are many types of handsets and service providers available in the market, so the buyer has a lot of options to choose from.
● In GSM, there is a variety of plans with cheaper call rates and free messaging facilities.
● Call quality in GSM is better and security is stronger as compared to CDMA.
● A number of value added services, such as GPRS, make GSM a good choice.
● The power consumption of GSM mobiles is lower.
● With a tri-band GSM phone, one can use the phone almost anywhere around the world.
● Less signal distortion inside buildings.
Disadvantages of GSM:
● The per-unit charge of GSM is higher than CDMA.
● If the SIM gets lost, one can lose all the data if it is not also saved in the phone.
● GSM has a fixed maximum cell-site range of 120 km, imposed by technical limitations (expanded from the older limit of 35 km).
● Signals can be detected (intercepted) more easily in GSM as compared to CDMA.

GSM - The Base Station Subsystem (BSS) / GSM Radio Subsystem:

The BSS is composed of two parts:

● The Base Transceiver Station (BTS)
● The Base Station Controller (BSC)
The BTS and the BSC communicate across the specified Abis interface, enabling operations between components that are made by different suppliers. The radio components of a BSS may consist of four to seven or nine cells. A BSS may have one or more base stations. The BSS uses the Abis interface between the BTS and the BSC. A separate high-speed line (T1 or E1) then connects the BSS to the Mobile Switching Center (MSC).
The Base Transceiver Station (BTS)
The BTS houses the radio transceivers that define a cell and handles the radio link protocols with the
MS. In a large urban area, a large number of BTSs may be deployed.

The BTS corresponds to the transceivers and antennas used in each cell of the network. A BTS is usually
placed in the center of a cell. Its transmitting power defines the size of a cell. Each BTS has between 1 and 16 transceivers, depending on the density of users in the cell. Each BTS serves a single cell. It also includes the following functions:
●​ Encoding, encrypting, multiplexing, modulating, and feeding the RF signals to the antenna
●​ Transcoding and rate adaptation
●​ Time and frequency synchronizing
●​ Voice through full- or half-rate services
●​ Decoding, decrypting, and equalizing received signals
●​ Random access detection
●​ Timing advances
●​ Uplink channel measurements
The Base Station Controller (BSC)
The BSC manages the radio resources for one or more BTSs. It handles radio channel setup, frequency
hopping, and handovers. The BSC is the connection between the mobile and the MSC. The BSC also
translates the 13 Kbps voice channel used over the radio link to the standard 64 Kbps channel used by
the Public Switched Telephone Network (PSTN) or ISDN.
It assigns and releases frequencies and time slots for the MS. The BSC also handles intercell
handover. It controls the power transmission of the BSS and MS in its area. The function of the BSC is
to allocate the necessary time slots between the BTS and the MSC. It is a switching device that handles
the radio resources. Additional functions include:
●​ Control of frequency hopping
●​ Performing traffic concentration to reduce the number of lines from the MSC
●​ Providing an interface to the Operations and Maintenance Center for the BSS
●​ Reallocation of frequencies among BTSs
●​ Time and frequency synchronization
●​ Power management
●​ Time-delay measurements of received signals from the MS
GSM frame structure or frame hierarchy
In GSM, a frequency band of 25 MHz is divided into smaller bands of 200 kHz each, each carrying one RF carrier; this gives 125 carriers. As one carrier is used as a guard channel between GSM and other frequency bands, 124 carriers are usable RF channels. This division of the frequency pool is called FDMA. Each RF carrier has eight time slots; this division in time is called TDMA. Each RF carrier frequency is thus shared between 8 users, so in a GSM system the basic radio resource is a time slot with a duration of about 577 microseconds (15/26 ms ≈ 0.577 ms). This time slot carries 156.25 bits, which leads to a bit rate of 270.833 kbps. This is explained below in the TDMA GSM frame structure. For E-GSM the number of ARFCNs is 174; for DCS1800 the number of ARFCNs is 374.

The GSM frame structure is designated as hyperframe, superframe, multiframe and frame. The minimum unit, the frame (or TDMA frame), is made of 8 time slots.
One GSM hyperframe is composed of 2048 superframes.
Each GSM superframe is composed of multiframes (either 26 or 51 as described below).
Each GSM multiframe is composed of frames (either 51 or 26 based on the multiframe type).
Each frame is composed of 8 time slots.
Hence there are a total of 2,715,648 TDMA frames in GSM, and the same cycle continues.
Fig. GSM Frame Structure
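The timing figures quoted above fit together as shown in this short sketch, which uses only the numbers given in the text:

```python
SLOT_MS = 15 / 26                 # one time slot = 15/26 ms, about 0.577 ms
BITS_PER_SLOT = 156.25

frame_ms = 8 * SLOT_MS                            # TDMA frame = 8 slots, about 4.615 ms
bit_rate_kbps = BITS_PER_SLOT / SLOT_MS           # about 270.833 kbps per carrier
frames_per_hyperframe = 2048 * 51 * 26            # 2048 superframes, each 51 x 26 = 1326 frames

print(f"slot = {SLOT_MS:.3f} ms, frame = {frame_ms:.3f} ms")
print(f"gross bit rate = {bit_rate_kbps:.3f} kbps")
print(f"TDMA frames per hyperframe = {frames_per_hyperframe}")   # 2,715,648
```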
As shown in the figure below, there are two variants of the multiframe structure.
1. 26-frame multiframe - Called the traffic multiframe, composed of 26 bursts in a duration of 120 ms; of these, 24 are used for traffic, one for the SACCH and one is not used.
2. 51-frame multiframe - Called the control multiframe, composed of 51 bursts in a duration of 235.4 ms.
This type of multiframe is divided into logical channels. These logical channels are time-scheduled by the BTS. They always occur at the beacon frequency in time slot 0, and may also take up other time slots (for example 2, 4, 6) if required by the system.
As shown in Fig. 3, each ARFCN (channel) in GSM has 8 time slots, TS0 to TS7. During network entry each GSM mobile phone is allocated one slot in the downlink and one slot in the uplink. In the figure the GSM mobile is allocated 890.2 MHz in the uplink and 935.2 MHz in the downlink. As mentioned, TS0 is allocated, which follows either the 51- or 26-frame multiframe structure. Hence if at the start an 'F' (FCCH) is depicted, then after 4.615 ms (one TDMA frame, i.e. eight time slot durations) an S (SCH) will appear, then after another frame a B (BCCH) will appear, and so on until the 51-frame multiframe structure is completed; the cycle continues as long as the connection between the mobile and the base station is active. Similarly, in the uplink the 26-frame multiframe structure is followed, where T is TCH/FS (traffic channel for full-rate speech) and S is SACCH. The GSM frame structure can best be understood as depicted in the figure below, with respect to the downlink (BTS to MS) and uplink (MS to BTS) directions.

Fig.3 GSM Physical and logical channel concept


Frequencies in the uplink = 890.2 + 0.2 (N-1) MHz
Frequencies in the downlink = 935.2 + 0.2 (N-1) MHz
where N is from 1 to 124 and is called the ARFCN
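A minimal sketch of the ARFCN-to-frequency mapping given above for the primary GSM 900 band (N from 1 to 124):

```python
def gsm900_frequencies(arfcn):
    """Uplink/downlink carrier frequencies (MHz) for a primary GSM 900 ARFCN (1..124)."""
    if not 1 <= arfcn <= 124:
        raise ValueError("primary GSM 900 ARFCN must be between 1 and 124")
    uplink = 890.2 + 0.2 * (arfcn - 1)
    downlink = 935.2 + 0.2 * (arfcn - 1)      # fixed 45 MHz duplex spacing
    return uplink, downlink

print(gsm900_frequencies(1))    # (890.2, 935.2), as in the figure above
print(gsm900_frequencies(124))  # (914.8, 959.8)
```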
As the same antenna is used for transmission as well as reception, a 3-time-slot delay is introduced between TS0 of the uplink and TS0 of the downlink frequency. This avoids the need for simultaneous transmission and reception by the GSM mobile phone. The 3-slot time period is used by the mobile subscriber to perform various functions, e.g. processing data, measuring the signal quality of neighbour cells, etc.
Engineers working in GSM should know the GSM frame structure for both the downlink and the uplink. They should also understand the mapping of the different channels to time slots in these frame structures.

GPRS Architecture:
The GPRS architecture works in the same way as the GSM network, but has additional entities that allow packet data transmission. This data network overlays the second-generation GSM network, providing packet data transport at rates from 9.6 to 171 kbps. Along with packet data transport, the GSM network accommodates multiple users sharing the same air interface resources concurrently.
Following is the GPRS Architecture diagram:

Fig. GPRS Architecture


GPRS attempts to reuse the existing GSM network elements as much as possible, but to effectively build
a packet-based mobile cellular network, some new network elements, interfaces, and protocols for
handling packet traffic are required.
Therefore, GPRS requires modifications to numerous GSM network elements as summarized below:
Mobile Station (MS): A new mobile station is required to access GPRS services. These new terminals are backward compatible with GSM for voice calls.

BTS: A software upgrade is required in the existing Base Transceiver Station (BTS).

BSC: The Base Station Controller (BSC) requires a software upgrade and the installation of new hardware called the Packet Control Unit (PCU). The PCU directs the data traffic to the GPRS network and can be a separate hardware element associated with the BSC.

GPRS Support Nodes (GSNs): The deployment of GPRS requires the installation of new core network elements called the Serving GPRS Support Node (SGSN) and the Gateway GPRS Support Node (GGSN).

Databases (HLR, VLR, etc.): All the databases involved in the network require software upgrades to handle the new call models and functions introduced by GPRS.
GPRS Mobile Stations
New Mobile Stations (MS) are required to use GPRS services because existing GSM phones do not
handle the enhanced air interface or packet data. A variety of MS can exist, including a high-speed
version of current phones to support high-speed data access, a new PDA device with an embedded GSM
phone, and PC cards for laptop computers. These mobile stations are backward compatible for making
voice calls using GSM.
GPRS Base Station Subsystem
Each BSC requires the installation of one or more Packet Control Units (PCUs) and a software upgrade.
The PCU provides a physical and logical data interface to the Base Station Subsystem (BSS) for packet
data traffic. The BTS can also require a software upgrade but typically does not require hardware
enhancements.
When either voice or data traffic is originated at the subscriber mobile, it is transported over the air
interface to the BTS, and from the BTS to the BSC in the same way as a standard GSM call. However, at
the output of the BSC, the traffic is separated; voice is sent to the Mobile Switching Center (MSC) per
standard GSM, and data is sent to a new device called the SGSN via the PCU over a Frame Relay
interface.
GPRS Support Nodes
The following two new components, collectively called GPRS Support Nodes (GSNs), are added: the Gateway GPRS Support Node (GGSN) and the Serving GPRS Support Node (SGSN).
Gateway GPRS Support Node (GGSN)
The Gateway GPRS Support Node acts as an interface and a router to external networks. It contains
routing information for GPRS mobiles, which is used to tunnel packets through the IP based internal
backbone to the correct Serving GPRS Support Node. The GGSN also collects charging information
connected to the use of the external data networks and can act as a packet filter for incoming traffic.
Serving GPRS Support Node (SGSN)
The Serving GPRS Support Node is responsible for authentication of GPRS mobiles, registration of
mobiles in the network, mobility management, and collecting information on charging for the use of the
air interface.
Internal Backbone
The internal backbone is an IP based network used to carry packets between different GSNs. Tunnelling
is used between SGSNs and GGSNs, so the internal backbone does not need any information about
domains outside the GPRS network. Signalling from a GSN to a MSC, HLR or EIR is done using SS7.
Routing Area
GPRS introduces the concept of a Routing Area. This concept is similar to the Location Area in GSM, except that it generally contains fewer cells. Because routing areas are smaller than location areas, fewer radio resources are used while broadcasting a page message.

Difference between GSM and GPRS :

S.No. | GSM | GPRS
1 | GSM stands for Global System for Mobile Communications. | GPRS stands for General Packet Radio Service.
2 | GSM is a cellular standard for mobile phone communications catering to voice services and data delivery using digital modulation; SMS has had a profound effect on society. | GPRS is an upgrade over the basic GSM features to obtain much higher data speeds and simpler wireless access to packet data networks than standard GSM.
3 | System generation is 2G. | System generation is 2.5G.
4 | The frequency bands used are 900 and 1800 MHz. | The frequency bands used are 850, 900, 1800 and 1900 MHz.
5 | The type of connection is a circuit-switched network. | The type of connection is a packet-switched network.
6 | It provides data rates of 9.6 kbps. | It provides data rates of 14.4 to 115.2 kbps.
7 | Billing is based on the duration of the connection. | Billing is based on the amount of data transferred.
8 | It does not allow direct connection to the internet. | It allows direct connection to the internet.
9 | It is based on TDMA. | It is based on GSM.
10 | A single time slot is allotted to a single user. | Multiple time slots can be allotted to a single user.
11 | It takes a long time to connect. | It provides a faster connection.
12 | The location area concept is used. | The routing area concept is used.
13 | SMS (Short Messaging Service) is one of the popular features. | MMS (Multimedia Messaging Service) is one of the popular features.

EDGE:
What is EDGE (Enhanced Data Rate for GSM Evolution)?
EDGE (Enhanced Data Rate For GSM Evolution) provides a higher rate of data transmission than normal GSM. It is a backward-compatible extension of GSM digital mobile technology. EDGE is a pre-3G radio technology and forms part of ITU's 3G definition. It can work on any network deployed with GPRS (with the necessary upgrades).

In order to increase data transmission speed, EDGE was deployed on the GSM network in 2003 by
Cingular in the USA.
Working
It uses 8PSK modulation in order to achieve a higher data transmission rate. The modulation format is
changed to 8PSK from GMSK. This provides an advantage as it is able to convey 3 bits per symbol, and
increases the maximum data rate. However, this upgrade required a change in the base station.
Fig. EDGE in GSM
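The three-fold gain mentioned above follows directly from the number of bits carried per modulation symbol; a minimal sketch:

```python
import math

gmsk_bits_per_symbol = 1                    # GMSK carries 1 bit per symbol
psk8_bits_per_symbol = int(math.log2(8))    # 8PSK carries 3 bits per symbol

gain = psk8_bits_per_symbol / gmsk_bits_per_symbol
print(f"8PSK carries {psk8_bits_per_symbol} bits/symbol -> ~{gain:.0f}x the raw bit "
      f"rate of GMSK at the same symbol rate")
```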
Features
● It provides an evolutionary migration path from GPRS to UMTS.
● It is standardized by 3GPP.
● EDGE is used for any packet-switched application, like an Internet connection.
● EDGE delivers higher bit-rates per radio channel and increases capacity and performance.
Advantages
● It has higher speed.
● It is an "always-on" connection.
● It is more reliable and efficient.
● It is cost efficient.
Disadvantages
● It consumes more battery.
● Hardware needs to be upgraded.
Fig. GPRS and EDGE architecture
General Packet Radio Service (GPRS): The first big step in the move to 3G happened through the launch of GPRS. Cellular services combined with GPRS resulted in 2.5G. GPRS was capable of giving data rates ranging from 56 kbps up to a maximum of 114 kbps. This can be used for services like Wireless Application Protocol (WAP) access, Multimedia Messaging Service (MMS), Short Message Service (SMS) and internet communication services like World Wide Web access and email. Data transfer over GPRS is usually charged for each megabyte of traffic transferred, while data communication via the usual circuit switching is charged per minute of connection time, regardless of whether the consumer actually used the capacity or was idle. GPRS is a best-effort packet-switched service, compared to circuit switching, where a given Quality of Service (QoS) is guaranteed during the connection for non-mobile users. It gives medium-speed data transfer via the use of idle Time Division Multiple Access (TDMA) channels.

2.2 Enhanced Data rates for GSM Evolution (EDGE): Further enhancement to the GSM network is provided by EDGE technology, which provides up to three times the data capacity of GPRS. Using EDGE, operators can handle three times more subscribers than GPRS, triple their data rate per subscriber, or add extra capacity to their voice communication. EDGE allows the delivery of advanced mobile services such as the downloading of video and music clips, multimedia messaging, high-speed Internet access and e-mail on the move [14]. EDGE is essentially a GSM GPRS radio interface with a set of enhancements to support higher peak data rates and better data throughput than GPRS. According to the US operators intending to use EDGE, it is mainly intended to provide 3G-type data services in a combined GSM and TIA/EIA-136 footprint in all of the existing 800/900/1800/1900 MHz frequency bands [16]. GPRS networks changed significantly into EDGE networks through the introduction of 8PSK encoding. Enhanced Data rates for GSM Evolution (EDGE), also known as IMT Single Carrier (IMT-SC) or Enhanced GPRS, is a backward-compatible digital mobile phone technology allowing improved data transmission rates as an extension over standard GSM. EDGE can be counted as a 3G radio technology, included in ITU's 3G description, but is frequently referred to as 2.75G. It was launched on GSM networks, starting in 2003, by Cingular. 3GPP standardized EDGE as it belonged to the GSM family. The specification achieves higher data rates by switching to more sophisticated coding methods, particularly 8PSK, inside the GSM timeslots [1]. The figure shows the GPRS and EDGE architecture. GPRS is a 2.5G solution that provides a medium-speed packet data service for a wireless network.

General Packet Radio Service / Enhanced Data rates for Global Evolution
GSM is a circuit-switched network; ideal for the delivery of voice but with limitations for sending data.
However, the standard for GSM was designed to evolve and in 2000 the introduction of General Packet
Radio Service (GPRS) added packet-switched functionality, kick-starting the delivery of the Internet on
mobile handsets.

GPRS adds packet-switched functionality to GSM networks


Based on specifications in Release 97, GPRS typically reached speeds of 40Kbps in the downlink and
14Kbps in the uplink by aggregating GSM time slots into one bearer. Enhancements in Releases R’98 and
R’99 meant that GPRS could theoretically reach downlink speeds of up to 171Kbps.
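The Release 97/99 figures quoted above come from aggregating time slots at a given per-slot coding-scheme rate. The sketch below is hedged: the CS-1 to CS-4 per-slot rates are commonly quoted GPRS values assumed here, not given in the text.

```python
# Approximate GPRS per-slot data rates (kbps) for coding schemes CS-1..CS-4
# (assumed standard values; real throughput depends on radio conditions and slot allocation).
CS_RATES_KBPS = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}

def gprs_rate_kbps(coding_scheme, timeslots):
    """Peak GPRS rate = per-slot rate x number of aggregated time slots."""
    return CS_RATES_KBPS[coding_scheme] * timeslots

print(gprs_rate_kbps("CS-2", 3))   # ~40 kbps, a typical early deployment
print(gprs_rate_kbps("CS-4", 8))   # ~171 kbps, the theoretical maximum
```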

EDGE… almost 3G
The next advance in GSM radio access technology was EDGE (Enhanced Data rates for Global Evolution), or Enhanced GPRS.
With a new modulation technique yielding a three-fold increase in bit rate (8PSK replacing GMSK) and
new channel coding for spectral efficiency, EDGE was successfully introduced without disrupting the
frequency re-use plans of existing GSM deployments.
The increase in data speeds to 384Kbps positioned EDGE as an early foretaste of 3G, although it was labeled 2.75G by industry watchers.
EDGE+
Ongoing standards work in 3GPP has delivered EDGE Evolution, which is designed to complement
high-speed packet access (HSPA) coverage.
EDGE Evolution has:
●​ Improved spectral efficiency with reduced latencies down to 100ms
●​ Increased throughput speeds to 1.3Mbps in the downlink and 653Kbps in the uplink
GPRS (Release 97) and EDGE (Release 98) are largely maintained in the RAN6 Working Group of 3GPP,
which succeeded TSG GERAN when it was closed in 2016.
Reading should start with the 44 series and 45 series of the 3GPP specifications.

UMTS:
UMTS, or the Universal Mobile Telecommunications System, is the 3G successor to the GSM family of standards, including GPRS and EDGE. 3G UMTS uses a completely different radio interface based on the use of Direct Sequence Spread Spectrum CDMA (Code Division Multiple Access). Although 3G UMTS uses a completely different radio access standard, the core network is the same as that used for GPRS and EDGE, carrying separate circuit-switched voice and packet data.

UMTS uses a wideband version of CDMA occupying a 5 MHz wide channel. Being wider than its competitor CDMA2000, which only used a 1.25 MHz channel, the modulation scheme was known as wideband CDMA or WCDMA/W-CDMA. This name was often used to refer to the complete system. It is a form of telecommunication used for wireless reception and transmission. It is an improvement in speed over the older 2G standard and can speed up data transmission between devices and servers.
UMTS Applications
●​ Streaming / Download (Video, Audio)
●​ Videoconferences.
●​ Fast Internet / Intranet.
●​ Mobile E-Commerce (M-Commerce)
●​ Remote Login
●​ Background Class applications
●​ Multimedia-Messaging, E-Mail
●​ FTP Access
●​ Mobile Entertainment (Games)
Features of UMTS
● UMTS is a component of the IMT-2000 standard of the International Telecommunication Union (ITU), developed by 3GPP.
● It uses the wideband code division multiple access (W-CDMA) air interface.
● It provides transmission of text, digitized voice, video and multimedia.
● It provides high bandwidth to mobile operators.
● It provides a high data rate of 2 Mbps.
● For High-Speed Downlink Packet Access (HSDPA) handsets, the data rate is as high as 7.2 Mbps in the downlink connection.
● It is also known as Freedom of Mobile Multimedia Access (FOMA).
Advantages of UMTS
● UMTS is a successor to 2G-based GSM technologies, including GPRS and EDGE, and has gained a third name, 3GSM, since it is a 3G migration path for GSM.
● Supports 2 Mbit/s data rates.
● Higher data rates at lower incremental cost.
● Benefits of automatic international roaming plus integral security and billing functions, allowing operators to migrate from 2G to 3G while retaining many of their existing back-office systems.
● Gives operators the flexibility to introduce new multimedia services to business customers and consumers.
● This not only gives the user a more capable phone but also translates into higher revenues for the operator.
Disadvantages of UMTS
● It is more expensive than GSM.
● The Universal Mobile Telecommunication System has a poorer video experience.
● The Universal Mobile Telecommunication System is still not broadband.
Fig. UMTS architecture
As shown in the figure, there are three main components in the UMTS network architecture. The User Equipment (UE) is composed of the Mobile Equipment (ME) and the USIM. The Radio Access Network is composed of the NodeB and the RNC. The Core Network is composed of circuit-switched and packet-switched functional modules. For circuit-switched (CS) operation, the MSC and GMSC along with database modules such as the VLR and HLR are available. For packet-switched (PS) operation, the SGSN and GGSN serve the purpose. The GMSC is connected with the PSTN/ISDN in the CS case; the GGSN is connected with a Packet Data Network (PDN) in the PS case. The interfaces between these entities are summarized below.
Uu interface between UE and NodeB
Iub interface between NodeB and RNC
Iur interface between RNC and RNC
Iu-CS interface between RNC and MSC
Iu-PS interface between RNC and SGSN

3G UMTS network constituents


The UMTS network architecture can be divided into three main elements:
● User Equipment (UE): The User Equipment or UE is the name given to what was previously termed the mobile, or cellphone. The new name was chosen because of the considerably greater functionality that the UE could have. It could be anything from a mobile phone used for talking to a data terminal attached to a computer with no voice capability.
● Radio Network Subsystem (RNS): The RNS, also known as the UMTS Radio Access Network (UTRAN), is the equivalent of the previous Base Station Subsystem (BSS) in GSM. It provides and manages the air interface for the overall network.
● Core Network: The core network provides all the central processing and management for the system. It is the equivalent of the GSM Network Switching Subsystem (NSS).
The core network is then the overall entity that interfaces to external networks, including the public phone network and other cellular telecommunications networks.
The main UMTS network blocks
User Equipment, UE
The User Equipment or UE is a major element of the overall 3G UMTS network architecture. It forms the final interface with the user. In view of the far greater number of applications and facilities that it can perform, the decision was made to call it user equipment rather than a mobile. However, it is essentially the handset (in the broadest terminology); having access to much higher speed data communications, it can be much more versatile, containing many more applications. It consists of a variety of different elements including RF circuitry, processing, antenna, battery, etc.
There are a number of elements within the UE that can be described separately:
● UE RF circuitry: The RF areas handle all elements of the signal, both for the receiver and for the transmitter. One of the major challenges for the RF power amplifier was to reduce the power consumption. The form of modulation used for W-CDMA requires the use of a linear amplifier. These inherently take more current than the non-linear amplifiers which can be used for the form of modulation used on GSM. Accordingly, to maintain battery life, measures were introduced into many of the designs to ensure optimum efficiency.
● Baseband processing: The baseband signal processing consists mainly of digital circuitry. This is considerably more complicated than that used in phones of previous generations. Again, this has been optimised to reduce the current consumption as far as possible.
● Battery: While current consumption has been minimised as far as possible within the circuitry of the phone, there has been an increase in current drain on the battery. With users expecting the same lifetime between charges as experienced on the previous generation of phones, this has necessitated the use of new and improved battery technology. Now Lithium Ion (Li-ion) batteries are used. These allow phones to remain small and relatively light while still retaining or even improving the overall life between charges.
●​ Universal Subscriber Identity Module, USIM: The UE also contains a SIM card, although
in the case of UMTS it is termed a USIM (Universal Subscriber Identity Module). This is a
more advanced version of the SIM card used in GSM and other systems, but embodies the
same types of information. It contains the International Mobile Subscriber Identity number
(IMSI) as well as the Mobile Station International ISDN Number (MSISDN). Other
information that the USIM holds includes the preferred language to enable the correct
language information to be displayed, especially when roaming, and a list of preferred and
prohibited Public Land Mobile Networks (PLMN).

The USIM also contains a short message storage area that allows messages to stay with the
user even when the phone is changed. Similarly "phone book" numbers and call information
of the numbers of incoming and outgoing calls are stored.
The UE can take a variety of forms, although the most common format is still a version of a "mobile
phone" although having many data capabilities. Other broadband dongles are also being widely used.

3G UMTS Radio Network Subsystem


This is the section of the 3G UMTS / WCDMA network that interfaces to both the UE and the core network. The overall radio access network, i.e. collectively all the Radio Network Subsystems, is known as the UTRAN (UMTS Radio Access Network).

3G UMTS Core Network


The 3G UMTS core network architecture is a migration of that used for GSM with further elements
overlaid to enable the additional functionality demanded by UMTS.
In view of the different ways in which data may be carried, the UMTS core network may be split into two
different areas:
●​ Circuit switched elements: These elements are primarily based on the GSM network entities
and carry data in a circuit switched manner, i.e. a permanent channel for the duration of the
call.
●​ Packet switched elements: These network entities are designed to carry packet data. This
enables much higher network usage as the capacity can be shared and data is carried as
packets which are routed according to their destination.
Some network elements, particularly those associated with registration, are shared by both domains and operate in the same way that they did with GSM.
UMTS Network Architecture Overview

Circuit switched elements


The circuit switched elements of the UMTS core network architecture include the following network entities:
●​ Mobile switching centre (MSC):​ This is essentially the same as that within GSM, and it
manages the circuit switched calls under way.
●​ Gateway MSC (GMSC): This is effectively the interface to the external networks.

Packet switched elements


The packet switched elements of the 3G UMTS core network architecture include the following network
entities:
●​ Serving GPRS Support Node (SGSN): As the name implies, this entity was first developed
when GPRS was introduced, and its use has been carried over into the UMTS network
architecture. The SGSN provides a number of functions within the UMTS network
architecture.
○ Mobility management: When a UE attaches to the Packet Switched domain of the UMTS Core Network, the SGSN generates MM (mobility management) information based on the mobile's current location.
○​ Session management: The SGSN manages the data sessions providing the required
quality of service and also managing what are termed the PDP (Packet data
Protocol) contexts, i.e. the pipes over which the data is sent.
○​ Interaction with other areas of the network: The SGSN is able to manage its
elements within the network only by communicating with other areas of the
network, e.g. MSC and other circuit switched areas.
○ Billing: The SGSN is also responsible for billing. It achieves this by monitoring the flow of user data across the GPRS network. CDRs (Call Detail Records) are generated by the SGSN before being transferred to the charging entities (Charging Gateway Function, CGF).
●​ Gateway GPRS Support Node (GGSN): Like the SGSN, this entity was also first introduced
into the GPRS network. The Gateway GPRS Support Node (GGSN) is the central element
within the UMTS packet switched network. It handles inter-working between the UMTS
packet switched network and external packet switched networks, and can be considered as a
very sophisticated router. In operation, when the GGSN receives data addressed to a specific
user, it checks if the user is active and then forwards the data to the SGSN serving the
particular UE.
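
The forwarding behaviour described for the GGSN can be pictured with the minimal sketch below; the class and function names are hypothetical and the logic is reduced to a single table lookup, so this is only an illustration of the idea, not a real GGSN implementation.

```python
# Minimal sketch of the GGSN downlink forwarding decision described above.
# All names (UserSession, forward_downlink) are illustrative, not a real GGSN API.

class UserSession:
    def __init__(self, imsi, serving_sgsn, active=True):
        self.imsi = imsi                   # subscriber identity
        self.serving_sgsn = serving_sgsn   # SGSN currently serving this UE
        self.active = active               # is a PDP context active?

sessions = {}  # imsi -> UserSession, populated when PDP contexts are created

def forward_downlink(imsi, packet):
    """Forward data addressed to a user towards the SGSN serving that UE."""
    session = sessions.get(imsi)
    if session is None or not session.active:
        return None  # user unknown or inactive: drop (a real network could page the UE)
    return (session.serving_sgsn, packet)  # tunnel the packet to the serving SGSN

# Example: create a session and route a packet.
sessions["001010123456789"] = UserSession("001010123456789", "sgsn-1.example.net")
print(forward_downlink("001010123456789", b"payload"))
```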
Shared elements
The shared elements of the 3G UMTS core network architecture include the following network entities:
●​ Home location register (HLR): This database contains all the administrative information
about each subscriber along with their last known location. In this way, the UMTS network is
able to route calls to the relevant RNC / Node B. When a user switches on their UE, it
registers with the network and from this it is possible to determine which Node B it
communicates with so that incoming calls can be routed appropriately. Even when the UE is
not active (but switched on) it re-registers periodically to ensure that the network (HLR) remains
aware of its current or last known location.
●​ Equipment identity register (EIR): The EIR is the entity that decides whether a given UE
equipment may be allowed onto the network. Each UE equipment has a number known as the
International Mobile Equipment Identity. This number, as mentioned above, is installed in the
equipment and is checked by the network during registration.
●​ Authentication centre (AuC) : The AuC is a protected database that contains the secret key
also contained in the user's USIM card.

CDMA2000
CDMA2000 is a code division multiple access (CDMA) version of IMT-2000 specifications developed by
International Telecommunication Union (ITU).
It includes a group of standards for voice and data services −
●​ Voice − CDMA2000 1xRTT, 1X Advanced
●​ Data − CDMA2000 1xEV-DO (Evolution-Data Optimized)
Features
●​ CDMA2000 is a family of technology for 3G mobile cellular communications for transmission of
voice, data and signals.
●​ It supports mobile communications at speeds between 144Kbps and 2Mbps.
●​ It has packet core network (PCN) for high speed secured delivery of data packets.
●​ It applies multicarrier modulation techniques to 3G networks. This gives higher data rate, greater
bandwidth and better voice quality. It is also backward compatible with older CDMA versions.
●​ It has multi-mode, multi-band roaming features.
Fig. CDMA 2000

The Radio Access Network (RAN) consists of multiple base stations, called Base station Transceiver
Systems (BTS). Each BTS is connected to a Base Station Controller (BSC). The Selection and
Distribution Unit (SDU) makes it possible for a BSC to connect to the Core Network (CN). Just like
UMTS, both CS and PS service domains are supported by CDMA2000.
The PS traffic is distributed by the SDU via interfaces A8 and A9 to Packet Control Function
(PCF) and then to the Packet Data Serving Node (PDSN). The A8 interface carries user data and the A9
interface carries signaling between the SDU and the PCF.
Similarly, data and signaling is supported by A10 and A11 interfaces between PCF and PDSN. A
PDSN connects to one or more BSCs, which establishes, maintains and terminates link layer sessions to
MS. PDSN supports compression and packet filtering on the basis of Point to Point Protocol (PPP) whose
parameters can be negotiated between PDSN and Mobile Node (MN).
PDSN is also associated with an Authentication, Authorization and Accounting (AAA) server in
the service provider network.
CDMA2000 also supports four QoS classes (Conversational, Interactive, Streaming and
Background) in the same manner as UMTS, conversational being the most delay sensitive traffic, while
background being the least delay sensitive traffic.

A DiffServ Domain implementing LLQ
The DiffServ domain that we have selected for our example CDMA2000 network consists of DiffServ
routers implemented with Low Latency Queueing (LLQ).
LLQ is evolved from Class Based Weighted Fair Queueing (CBWFQ). Therefore, we briefly
explain CBWFQ before moving on to LLQ. CBWFQ is a combination of Custom Queueing (CQ) and
Weighted Fair Queueing (WFQ).
With CBWFQ as with CQ, we can specify the specific number of bytes for each queue to reserve
a minimum bandwidth and we also have the option of reserving bandwidth in terms of actual percentage
of traffic. CBWFQ behaves like WFQ in that CBWFQ can use WFQ inside one particular queue (called
class default queue), but it differs from WFQ in that it does not keep track of individual flows for all traffic.
CBWFQ can classify packets on any marking scheme, e.g. DSCP, MPLS etc. The drop policy available is
tail drop or WRED, configurable per queue. The maximum number of queues available at each output
interface is 64 (one is class default queue) each queue having a maximum length of 64 packets. However,
it is possible to configure the number of queues according to the specific requirement at each output
interface. The output scheduler simply serves the configured number of queues and skips other queues.
The scheduling inside each queue is FIFO except class default queue, where one can select FIFO or
WFQ. LLQ is not a separate queueing tool, but rather an option of CBWFQ applied to one or more
classes. CBWFQ treats these classes as strict priority queues. CBWFQ always services packets in these
classes if a packet is waiting, just as PQ does for the highest priority queue. However, an important aspect
of LLQ is that it serves these high priority queues only within the policed bandwidth. A single policy map
may contain one low-latency queue or more than one; the queueing behaviour does not differ between the
two cases. The scheduler always serves low-latency queues before the other queues, but it does not reorder
packets between the various low-latency queues, which means it serves them in FIFO order.
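
The scheduling behaviour described above can be sketched as follows. This is a toy model only: the queue names, weights and byte-budget policer are assumptions made for illustration and do not reflect any particular vendor's CBWFQ/LLQ implementation.

```python
from collections import deque

class LLQScheduler:
    """Toy model of LLQ: strict-priority (low-latency) queues are served first,
    within a policed byte budget per round; the remaining classes then share the
    rest of the round in proportion to their configured weights (CBWFQ-like)."""

    def __init__(self, priority_limit_bytes, weights):
        self.priority_queues = [deque(), deque()]            # e.g. EF and AF41 classes
        self.class_queues = {name: deque() for name in weights}  # weighted classes
        self.weights = weights                                # e.g. {"AF31": 3, "BE": 1}
        self.priority_limit = priority_limit_bytes            # policer for the LLQs

    def enqueue_priority(self, idx, pkt):
        self.priority_queues[idx].append(pkt)

    def enqueue_class(self, name, pkt):
        self.class_queues[name].append(pkt)

    def schedule_round(self, round_bytes):
        sent = []
        budget = round_bytes
        prio_budget = min(self.priority_limit, budget)
        # 1. Serve the low-latency queues first, FIFO, up to the policed limit.
        for q in self.priority_queues:
            while q and len(q[0]) <= prio_budget:
                pkt = q.popleft()
                prio_budget -= len(pkt)
                budget -= len(pkt)
                sent.append(pkt)
        # 2. Share what is left of the round among the weighted classes.
        total_w = sum(self.weights.values())
        for name, q in self.class_queues.items():
            share = budget * self.weights[name] // total_w
            while q and len(q[0]) <= share:
                pkt = q.popleft()
                share -= len(pkt)
                sent.append(pkt)
        return sent
```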
C. CDMA2000-to-IP QoS Mapping
In CDMA2000, QoS is based on DiffServ policies from AAA
profiles and parameters from HLR. For a mobile station there can be multiple DiffServ QoS profiles from
PDSN into the IP network. If the mobile station marks its traffic to PDSN with a DiffServ class indicator,
the PDSN can accept this classification or it can overwrite the marking with another
DiffServ class based on the AAA profile. If a mobile station does not mark its data traffic to PDSN, the
PDSN may optionally classify and mark the traffic with a suitable DiffServ class based on the AAA
profile [44]. In our example, we perform mapping as follows. Because we have selected a DiffServ
domain in which routers are implemented with low-latency queueing, we configure 4 queues out of 64
available queues at the output interface. We use two low-latency queues and two high-latency queues.
We mark the traffic according to the marking rules of DiffServ domain based on the priority. We mark
conversational traffic with DSCP EF and assign it to the first low-latency queue (queue no. 1). The
interactive traffic is marked with DSCP AF41 and we put it into queue number 2, which is also a low
latency queue. We mark the streaming traffic with DSCP AF31 and put it into the 3rd queue and finally,
we mark background traffic with DSCP BE and assign it to the 4th queue.
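
The mapping described in this paragraph can be summarised in a small lookup table. The dictionary below simply restates the example assignment above; it is not part of any CDMA2000 or router configuration syntax.

```python
# Mapping used in the example above: CDMA2000 traffic class -> DSCP marking -> queue.
# Queues 1 and 2 are the low-latency (strict priority) queues; 3 and 4 are weighted.
QOS_MAP = {
    "conversational": {"dscp": "EF",   "queue": 1, "low_latency": True},
    "interactive":    {"dscp": "AF41", "queue": 2, "low_latency": True},
    "streaming":      {"dscp": "AF31", "queue": 3, "low_latency": False},
    "background":     {"dscp": "BE",   "queue": 4, "low_latency": False},
}

def classify(traffic_class):
    """Return the DSCP marking and output queue for a CDMA2000 traffic class."""
    return QOS_MAP[traffic_class]

print(classify("conversational"))   # {'dscp': 'EF', 'queue': 1, 'low_latency': True}
```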

Fig. Scheduler logic of Low Latency Queueing

D. Mechanism to Build the Function Matrix for CDMA2000
In this section, we build the function matrix for a CDMA2000 network based on a DiffServ domain with
low-latency queueing. We will calculate end-to-end delay and throughput for each traffic class passing
through this domain as in the UMTS case using the same assumptions.

LTE Network Architecture

The high-level network architecture of LTE comprises the following three main components:
●​ The User Equipment (UE).
●​ The Evolved UMTS Terrestrial Radio Access Network (E-UTRAN).
●​ The Evolved Packet Core (EPC).
The evolved packet core communicates with packet data networks in the outside world such as the
internet, private corporate networks or the IP multimedia subsystem. The interfaces between the different
parts of the system are denoted Uu, S1 and SGi as shown below:

Fig. LTE network architecture


The User Equipment (UE)
The internal architecture of the user equipment for LTE is identical to the one used by UMTS and GSM,
which is actually a Mobile Equipment (ME). The mobile equipment comprises the following
important modules:
●​ Mobile Termination (MT) : This handles all the communication functions.
●​ Terminal Equipment (TE) : This terminates the data streams.
●​ Universal Integrated Circuit Card (UICC) : This is also known as the SIM card for LTE
equipment. It runs an application known as the Universal Subscriber Identity Module (USIM).
A USIM stores user-specific data, very similar to a 3G SIM card. This keeps information about the user's
phone number, home network identity and security keys etc.
The E-UTRAN (The access network)
The architecture of evolved UMTS Terrestrial Radio Access Network (E-UTRAN) has been illustrated
below.
The E-UTRAN handles the radio communications between the mobile and the evolved packet core and
just has one component, the evolved base stations, called eNodeB or eNB. Each eNB is a base station
that controls the mobiles in one or more cells. The base station that is communicating with a mobile is
known as its serving eNB.
LTE Mobile communicates with just one base station and one cell at a time and there are following two
main functions supported by eNB:
●	The eNB sends and receives radio transmissions to all the mobiles using the analogue and digital
signal processing functions of the LTE air interface.
●​ The eNB controls the low-level operation of all its mobiles, by sending them signalling messages
such as handover commands.
Each eNB connects with the EPC by means of the S1 interface and it can also be connected to nearby
base stations by the X2 interface, which is mainly used for signalling and packet forwarding during
handover.
A home eNB (HeNB) is a base station that has been purchased by a user to provide femtocell coverage
within the home. A home eNB belongs to a closed subscriber group (CSG) and can only be accessed by
mobiles with a USIM that also belongs to the closed subscriber group.
The Evolved Packet Core (EPC) (The core network)
The architecture of the Evolved Packet Core (EPC) has been illustrated below. There are a few more
components which have not been shown in the diagram to keep it simple, such as the Earthquake and
Tsunami Warning System (ETWS), the Equipment Identity Register (EIR) and the Policy Control and
Charging Rules Function (PCRF).

Below is a brief description of each of the components shown in the above architecture:
●​ The Home Subscriber Server (HSS) component has been carried forward from UMTS and GSM
and is a central database that contains information about all the network operator's subscribers.
●	The Packet Data Network (PDN) Gateway (P-GW) communicates with the outside world, i.e.
packet data networks (PDN), using the SGi interface. Each packet data network is identified by an
access point name (APN). The PDN gateway has the same role as the GPRS support node
(GGSN) and the serving GPRS support node (SGSN) with UMTS and GSM.
●​ The serving gateway (S-GW) acts as a router, and forwards data between the base station and the
PDN gateway.
●	The mobility management entity (MME) controls the high-level operation of the mobile by
means of signalling messages and by interacting with the Home Subscriber Server (HSS).
●​ The Policy Control and Charging Rules Function (PCRF) is a component which is not shown in
the above diagram but it is responsible for policy control decision-making, as well as for
controlling the flow-based charging functionalities in the Policy Control Enforcement Function
(PCEF), which resides in the P-GW.
The interface between the serving and PDN gateways is known as S5/S8. This has two slightly different
implementations, namely S5 if the two devices are in the same network, and S8 if they are in different
networks.
Functional split between the E-UTRAN and the EPC
Following diagram shows the functional split between the E-UTRAN and the EPC for an LTE network:

2G/3G Versus LTE


The following table compares important network elements & signaling protocols used in 2G/3G
and LTE.
2G/3G                        LTE

GERAN and UTRAN              E-UTRAN
SGSN/PDSN-FA                 S-GW
GGSN/PDSN-HA                 PDN-GW
HLR/AAA                      HSS
VLR                          MME
SS7-MAP/ANSI-41/RADIUS       Diameter
GTPc-v0 and v1               GTPc-v2
MIP                          PMIP

LoRa and LoRaWAN:


LoRa is the physical layer or the wireless modulation utilized to create the long range communication
link. Many legacy wireless systems use frequency shifting keying (FSK) modulation as the physical layer
because it is a very efficient modulation for achieving low power. LoRa is based on chirp spread spectrum
modulation, which maintains the same low power characteristics as FSK modulation but significantly
increases the communication range.
Chirp spread spectrum has been used in military and space communication for decades due to the long
communication distances that can be achieved and robustness to interference, but LoRa® is the first low
cost implementation for commercial usage.

Long Range (LoRa®)
The advantage of LoRa® is in the technology’s long range capability.
A single gateway or base station can cover entire cities or hundreds of square kilometers. Range highly
depends on the environment or obstructions in a given location, but LoRa® and LoRaWAN™ have a link
budget greater than any other standardized communication technology.
The link budget, typically given in decibels (dB), is the primary factor in determining the range in a given
environment. Below are the coverage maps from the Proximus network deployed in Belgium. With a
minimal amount of infrastructure, entire countries can easily be covered.
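
To illustrate how a link budget relates to range, the sketch below inverts the standard free-space path loss formula. The 154 dB link budget and 868 MHz frequency are assumed example values only, and real terrestrial ranges are far lower than the ideal free-space figure because of obstructions and fading.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (no obstructions, no fading)."""
    return 32.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

def max_free_space_range_km(link_budget_db, freq_mhz):
    """Distance at which free-space path loss equals the available link budget."""
    return 10 ** ((link_budget_db - 32.45 - 20 * math.log10(freq_mhz)) / 20)

# Assumed numbers for illustration only: a 154 dB link budget at 868 MHz.
# In practice, buildings and terrain reduce range to a few km or tens of km.
print(round(max_free_space_range_km(154, 868), 1), "km in ideal free space")
```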
LoRaWAN:
LoRaWAN™ defines the communication protocol and system architecture for the network while the
LoRa® physical layer enables the long-range communication link. The protocol and network architecture
have the most influence in determining the battery lifetime of a node, the network capacity, the quality of
service, the security, and the variety of applications served by the network.

Network Architecture
Many existing deployed networks utilize a mesh network architecture. In a mesh network, the individual
end-nodes forward the information of other nodes to increase the communication range and cell size of
the network. While this increases the range, it also adds complexity, reduces network capacity, and
reduces battery lifetime as nodes receive and forward information from other nodes that is likely
irrelevant for them. Long range star architecture makes the most sense for preserving battery lifetime
when long-range connectivity can be achieved.
In a LoRaWAN™ network nodes are not associated with a specific gateway. Instead, data transmitted by
a node is typically received by multiple gateways. Each gateway will forward the received packet from
the end-node to the cloud-based network server via some backhaul (either cellular, Ethernet, satellite, or
Wi-Fi).
The intelligence and complexity is pushed to the network server, which manages the network and will filter
redundant received packets, perform security checks, schedule acknowledgments through the optimal
gateway, and perform adaptive data rate, etc. If a node is mobile or moving there is no handover needed from
gateway to gateway, which is a critical feature for enabling asset tracking applications, a major target
application vertical for IoT.
Battery Lifetime
The nodes in a LoRaWAN™ network are asynchronous and communicate when they have data ready to
send whether event-driven or scheduled. This type of protocol is typically referred to as the Aloha
method. In a mesh network or with a synchronous network, such as cellular, the nodes frequently have to
‘wake up’ to synchronize with the network and check for messages. This synchronization consumes
significant energy and is the number one driver of battery lifetime reduction. In a recent study and
comparison done by GSMA of the various technologies addressing the LPWAN space, LoRaWAN™
showed a 3 to 5 times advantage compared to all other technology options.
Network Capacity
In order to make a long range star network viable, the gateway must have a very high capacity or
capability to receive messages from a very high volume of nodes. High network capacity in a
LoRaWAN™ network is achieved by utilizing adaptive data rate and by using a multichannel
multi-modem transceiver in the gateway so that simultaneous messages on multiple channels can be
received. The critical factors affecting capacity are the number of concurrent channels, data rate (time on
air), the payload length, and how often nodes transmit.
Since LoRa® is a spread spectrum based modulation, the signals are practically orthogonal to each other
when different spreading factors are utilized. As the spreading factor changes, the effective data rate also
changes.
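
The relationship between spreading factor, symbol time and data rate can be shown with the standard LoRa relations (symbol time = 2^SF / BW, bit rate = SF x (BW / 2^SF) x coding rate). The 125 kHz bandwidth and 4/5 coding rate below are assumed example values.

```python
# Sketch of how the LoRa spreading factor (SF) sets symbol time and data rate.
def lora_symbol_time(sf, bw_hz):
    return (2 ** sf) / bw_hz                       # seconds per symbol

def lora_bit_rate(sf, bw_hz, coding_rate=4 / 5):
    return sf * (bw_hz / 2 ** sf) * coding_rate    # bits per second

for sf in range(7, 13):                            # SF7 .. SF12 on a 125 kHz channel
    print(f"SF{sf}: Tsym = {lora_symbol_time(sf, 125e3) * 1e3:.2f} ms, "
          f"rate = {lora_bit_rate(sf, 125e3) / 1e3:.2f} kb/s")
```

The output shows why a higher data rate (lower spreading factor) shortens the time on air: SF7 gives roughly 5.5 kb/s while SF12 gives under 0.3 kb/s on the same channel.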
The gateway takes advantage of this property by being able to receive multiple different data rates on the
same channel at the same time. If a node has a good link and is close to a gateway, there is no reason for it
to always use the lowest data rate and fill up the available spectrum longer than it needs to.
By shifting the data rate higher, the time on air is shortened opening up more potential space for other
nodes to transmit. Adaptive data rate also optimizes the battery lifetime of a node. In order to make
adaptive data rate work, symmetrical up link and down link is required with sufficient downlink capacity.
These features enable a LoRaWAN™
network to have a very high capacity and make the network scalable.
A network can be deployed with a minimal amount of infrastructure, and as capacity is needed, more
gateways can be added, shifting up the data rates, reducing the amount of overhearing to other gateways,
and scaling the capacity by 6-8x.
Other LPWAN alternatives do not have the scalability of LoRaWAN™ due to technology trade-offs,
which limit downlink capacity or make the downlink range asymmetrical to the uplink range.
Device Classes – Not All Nodes Are Created Equal
End-devices serve different applications and have different requirements. In order to optimize a variety of
end application profiles, LoRaWAN™ utilizes different device classes. The device classes trade off
network downlink communication latency versus battery lifetime. In a control or actuator-type
application, the downlink communication latency is an important factor.
Security
It is extremely important for any LPWAN to incorporate security. LoRaWAN™ utilizes two layers of
security: one for the network and one for the application.
The network security ensures authenticity of the node in the network while the application layer of
security ensures the network operator does not have access to the end user’s application data. AES
encryption is used with the key exchange utilizing an IEEE EUI64 identifier.
There are trade-offs in every technology choice but the LoRaWAN™ features in network architecture,
device classes, security, scalability for capacity, and optimization for mobility address the widest variety
of potential IoT applications.
Module No. 3: Wireless Metropolitan and Local Area Networks

IEEE 802.16(WiMax)- Mesh mode, Physical and MAC layer,IEEE


802.11(Wi-Fi)- Architecture, Protocol Stack, Enhancements and
Applications
Wireless means transmitting signals using radio waves as the medium instead of wires. Wireless
technologies are used for tasks as simple as switching off the television or as complex as supplying the
sales force with information from an automated enterprise application while in the field. Now cordless
keyboards and mice, PDAs, pagers and digital and cellular
phones have become part of our daily life.

Some of the inherent characteristics of wireless communications systems which make them attractive for
users are given below −
●	Mobility − A wireless communications system allows users to access information beyond their
desk and conduct business from anywhere without a wired connection.
●	Reachability − Wireless communication systems enable people to stay connected and be
reachable, regardless of the location they are operating from.
●	Simplicity − Wireless communication systems are easy and fast to deploy in comparison with
cabled networks. The initial setup cost can be a bit high, but other advantages outweigh that cost.
●	Maintainability − In a wireless system, you do not have to spend much cost and time to maintain
the network setup.
●	Roaming Services − Using a wireless network system, you can provide service anywhere,
anytime, including in trains, buses, aeroplanes etc.
●	New Services − Wireless communication systems provide various smart services like SMS and MMS.
Wireless Network Topologies
There are basically three ways to set up a wireless network −
Point-to-point bridge
As you know, a bridge is used to connect two networks. A point-to-point bridge interconnects two
buildings having different networks. For example, a wireless LAN bridge can interface with an Ethernet
network directly to a particular access point (as shown in the following image).

Point-to-multipoint bridge
This topology is used to connect three or more LANs that may be located on different floors in a building
or across buildings(as shown in the following image).

Mesh or ad hoc network


This network is an independent local area network that is not connected to a wired infrastructure and in
which all stations are connected directly to one another(as shown in the following image).

Wireless Technologies
Wireless technologies can be classified in different ways depending on their range. Each wireless
technology is designed to serve a specific usage segment. The requirements for each usage segment are
based on a variety of variables, including Bandwidth needs, Distance needs and Power.
Wireless Wide Area Network (WWAN)
This network enables you to access the Internet via a wireless wide area network (WWAN) access card
and a PDA or laptop.
These networks provide a very fast data speed compared with the data rates of mobile
telecommunications technology, and their range is also extensive. Cellular and mobile networks based on
CDMA and GSM are good examples of WWAN.
Wireless Personal Area Network (WPAN)
These networks are very similar to WWAN except their range is very limited.
Wireless Local Area Network (WLAN)
This network enables you to access the Internet in localized hotspots via a wireless local area network
(WLAN) access card and a PDA or laptop.
It is a type of local area network that uses high-frequency radio waves rather than wires to
communicate between nodes.
These networks provide a very fast data speed compared with the data rates of mobile
telecommunications technology, and their range is very limited. Wi-Fi is the most widespread and popular
example of WLAN technology.
Wireless Metropolitan Area Network (WMAN)
This network enables you to access the Internet and multimedia streaming services via a wireless region
area network (WRAN).
These networks provide a very fast data speed compared with the data rates of mobile
telecommunication technology as well as other wireless network, and their range is also extensive.
Issues with Wireless Networks
There are following three major issues with Wireless Networks.
●​ Quality of Service (QoS) − One of the primary concerns about wireless
data delivery is that, unlike the Internet through wired services,
QoS is inadequate. Lost packets and atmospheric interference are
recurring problems of the wireless protocols.
●​ Security Risk − This is another major issue with a data transfer over a wireless network.
Basic network security mechanisms like the service set identifier (SSID) and Wireless
Equivalency Privacy (WEP); these measures may be adequate for residences and small
businesses, but they are inadequate for the entities that require stronger security.
●	Reachable Range − Normally, a wireless network offers a range of about
100 meters or less. Range is a function of antenna design and power.
Nowadays the range of wireless is extended to tens of miles, so this
should not be an issue any more.
Wireless Broadband Access (WBA)
Broadband wireless is a technology that promises high-speed connection over the air. It uses radio waves
to transmit and receive data directly to and from the potential users whenever they want it. Technologies
such as 3G, Wi-Fi, WiMAX and UWB work together to meet unique customer needs.
WBA is a point-to-multipoint system which is made up of base station and subscriber equipment.
Instead of using the physical connection between the base station and the subscriber, the base station uses
an outdoor antenna to send and receive high-speed data and voice-to-subscriber equipment.
WBA offers an effective, complementary solution to wireline broadband, which has become
globally recognized by a high percentage of the population.

IEEE 802.11(WiFi-Wireless Fidelity)


What is Wi-Fi ?
Wireless networks differ from wired networks in that they use radio waves rather than electrical signals
transmitted over cables.
Wi-Fi stands for Wireless Fidelity. It is a technology for wireless local area networking with
devices based on IEEE 802.11 standards.
Wi-Fi compatible devices can connect to the internet via WLAN network and a wireless access
point abbreviated as AP. Every WLAN has an access point which is responsible for receiving and
transmitting data from/to users.
IEEE has defined certain specifications for wireless LAN, called IEEE 802.11 which covers
physical and data link layers.
Access Point(AP) is a wireless LAN base station that can connect one or many wireless devices
simultaneously to the Internet.
Wi-Fi is based on the IEEE 802.11 family of standards and is
primarily a local area networking (LAN) technology designed to provide in-building broadband coverage.
Radio Signals
Radio Signals are the keys, which make WiFi networking possible.
These radio signals transmitted from WiFi antennas are picked up by
WiFi receivers, such as computers and cell phones that are equipped
with WiFi cards. Whenever, a computer receives any of the signals
within the range of a WiFi network, which is usually 300 — 500 feet for
antennas, the WiFi card reads the signals and thus creates an internet
connection between the user and the network without the use of a cord.
Access points, consisting of antennas and routers, are the main source that transmit and receive radio
waves. Antennas are stronger and have a longer radio transmission radius of 300-500 feet, which suits
public areas, while the weaker yet effective router is more suitable for homes, with a radio
transmission range of 100-150 feet.
WiFi Cards
You can think of WiFi cards as being invisible cords that connect your computer to the antenna for a
direct connection to the internet.
WiFi cards can be external or internal. If a WiFi card is not installed in your computer, then you
may purchase a USB antenna attachment and have it externally connect to your USB port, or have an
antenna-equipped expansion card installed directly to the computer (as shown in the figure given above).
For laptops, this card will be a PCMCIA card which you insert to the PCMCIA slot on the laptop.
WiFi Releases
●	802.11a: up to 54 Mbps in the 5 GHz band

●	802.11b: up to 11 Mbps in the 2.4 GHz band

●	802.11d: operation in additional regulatory domains

●	802.11g: up to 54 Mbps in the 2.4 GHz band

●	802.11i: security enhancements

●	802.11n: higher throughput using MIMO

●	Further releases of the standard took place as time progressed, each one providing improved
performance or different capabilities, the major ones being: 802.11g (2003); 802.11n (2009),
802.11ac (2013), 802.11ax (2019)
WiFi Hotspots
A WiFi hotspot is created by installing an access point to an internet connection. The access point
transmits a wireless signal over a short distance. It typically covers around 300 feet. When a WiFi enabled
device such as a Pocket PC encounters a hotspot, the device can then connect to that network wirelessly.
Most hotspots are located in places that are readily accessible to the public such as airports, coffee shops,
hotels, book stores, and campus environments. 802.11b is the most common specification for hotspots
worldwide. The 802.11g standard is backwards compatible with .11b but .11a uses a different frequency
range and requires separate hardware such as an a, a/g, or a/b/g adapter. The largest public WiFi networks
are provided by private internet service providers (ISPs); they charge a fee to the users who want to
access the internet.
Fig. WiFi Application
Hotspots are increasingly developing around the world. In fact, T-Mobile USA controls more than 4,100
hotspots located in public locations such as Starbucks, Borders, Kinko's, and the airline clubs of Delta,
United, and US Airways. Even select McDonald's restaurants now feature WiFi hotspot access.
Any notebook computer with integrated wireless, a wireless adapter attached to the motherboard
by the manufacturer, or a wireless adapter such as a PCMCIA card can access a wireless network.
Furthermore, all Pocket PCs or Palm units with Compact Flash, SD I/O support, or built-in WiFi, can
access hotspots.
Some Hotspots require a WEP key to connect, which is considered as private and secure. As for
open connections, anyone with a WiFi card can have access to that hotspot. So in order to have internet
access under WEP, the user must input the WEP key code.

The architecture of this standard has 2 kinds of services:


1. BSS (Basic Service Set)
2. ESS (Extended Service Set)

BSS is the basic building block of WLAN. It is made of wireless mobile stations and an optional central
base station called Access Point.
Stations can form a network without an AP and can agree to be a part of a BSS.
A BSS without an AP cannot send data to other BSSs and defines a standalone network. It is called an
ad-hoc network or Independent BSS (IBSS).
A BSS with AP is infrastructure network.

The figure below depicts an IBSS, BSS with the green coloured box depicting an AP.
ESS is made up of 2 or more BSSs with APs. BSSs are connected to the distribution system via their APs.
The distribution system can be any IEEE LAN such as Ethernet.

ESS has 2 kinds of stations:


1. Mobile – stations inside the BSS
2. Stationary – AP stations that are part of wired LAN.

The topmost green box represents the distribution system and the other 2 green boxes represent the APs of
2 BSSs.

Wi-Fi wireless connectivity is an established part of everyday life. All smartphones have Wi-Fi
technology incorporated as one of the basic elements of the phone enabling low cost connectivity to be
provided. In addition to this, computers, laptops, tablets, cameras and very many other devices use Wi-Fi.
Wi-Fi access is available in many places via Wi-Fi access points or small DSL / Ethernet routers. Homes,
offices, shopping centres, airports, coffee shops and many more places offer Wi-Fi access.
Wi-Fi is now one of the major forms of communication for many devices and with home automation
increasing, even more devices are using it. Home Wi-Fi is a big area of usage of the technology with most
homes that use broadband connections to the Internet using Wi-Fi access as a key means of
communication.
Local area networks of all forms use Wi-Fi as one of the main forms of communication along with
Ethernet. For the home, office and many other areas, Wi-Fi is a major carrier of data.
To enable different items incorporating wireless technology like this to communicate with each other,
common standards are needed. The standard for Wi-Fi is IEEE 802.11. The different variants like 802.11n
or 802.11ac are different standards within the overall series and they define different variants. By
releasing updated variants, the overall technology has been able to keep pace with the ever growing
requirements for more data and higher speeds, etc. Technologies including gigabit Wi-Fi are now widely
used.

Typical modern WiFi router


How Wi-Fi was born
Although it is possible to trace the history of Wi-Fi back to many developments in radio or wireless
technology, the first release of IEEE 802.11 occurred in 1997. This was a time when the Internet was in its
infancy and most personal computers were desktop computers. This first release of IEEE 802.11 was for a
system that provided 1 or 2 Mbps transfer rates using frequency hopping or direct sequence spread
spectrum. The standard was only referred to as IEEE 802.11 and there were no suffix letters as we see
today.
Then in 1999, the 802.11b specification was released. This provided raw data rates of 11 Mbps, and used
the 2.4GHz ISM band: the first products were released in 2000.
The release of 802.11b was followed by 802.11a; this used an OFDM waveform and could transfer
data at rates of between 6 and 54 Mbps, and it used RF channels in the 5 GHz band where there was
far more available space.
Further releases of the standard took place as time progressed, each one providing improved performance or
different capabilities, the major ones being: 802.11g (2003); 802.11n (2009), 802.11ac (2013), 802.11ax
(2019).
Another major milestone in the development of Wi-Fi 802.11 was the formation of the Wi-Fi Alliance in
1999. This is an industry body that works towards greater levels of adoption of Wi-Fi as well as ensuring
that all devices can inter-operate successfully. It is separate from the IEEE which develops the standards,
but naturally it works with them.
What is Wi-fi?
There have been many debates about where the term Wi-Fi came from. Often people will think it stands
for Wireless Fidelity, but this is not actually the case. Even though the term Wireless Fidelity often
appears in many documents, the truth is that this is an incorrect explanation of the term.
The term Wi-Fi was coined as a brand name by the Wi-Fi Alliance when they were formed and took on
board the promotion of the standard.
Wi-Fi is a wireless based technology that allows devices like laptops, smart phones,TVs, gaming devices,
etc to connect at high speed to the internet without the need for a physical wired connection.
The technology uses licence free allocations so that it is free for all to use without the need for a wireless
transmitting licence. Typically Wi-Fi uses the 2.4 and 5 GHz ISM (Industrial, Scientific and Medical)
bands as these do not require a licence, but this also means they are open to other users as well, so
interference can exist.
Power levels are also low. Typically they are around 100 or 200 mW, although the maximum levels
depend upon the country in which the equipment is located. Some allow maximum powers of a watt or
more on some channels.
The core of any Wi-Fi system is known as the Access Point, AP. The Wi-Fi access point is essentially the
base station that communicates with the Wi-Fi enabled devices - data can then be routed onto a local area
network, normally via Ethernet and typically links onto the Internet.

Fig.How a Wi-Fi Access Point may be connected on an office local area network
Public Wi-Fi access points are typically used to provide local Internet access often on items like
smartphones or other devices without the need for having to use more costly mobile phone data. They are
also often located within buildings where the mobile phone signals are not sufficiently strong.
Home Wi-Fi systems often use an Ethernet router: this provides the Wi-Fi access point as well as Ethernet
communications for desk top computers, printers and the like as well as the all important link to the
Internet via a firewall. Being an Ethernet router it transcribes the IP addresses to provide a firewall
capability.
Although Wi-Fi links are established on either of the two main bands, 2.4 GHz and 5GHz, many Ethernet
routers and Wi-Fi access points provide dual band Wi-Fi connectivity and they will provide 2.4 GHz and
5 GHz Wi-Fi. This enables the best Wi-Fi links to be made regardless of usage levels and interference on
the bands.
There will typically be a variety of different Wi-Fi channels that can be used. The Wi-Fi access point or
Wi-Fi router will generally select the optimum channel to be used. If the access point or router provides
dual band Wi-Fi capability, a selection of the band will also be made. These days, this selection is
normally undertaken by the Wi-Fi access point or router, without user intervention so there is no need to
select 2.4 GHz or 5 GHz Wi-Fi as on older systems.
Fig. Home wifi
In order to ensure that the local area network to which the Wi-Fi access point is connected remains secure,
a password is normally required to be able to log on to the access point. Even home Wi-Fi networks use a
password to ensure that unwanted users do not access the network.
Many types of device can connect to Wi-Fi networks. Today devices like smartphones, laptops
and the like expect to use Wi-Fi and therefore it is incorporated as part of the product - no need to do
anything apart from connect. A lot of other devices also have Wi-Fi embedded in them: smart TVs,
cameras and many more. Their set up is also very easy.
Occasionally some devices may need a little more attention. These days, most desktop PCs will
come ready to use with Ethernet, and often they have Wi-Fi capability included. Some may not have
Wi-Fi incorporated and therefore that may need additional hardware if they are required to use Wi-Fi
links. An additional card in the PC, or an external dongle should suffice for this.
In general, most devices that need to communicate data electronically will have a Wi-Fi capability.
WiFi network types
Although most people are familiar with the basic way that a home Wi-Fi network might work, it is not the
only format for a WiFi network.
Essentially there are two basic types of Wi-Fi network:
●​ Local area network based network: This type of network may be loosely termed a LAN based
network. Here a Wi-Fi Access Point, AP is linked onto a local area network to provide wireless as well
as wired connectivity, often with more than one Wi-Fi hotspot.​

The infrastructure application is aimed at office areas or to provide a "hotspot". The office may even
work wirelessly only and just have a Wireless Local Area Network, WLAN. A backbone wired network
is still required and is connected to a server. The wireless network is then split up into a number of cells,
each serviced by a base station or Access Point (AP) which acts as a controller for the cell. Each Access
Point may have a range of between 30 and 300 metres dependent upon the environment and the location
of the Access Point.​

More normally a LAN based network will provide both wired and wireless access. This is the type of
network that is used in most homes, where a router which has its own firewall is connected to the
Internet, and wireless access is provided by a Wi-Fi access point within the router. Ethernet and often
USB connections are also provided for wired access.
●​ Ad hoc network: The other type of Wi-Fi network that may be used is termed an Ad-Hoc network.
These are formed when a number of computers and peripherals are brought together. They may be
needed when several people come together and need to share data or if they need to access a printer
without the need for having to use wire connections. In this situation the users only communicate with
each other and not with a larger wired network.​

As a result there is no Wi-Fi Access Point and special algorithms within the protocols are used to enable
one of the peripherals to take over the role of master to control the Wi-Fi network with the others acting
as slaves.​

This type of network is often used for items like games controllers / consoles to communicate.

WiFi hotspots
One of the advantages of using WiFi IEEE 802.11 is that it is possible to connect to the Internet when out
and about. Public WiFi access is everywhere - in cafes, hotels, airports, and very many other places.
Sometimes all that is required is to select a network and press the connect button. Others require a
password to be entered.

Typical modern WiFi router with multiple antennas


When using public Wi-Fi networks it is essential to act wisely because it is very easy for hackers to gain
access and see exactly what you are sending: user names, passwords, credit card credentials, etc. If the
Wi-Fi network does not use encryption, then all the data can be seen by potential hackers.
In order to develop a common standard for the implementation of Wi-Fi hotspots, a standard known
as Hotspot 2.0 was developed. This is implemented by a number of operators when deploying Wi-Fi
hotspots.

When looking at what is WiFi, there are some key topics to look at. There are both the theoretical and
practical issues to looking at dependent upon what is needed:
●​ Wi-Fi variants & standards: There are several different forms of Wi-Fi. The first that were widely
available were IEEE802.11a and 802.11b. These have long been superseded with a variety of variants
offering much higher speeds and generally better levels of connectivity. There are many different Wi-Fi
standard which have been used, each one with different levels of performance. IEEE 802.11a, 802.11b,
g, n, 802.11ac, 802.11ad Gigabit Wi-Fi, 11af White-Fi, ah, ax etc.​

●​ Positioning a Wi-Fi router: The performance of a Wi-Fi router can be very dependent upon its
location. Place it badly and it will not be able to perform as well. By locating a router in the best
position, much better performance can be gained.​
The location of the Wi-Fi access point or router is key to providing good performance. Locating it in the
right position can enable it to give much better service over more of the intended area.​

●​ Using Hotspots securely: Wi-Fi hotspots are everywhere, and they are very convenient to use
providing cheap access to data services. But public Wi-Fi hotspots are not particularly secure - some are
very open and can open up the unwary user to having credentials and other secure details being obtained
or computers hacked, etc.​

When using public Wi-Fi, great care must be taken and several rules should be followed to ensure that
malicious users do not take advantage. Wi-Fi security is always a major issue.​

When using a Wi-Fi link that could be monitored by someone close by, for example when in a coffee
shop, etc, make sure that the link is secure along with the website being browsed, i.e. only visit https
sites. It is always wise not to expose credit card details or login passwords, etc when on a public Wi-Fi
link, even if the Wi-Fi link is secure. It is all too easy for details to be gathered, and saved for use later.​

If using a smartphone, it is far, far safer to use the mobile network itself. If necessary when using a
laptop or tablet, link this to the smartphone as personal hotspot as this will have a password (remember
to choose a safe one) and this is much less likely to be hacked.​

Wi-Fi is now an essential part of the connectivity system working alongside mobile communications,
local area wired connectivity and much more. With the growing use of various forms of wireless
connectivity for devices like smartphone and laptops as well as connected televisions, security system and
a host more, the use of Wi-Fi will only grow. In fact with the Internet of Things now being a reality and
its use increasing, the use of Wi-Fi will also continue to grow.
As new standards are developed its performance will improve, both for office, local hotspots and home
Wi-Fi. For the future, not only will speeds improve, with the introduction of aspects like Gigabit Wi-Fi,
but also the methods of use and its flexibility. In this way, Wi-Fi will remain a chosen technology for short
range connectivity.
IEEE 802.11 protocol stack:

Fig. 802.11 protocol stack

Fig. Part of IEEE 802.11 protocol stack


In 802.11 the MAC sublayer determines which station gets to transmit next. The sublayer above it, the
LLC (Logical Link Control), hides the differences between the varying 802.11 versions from the network
layer.
The 802.11 physical layer
All 802.11 techniques use short-range radios to transmit signals in either 2.4-GHz or 5-GHz ISM
frequency bands. These bands are unlicensed, and so are shared by many other devices such as garage
door openers or microwave ovens. Fewer applications tend to use the 5-GHz band, so 5-GHz can be
better for some applications despite the shorter range that results from the higher frequency.
All 802.11 transmission methods define multiple rates. Different rates can be used depending on the
current conditions. If the signal is weak, a low rate is used. If the signal is clear, the highest rate is used.
The process of adjustment is called rate adaptation.
802.11b
802.11b is a spread-spectrum method. It supports rates of 1, 2, 5.5, and 11 Mb/s.
802.11b is similar to the CDMA system, except that one spreading code is shared between all users.
802.11b uses a spreading sequence called the Barker Sequence. The autocorrelation of the Barker
Sequence is low except when sequences are aligned. This allows a receiver to lock onto the start of a
transmission. The Barker sequence is “used with BPSK modulation to send 1 bit per 11 chips”.
Higher rates use CCK (Complementary Code Keying) to construct codes, rather than the Barker Sequence.
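
One way to see the autocorrelation property mentioned above is to compute the aperiodic autocorrelation of an 11-chip Barker code: the sharp peak at zero lag against small sidelobes is what allows a receiver to lock onto the start of a transmission. The sign convention used below is one common form of the sequence.

```python
import numpy as np

# An 11-chip Barker code (one common sign convention).
barker11 = np.array([+1, +1, +1, -1, -1, -1, +1, -1, -1, +1, -1])

# Aperiodic autocorrelation: a peak of 11 at zero lag, all sidelobes of magnitude <= 1.
autocorr = np.correlate(barker11, barker11, mode="full")
print(autocorr)   # peak value 11 at the centre; every other value has magnitude at most 1
```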
802.11a
802.11a was standardized after 802.11b, despite the group being formed first (hence the name). It supports
rates up to 54Mb/s in the 5-GHz ISM band
802.11a is based on OFDM (Orthogonal Frequency Division Multiplexing).
Bits are sent over 52 subcarriers in parallel. 48 carry data, and 4 are used for synchronization. A symbol
lasts 4μs, and sends either 1, 2, 4, or 6 bits. “The bits are coded for error correction with a binary
convolutional code first so only 1/2, 2/3, or 3/4 of the bits are not redundant”.
802.11a can run at different rates using the different combinations of modulation and coding rate. The rates range from 6 to 54 Mb/s.
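
These rates can be reproduced from the parameters quoted above (48 data subcarriers, a 4 µs symbol, 1 to 6 coded bits per subcarrier and convolutional coding rates of 1/2, 2/3 or 3/4), as the short sketch below shows.

```python
# Reconstructing 802.11a data rates from the OFDM parameters quoted above.
DATA_SUBCARRIERS = 48
SYMBOL_TIME_S = 4e-6   # 4 microseconds per OFDM symbol

def ofdm_rate_mbps(bits_per_subcarrier, coding_rate):
    coded_bits = DATA_SUBCARRIERS * bits_per_subcarrier
    return coded_bits * coding_rate / SYMBOL_TIME_S / 1e6

print(ofdm_rate_mbps(1, 1/2))   # BPSK,   rate 1/2 ->  6.0 Mb/s (lowest rate)
print(ofdm_rate_mbps(4, 3/4))   # 16-QAM, rate 3/4 -> 36.0 Mb/s
print(ofdm_rate_mbps(6, 3/4))   # 64-QAM, rate 3/4 -> 54.0 Mb/s (highest rate)
```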

802.11g
802.11g uses the OFDM modulation methods of 802.11a, but operates in the 2.4 GHz ISM band [1, P. 302].
It has the same rates as 802.11a, as well as compatibility with 802.11b devices.
802.11n
802.11n was ratified in 2009. The aim of 802.11n was throughput of 100Mb/s after transmission
overheads were removed.
To meet the goal:
●​ Channels were doubled from 20MHz to 40MHz.
●​ Frame overhead was reduced by allowing a group of frames to be sent together.
●​ Up to four streams could be transmitted at a time using four antennas.
In 802.11n, the stream signals interfere at the receiver, but they can be separated using MIMO (Multiple
Input Multiple Output) techniques.
The MAC sublayer protocol
The 802.11 MAC sublayer is different from the Ethernet MAC sublayer for two reasons:
●​ Radios are almost always half duplex
●​ Transmission ranges of different stations might be different​
802.11 uses the CSMA/CA (CSMA with Collision Avoidance) protocol. CSMA/CA is similar
to ethernet CSMA/CD. It uses channel sensing and exponential backoff after collisions, but
instead of entering backoff once a collision has been detected, CSMA/CA uses backoff
immediately (unless the sender has not used the channel recently and the channel is idle) [1, P.
303].
The algorithm will back off for a number of slots, for example 0 to 15 in the case of the OFDM
physical layer. The station waits until the channel is idle by sensing that there is no signal for a short
period of time. It counts down idle slots, pausing when frames are sent. When its counter reaches 0, it
sends its frames [1, P. 303].
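
A highly simplified sketch of the backoff countdown just described is shown below; it models only the slot counting and deliberately ignores inter-frame spacings, contention-window doubling and acknowledgements.

```python
import random

def csma_ca_backoff(channel_idle, cw_slots=16):
    """Toy model of the DCF backoff countdown described above.

    channel_idle: function returning True when no signal is sensed in this slot.
    cw_slots:     contention window size (16 -> counter drawn from 0..15).
    Returns the number of slots that elapsed before transmission.
    """
    counter = random.randrange(cw_slots)   # pick a random number of idle slots to wait
    elapsed = 0
    while counter > 0:
        elapsed += 1
        if channel_idle():
            counter -= 1                   # count down only during idle slots
        # if the channel is busy, the countdown pauses until it goes idle again
    return elapsed                         # transmit when the counter reaches 0

# Example: a channel that is idle roughly 70% of the time.
print(csma_ca_backoff(lambda: random.random() < 0.7))
```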
Acknowledgements “are used to infer collisions because collisions cannot be detected”.
This way of operating is called DCF (Distributed Coordination Function). In DCF each station acts
independently, without central control.
The other problem facing 802.11 protocols is transmission ranges differing between stations. It’s possible
for transmissions in one part of a cell to not be received in another part of the cell, which can make it
impossible for a sender to sense a busy channel, resulting in collisions.
802.11 defines channel sensing to consist of physical and virtual sensing. Physical sensing “checks the
medium to see if there is a valid signal”.
With virtual sensing, each station keeps a record of what channel is in use. It does this with the NAV
(Network Allocation Vector). Each frame includes a NAV field that contains information on how long
the sequence that the frame is part of will take to complete [1, P. 305].
802.11 is designed to:
●​ Be reliable.
●​ Be power-saving.
●​ Provide quality of service.
The main strategy for reliability is to lower the transmission rate if too many frames are unsuccessful.
Lower transmission rates use more robust modulations. If too many frames are lost, a station can lower its
rate. If frames are successfully delivered, a station can test a higher rate to see if it should upgrade.
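
A minimal version of this rate adaptation strategy might look like the following sketch; the rate ladder and thresholds are assumptions chosen for illustration, not the algorithm used by any particular driver.

```python
# Simplified rate adaptation in the spirit described above: fall back to a more
# robust rate after repeated losses, probe a higher rate after repeated successes.
RATES_MBPS = [6, 12, 24, 36, 48, 54]   # illustrative 802.11a/g rate ladder

class RateAdapter:
    def __init__(self, fail_limit=3, success_limit=10):
        self.index = len(RATES_MBPS) - 1   # start optimistically at the highest rate
        self.fails = 0
        self.successes = 0
        self.fail_limit = fail_limit
        self.success_limit = success_limit

    def on_frame_result(self, delivered):
        if delivered:
            self.fails = 0
            self.successes += 1
            if self.successes >= self.success_limit and self.index < len(RATES_MBPS) - 1:
                self.index += 1            # probe the next higher rate
                self.successes = 0
        else:
            self.successes = 0
            self.fails += 1
            if self.fails >= self.fail_limit and self.index > 0:
                self.index -= 1            # fall back to a more robust (lower) rate
                self.fails = 0
        return RATES_MBPS[self.index]      # rate to use for the next frame
```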
Another strategy for successful transmissions is to send shorter frames. 802.11 allows frames to be split
into fragments, with their own checksum. The fragment size can be adjusted by the AP. Fragments are
numbered and sent using a stop-and-wait protocol.
802.11 uses beacon frames. Beacon frames are broadcast periodically by the AP. The frames advertise
the presence of the AP to clients and carry system parameters, such as the identifier of the AP, the time,
how long until the next beacon, and security settings.
Clients can set a power-management bit in frames that are sent to the AP to alert it that the client is
entering power-save mode. In power-save mode, the client rests and the AP buffers traffic intended for it.
The client wakes up for every beacon, and checks a traffic map that’s sent with the beacon. The traffic
map tells the client whether there is buffered traffic. If there is, the client sends a poll to the AP, and the
AP sends the buffered traffic.
802.11 provides quality of service by extending CSMA/CA with defined intervals between frames.
Different kinds of frames have different time intervals. The interval between regular data frames is called
the DIFS (DCF InterFrame Spacing). Any station can attempt to acquire the channel after it has
been idle for DIFS.
The shortest interval is SIFS (Short InterFrame Spacing). SIFS is used to send an ACK, other control
frames like RTS, or for sending another fragment (which prevents another station from transmitting
during the middle of a frame).
Different priorities of traffic are determined with different AIFS (Arbitration InterFrame Space)
intervals. A short AIFS can allow the AP to send higher priority traffic. An AIFS that is longer than DIFS
means the traffic will be sent after regular traffic.
Another quality of service mechanism is transmission opportunity. Previously, CSMA/CA allowed only
one frame to be sent at a time. This slowed down stations with significantly faster rates. Transmission
opportunities give each station equal airtime, rather than an equal number of sent frames.
802.11 frame structure
There are three different classes of frames used in the air:
●​ Data
●​ Control
●​ Management

Fig. frame format


The first part of frame is the Frame Control field, made up of 11 subfields:
●​ Protocol Version: set to 00 for current versions of 802.11.
●	Type: can be one of data, control, or management, along with the Subtype (e.g. RTS or CTS). These are
set to 10 and 0000 in binary for a normal data frame.
●​ To DS and From DS: these bits indicate whether frames are coming or going from a network
connected to the AP (the network is called the distribution system).
●​ More Fragments: this bit means that more fragments will follow.
●​ Retry: this bit “marks a retransmission of a frame sent earlier”.
●​ Power Management: this bit indicates that the sender is going into power-save mode.
●​ More Data: this bit indicates that the sender has additional frames for the receiver.
●​ Protected Frame: this bit indicates that the frame body has been encrypted for security.
●​ Order: this “bit tells the receiver that the higher layer expects the sequence of frames to arrive
strictly in order”.

The second field in the data frame is the Duration field. This describes how long the frame and its
acknowledgements will occupy the channel (measured in microseconds). It’s included in all frames,
including control frames.
The addresses to and from an AP follow the standard IEEE 802 format. Address 1 is the receiver,
Address 2 is the transmitter, and Address 3 is the address of the endpoint that originally sent the frame via
the AP.
The 16-bit Sequence field numbers frames so that duplicates can be detected. The first 4 bits identify the
fragment, and the last 12 contain a number that’s incremented on each transmission.
The Data field contains the payload. It can be up to 2312 bytes. The first bytes of the payload are for the
LLC layer, which identifies the higher-layer protocol to which the data belongs.
The final part of the frame is the Frame Check Sequence field, containing a 32-bit CRC for validating the
frame.
“Management frames have the same format as data frames, plus a format for the data portion that varies
with the subtype (e.g. parameters in beacon frames)”
Control frames contain Frame Control, Duration, and Frame Check Sequence fields, but they might only
have one address and no Data section.
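
The field layout described above can be made concrete with a small parser. The bit positions follow the descriptions in this section, and the example byte values at the end are purely illustrative.

```python
import struct

def parse_frame_control(fc_bytes):
    """Unpack the 2-byte 802.11 Frame Control field (transmitted little-endian)."""
    (fc,) = struct.unpack("<H", fc_bytes)
    return {
        "protocol_version": fc & 0x3,
        "type":             (fc >> 2) & 0x3,    # 0=management, 1=control, 2=data
        "subtype":          (fc >> 4) & 0xF,
        "to_ds":            (fc >> 8) & 0x1,
        "from_ds":          (fc >> 9) & 0x1,
        "more_fragments":   (fc >> 10) & 0x1,
        "retry":            (fc >> 11) & 0x1,
        "power_mgmt":       (fc >> 12) & 0x1,
        "more_data":        (fc >> 13) & 0x1,
        "protected":        (fc >> 14) & 0x1,
        "order":            (fc >> 15) & 0x1,
    }

def parse_sequence_control(sc_bytes):
    """Split the 16-bit Sequence Control field into fragment and sequence numbers."""
    (sc,) = struct.unpack("<H", sc_bytes)
    return {"fragment": sc & 0xF, "sequence": sc >> 4}

# Illustrative values only: a data frame (type 2, subtype 0), fragment 0, sequence 1.
print(parse_frame_control(bytes([0x08, 0x00])))
print(parse_sequence_control(bytes([0x10, 0x00])))
```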
Services
802.11 defines a number of services that must be provided by conformant wireless LANs.
Mobile stations use the association service to connect to APs. Usually, the service is used just after a
station has moved within range of an AP. When the station is within range, it learns the identity and
capabilities of the AP through either beacon frames, or by asking the AP directly. The station sends a
request to associate with the AP, which the AP can either accept or reject .
The reassociation service is used to let a station change its preferred AP. If correctly used, there should
be no data loss during the handover. The station or the AP can also disassociate. The station should use
this before shutting down.
Stations should authenticate before sending frames via the AP. Authentication is handled differently
depending on the security scheme. If the network is open, anyone can use it. Otherwise credentials are
required. WPA2 (WiFi Protected Access 2) is the recommended approach that implements security
defined in the 802.11i standard. With WPA2, the AP communicates with an authentication server that
“has a username and password database to determine if the station is allowed to access the network”. A
password can also be configured (known as a pre-shared key)
The distribution service determines how to route frames from the AP. If the destination is local, the
frames are sent over the air. If they are not, they are forwarded over the wired network.
The integration service handles translation for frames to be sent outside the 802.11 LAN.
The data delivery service lets stations transmit and receive data using the protocols outlined in this
section.
A privacy service manages encryption and decryption. The encryption algorithm for WPA2 is based on
AES (Advanced Encryption Standard). The encryption keys are determined during authentication. The
QoS traffic scheduling service is used to handle traffic with different priorities. It uses the protocols described
in The MAC sublayer protocol section.
“The transmit power control service gives stations the information they need to meet regulatory limits
on transmit power that vary from region to region”.
“The dynamic frequency selection service give stations the information they need to avoid transmitting
on frequencies in the 5-GHz band that are being used for radar in the proximity”
WiMax (Worldwide Interoperability of Microwave Access):
WiMAX is one of the hottest broadband wireless technologies around today. WiMAX systems are
expected to deliver broadband access services to residential and enterprise customers in an economical
way.
Loosely, WiMax is a standardized wireless version of Ethernet intended primarily as an
alternative to wire technologies (such as Cable Modems, DSL and T1/E1 links) to provide broadband
access to customer premises.
More strictly, WiMAX is an industry trade organization formed by leading communications,
component, and equipment companies to promote and certify compatibility and interoperability of
broadband wireless access equipment that conforms to the IEEE 802.16 and ETSI HIPERMAN standards.
WiMAX would operate similar to WiFi, but at higher speeds over greater distances and for a greater
number of users. WiMAX has the ability to provide service even in areas that are difficult for wired
infrastructure to reach and the ability to overcome the physical limitations of traditional wired
infrastructure.
The WiMAX Forum was formed in April 2001, in anticipation of the publication of the original 10-66 GHz
IEEE 802.16 specifications. WiMAX is to 802.16 as the WiFi Alliance is to 802.11.
WiMAX is
●​ Acronym for Worldwide Interoperability for Microwave Access.
●​ Based on Wireless MAN technology.
●​ A wireless technology optimized for the delivery of IP centric services over a wide area.
●​ A scalable wireless platform for constructing alternative and complementary broadband networks.
●​ A certification that denotes interoperability of equipment built to
the IEEE 802.16 or compatible standard. The IEEE 802.16 Working
Group develops standards that address two types of usage models −
○​ A fixed usage model (IEEE 802.16-2004).
○​ A portable usage model (IEEE 802.16e).
What is 802.16a ?
WiMAX is such an easy term that people tend to use it for the 802.16 standards and technology
themselves, although strictly it applies only to systems that meet specific conformance criteria laid down
by the WiMAX Forum.
The 802.16a standard for 2-11 GHz is a wireless metropolitan area network (MAN) technology
that will provide broadband wireless connectivity to Fixed, Portable and Nomadic devices.
It can be used to connect 802.11 hot spots to the Internet, provide campus connectivity, and
provide a wireless alternative to cable and DSL for last mile broadband access.
WiMax Speed and Range
WiMAX is expected to offer initially up to about 40 Mbps capacity per wireless channel for both fixed
and portable applications, depending on the particular technical configuration chosen, enough to support
hundreds of businesses with T-1 speed connectivity and thousands of residences with DSL speed
connectivity. WiMAX can support voice and video as well as Internet data.
WiMAX was developed to provide wireless broadband access to buildings, either in competition with
existing wired networks or alone in currently unserved rural or thinly populated areas. It can also be used
to connect WLAN hotspots to the Internet. WiMAX is also intended to provide broadband connectivity to
mobile devices. It would not be as fast as in these fixed applications, but expectations are for about 15
Mbps capacity in a 3 km cell coverage area.
With WiMAX, users could really cut free from today's Internet access arrangements and be able
to go online at broadband speeds, almost wherever they like from within a MetroZone.
WiMAX could potentially be deployed in a variety of spectrum bands: 2.3GHz, 2.5GHz, 3.5GHz, and
5.8GHz
Why WiMax ?
●​ WiMAX can satisfy a variety of access needs. Potential applications include extending broadband
capabilities to bring them closer to subscribers, filling gaps in cable, DSL and T1 services, WiFi,
and cellular backhaul, providing last-100 meter access from fibre to the curb and giving service
providers another cost-effective option for supporting broadband services.
●​ WiMAX can support very high bandwidth solutions where large spectrum deployments (i.e. >10
MHz) are desired using existing infrastructure keeping costs down while delivering the
bandwidth needed to support a full range of high-value multimedia services.
●​ WiMAX can help service providers meet many of the challenges they face due to increasing
customer demands without discarding their existing infrastructure investments because it has the
ability to seamlessly interoperate across various network types.
●​ WiMAX can provide wide area coverage and quality of service capabilities for applications
ranging from real-time delay-sensitive voice-over-IP (VoIP) to real-time streaming video and
non-real-time downloads, ensuring that subscribers obtain the performance they expect for all
types of communications.
●​ WiMAX, which is an IP-based wireless broadband technology, can be integrated into both
wide-area third-generation (3G) mobile and wireless and wireline networks allowing it to become
part of a seamless anytime, anywhere broadband access solution.
Ultimately, WiMAX is intended to serve as the next step in the evolution of 3G mobile phones, via a
potential combination of WiMAX and CDMA standards called 4G.

| Sr. No. | Key | WiFi | WiMAX |
|---|---|---|---|
| 1 | Definition | WiFi stands for Wireless Fidelity. | WiMAX stands for Worldwide Interoperability for Microwave Access. |
| 2 | Usage | WiFi uses radio waves to create wireless high-speed internet and network connections. A wireless adapter is needed to create hotspots. | WiMAX uses spectrum to deliver connectivity to the network and handles a larger, interoperable network. |
| 3 | IEEE | WiFi is defined under the IEEE 802.11x standards, where x denotes the WiFi version. | WiMAX is defined under the IEEE 802.16y standards, where y denotes the WiMAX version. |
| 4 | Usage | WiFi is used in LAN applications. | WiMAX is used in MAN applications. |
| 5 | QoS | WiFi does not guarantee Quality of Service (QoS). | WiMAX guarantees Quality of Service (QoS). |
| 6 | Network range | WiFi ranges up to about 100 meters. | WiMAX ranges up to about 90 km. |
| 7 | Transmission speed | WiFi transmission speed can be up to 54 Mbps. | WiMAX transmission speed can be up to 70 Mbps. |
WiMAX is similar to the wireless standard known as Wi-Fi, but on a much larger scale and at faster
speeds. A nomadic version would keep WiMAX-enabled devices connected over large areas, much like
today’s cell phones. We can compare it with Wi-Fi based on the following factors.
IEEE Standards
Wi-Fi is based on IEEE 802.11 standard whereas WiMAX is based on IEEE 802.16. However, both are
IEEE standards.
Range
Wi-Fi typically provides local network access for a few hundred feet with speeds of up to 54 Mbps, whereas a
single WiMAX antenna is expected to have a range of up to 40 miles with speeds of 70 Mbps or more.
As such, WiMAX can bring the underlying Internet connection needed to service local Wi-Fi networks.
Scalability
Wi-Fi is intended for LAN applications; users scale from one to tens, with one subscriber for each CPE
device, and fixed channel sizes (20 MHz).
WiMAX is designed to efficiently support from one to hundreds of consumer premises equipment (CPE)
units, with unlimited subscribers behind each CPE, and flexible channel sizes from 1.5 MHz to 20 MHz.
Bit rate
Wi-Fi works at 2.7 bps/Hz and can peak up to 54 Mbps in 20 MHz channel.
WiMAX works at 5 bps/Hz and can peak up to 100 Mbps in a 20 MHz channel.
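These peak figures follow directly from spectral efficiency multiplied by channel bandwidth. The short sketch below (illustrative only, using just the numbers quoted above) makes the arithmetic explicit.

```python
# Illustrative check: peak PHY rate ~= spectral efficiency (bps/Hz) x channel
# bandwidth. The efficiency values are the ones quoted in the text above.

def peak_rate_mbps(efficiency_bps_per_hz: float, bandwidth_mhz: float) -> float:
    """Peak bit rate in Mbps (MHz x bps/Hz = Mbps)."""
    return efficiency_bps_per_hz * bandwidth_mhz

print(peak_rate_mbps(2.7, 20))  # Wi-Fi:  ~54 Mbps in a 20 MHz channel
print(peak_rate_mbps(5.0, 20))  # WiMAX: ~100 Mbps in a 20 MHz channel
```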
Quality of Service
Wi-Fi does not guarantee any QoS, but WiMAX provides several levels of QoS.
Wi-Fi does not provide ubiquitous broadband, while WiMAX does.
Comparison Table
| Feature | WiMAX (802.16a) | Wi-Fi (802.11b) | Wi-Fi (802.11a/g) |
|---|---|---|---|
| Primary Application | Broadband Wireless Access | Wireless LAN | Wireless LAN |
| Frequency Band | Licensed/Unlicensed, 2 GHz to 11 GHz | 2.4 GHz ISM | 2.4 GHz ISM (g); 5 GHz U-NII (a) |
| Channel Bandwidth | Adjustable, 1.25 MHz to 20 MHz | 25 MHz | 20 MHz |
| Half/Full Duplex | Full | Half | Half |
| Radio Technology | OFDM (256 channels) | Direct Sequence Spread Spectrum | OFDM (64 channels) |
| Bandwidth Efficiency | <= 5 bps/Hz | <= 0.44 bps/Hz | <= 2.7 bps/Hz |
| Modulation | BPSK, QPSK, 16-, 64-, 256-QAM | QPSK | BPSK, QPSK, 16-, 64-QAM |
| FEC | Convolutional Code, Reed-Solomon | None | Convolutional Code |
| Encryption | Mandatory: 3DES; Optional: AES | Optional: RC4 (AES in 802.11i) | Optional: RC4 (AES in 802.11i) |
| Mobility | Mobile WiMAX (802.16e) | In development | In development |
| Mesh | Yes | Vendor proprietary | Vendor proprietary |
| Access Protocol | Request/Grant | CSMA/CA | CSMA/CA |


Features:
●​ Two Types of Services
WiMAX can provide two forms of wireless service −
●​ Non-line-of-sight − a WiFi sort of service, where a small antenna on your computer connects to the WiMAX tower. In this mode, WiMAX uses a lower frequency range -- 2 GHz to 11 GHz (similar to WiFi).
●​ Line-of-sight − a service where a fixed dish antenna points straight at the WiMAX tower from a rooftop or pole. The line-of-sight connection is stronger and more stable, so it is able to send a lot of data with fewer errors. Line-of-sight transmissions use higher frequencies, with ranges reaching a possible 66 GHz.

●​ OFDM-based Physical Layer


The WiMAX physical layer (PHY) is based on orthogonal frequency division multiplexing, a
scheme that offers good resistance to multipath, and allows WiMAX to operate in NLOS
conditions.
●​ Very High Peak Data Rates
WiMAX is capable of supporting very high peak data rates. In fact, the peak PHY data rate can be
as high as 74Mbps when operating using a 20MHz wide spectrum.
More typically, using a 10MHz spectrum operating using TDD scheme with a 3:1
downlink-to-uplink ratio, the peak PHY data rate is about 25Mbps and 6.7Mbps for the downlink
and the uplink, respectively.
●​ Scalable Bandwidth and Data Rate Support
WiMAX has a scalable physical-layer architecture that allows for the data rate to scale
easily with available channel bandwidth.
For example, a WiMAX system may use 128-, 512-, or 1,024-point FFTs (fast Fourier
transforms) based on whether the channel bandwidth is 1.25 MHz, 5 MHz, or 10 MHz,
respectively. This scaling may be done dynamically to support user roaming across different
networks that may have different bandwidth allocations.
●​ Adaptive Modulation and Coding (AMC)
WiMAX supports a number of modulation and forward error correction (FEC) coding
schemes and allows the scheme to be changed on a per-user and per-frame basis, based on channel
conditions (a simple AMC lookup sketch follows this feature list).
AMC is an effective mechanism to maximize throughput in a time-varying channel.
●​ Link-layer Retransmissions
WiMAX supports automatic retransmission requests (ARQ) at the link layer for
connections that require enhanced reliability. ARQ-enabled connections require each transmitted
packet to be acknowledged by the receiver; unacknowledged packets are assumed to be lost and
are retransmitted.
●​ Support for TDD and FDD
IEEE 802.16-2004 and IEEE 802.16e-2005 supports both time division duplexing and
frequency division duplexing, as well as a half-duplex FDD, which allows for a low-cost system
implementation.
●​ WiMAX Uses OFDM
Mobile WiMAX uses Orthogonal frequency division multiple access (OFDM) as a
multiple-access technique, whereby different users can be allocated different subsets of the
OFDM tones.
●​ Flexible and Dynamic per User Resource Allocation
Both uplink and downlink resource allocation are controlled by a scheduler in the base
station. Capacity is shared among multiple users on a demand basis, using a burst TDM scheme.
●​ Support for Advanced Antenna Techniques
The WiMAX solution has a number of hooks built into the physical-layer design, which
allows for the use of multiple-antenna techniques, such as beamforming, space-time coding, and
spatial multiplexing.
●​ Quality-of-service Support
The WiMAX MAC layer has a connection-oriented architecture that is designed to
support a variety of applications, including voice and multimedia services.
WiMAX system offers support for constant bit rate, variable bit rate, real-time, and
non-real-time traffic flows, in addition to best-effort data traffic.
WiMAX MAC is designed to support a large number of users, with multiple connections
per terminal, each with its own QoS requirement.
●​ Robust Security
WiMAX supports strong encryption, using Advanced Encryption Standard (AES), and
has a robust privacy and key-management protocol.
The system also offers a very flexible authentication architecture based on Extensible
Authentication Protocol (EAP), which allows for a variety of user credentials, including
username/password, digital certificates, and smart cards.
●​ Support for Mobility
The mobile WiMAX variant of the system has mechanisms to support secure seamless
handovers for delay-tolerant full-mobility applications, such as VoIP.
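As a companion to the AMC feature above, the following is a minimal sketch of how a per-frame burst profile could be chosen from a reported SNR. The SNR thresholds and profile table are illustrative assumptions, not values taken from the 802.16 standard.

```python
# A minimal sketch (not the 802.16 algorithm itself) of adaptive modulation and
# coding: look up the most efficient burst profile whose SNR threshold is met.
# Thresholds and profiles below are assumed example values.

AMC_TABLE = [
    # (minimum SNR in dB, modulation, coding rate, coded bits per symbol)
    (21.0, "64-QAM", "3/4", 4.5),
    (16.0, "16-QAM", "3/4", 3.0),
    (9.0,  "QPSK",   "3/4", 1.5),
    (6.0,  "QPSK",   "1/2", 1.0),
    (3.0,  "BPSK",   "1/2", 0.5),
]

def select_burst_profile(snr_db: float):
    """Pick the most efficient profile whose SNR threshold is satisfied."""
    for min_snr, modulation, rate, bits in AMC_TABLE:
        if snr_db >= min_snr:
            return modulation, rate, bits
    return "BPSK", "1/2", 0.5  # most robust fallback profile

print(select_burst_profile(18.2))  # -> ('16-QAM', '3/4', 3.0)
```

Because the choice is re-evaluated each frame, throughput tracks the time-varying channel, which is the point the AMC bullet makes.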

IP-based Architecture
The WiMAX Forum has defined a reference network architecture that is based on an all-IP platform. All
end-to-end services are delivered over an IP architecture relying on IP-based protocols for end-to-end
transport, QoS, session management, security, and mobility.

Building Blocks of WiMax:


A WiMAX system consists of two major parts −
●​ A WiMAX base station.
●​ A WiMAX receiver.
WiMAX Base Station
A WiMAX base station consists of indoor electronics and a WiMAX tower similar in concept to a
cell-phone tower. A WiMAX base station can provide coverage to a very large area up to a radius of 6
miles. Any wireless device within the coverage area would be able to access the Internet.
The WiMAX base stations would use the MAC layer defined in the standard, a common interface
that makes the networks interoperable and would allocate uplink and downlink bandwidth to subscribers
according to their needs, on an essentially real-time basis.
Each base station provides wireless coverage over an area called a cell. Theoretically, the
maximum radius of a cell is 50 km or 30 miles however, practical considerations limit it to about 10 km
or 6 miles.
WiMAX Receiver
A WiMAX receiver may have a separate antenna or could be a stand-alone box or a PCMCIA
card sitting in your laptop or computer or any other device. This is also referred as customer premise
equipment (CPE).
Connecting to a WiMAX base station is similar to accessing a wireless access point in a WiFi network, but the
coverage is greater.
Backhaul
A WiMAX tower station can connect directly to the Internet using a high-bandwidth, wired
connection (for example, a T3 line). It can also connect to another WiMAX tower using a line-of-sight
microwave link.
Backhaul refers both to the connection from the access point back to the base station and to the
connection from the base station to the core network.
It is possible to connect several base stations to one another using high-speed backhaul
microwave links. This would also allow for roaming by a WiMAX subscriber from one base station
coverage area to another, similar to the roaming enabled by cell phones.
The IEEE 802.16e-2005 standard provides the air interface for WiMAX, but does not define the
full end-to-end WiMAX network. The WiMAX Forum's Network Working Group (NWG) is responsible
for developing the end-to-end network requirements, architecture, and protocols for WiMAX, using IEEE
802.16e-2005 as the air interface.The WiMAX NWG has developed a network reference model to serve
as an architecture framework for WiMAX deployments and to ensure interoperability among various
WiMAX equipment and operators.
The network reference model envisions a unified network architecture for supporting fixed,
nomadic, and mobile deployments and is based on an IP service model. Below is simplified illustration of
an IP-based WiMAX network architecture. The overall network may be logically divided into three parts:
●​ Mobile Stations (MS) used by the end user to access the network.
●​ The access service network (ASN), which comprises one or more base stations and one or more
ASN gateways that form the radio access network at the edge.
●​ Connectivity service network (CSN), which provides IP connectivity and all the IP core network
functions.
The network reference model developed by the WiMAX Forum NWG defines a number of functional
entities and interfaces between those entities. The following figure shows some of the more important
functional entities.

●​ Base station (BS) − The BS is responsible for providing the air interface to the MS. Additional functions that may be part of the BS are micro-mobility management functions, such as handoff triggering and tunnel establishment, radio resource management, QoS policy enforcement, traffic classification, DHCP (Dynamic Host Configuration Protocol) proxy, key management, session management, and multicast group management.
●​ Access service network gateway (ASN-GW) − The ASN gateway typically acts as
a layer 2 traffic aggregation point within an ASN. Additional
functions that may be part of the ASN gateway include intra-ASN
location management and paging, radio resource management, and
admission control, caching of subscriber profiles, and encryption
keys, AAA client functionality, establishment, and management of
mobility tunnel with base stations, QoS and policy enforcement,
foreign agent functionality for mobile IP, and routing to the
selected CSN.
●​ Connectivity service network (CSN) − The CSN provides connectivity to the
Internet, ASP, other public networks, and corporate networks. The
CSN is owned by the NSP and includes AAA servers that support
authentication for the devices, users, and specific services. The CSN
also provides per user policy management of QoS and security. The
CSN is also responsible for IP address management, support for
roaming between different NSPs, location management between ASNs,
and mobility and roaming between ASNs.
The WiMAX architecture framework allows for the flexible decomposition and/or combination of
functional entities when building the physical entities. For example, the ASN may be decomposed into
base station transceivers (BST), base station controllers (BSC), and an ASN-GW, analogous to the GSM
model of BTS, BSC, and Serving GPRS Support Node (SGSN).
The WiMAX framework is based on several core principles −
●​ Support for different RAN topologies.
●​ Well-defined interfaces to enable 802.16 RAN architecture independence while enabling seamless
integration and interworking with WiFi, 3GPP and 3GPP2 networks.
●​ Leverage open, IETF-defined IP technologies to build scalable all-IP 802.16 access networks
using common off the shelf (COTS) equipment.
●​ Support for IPv4 and IPv6 clients and application servers, recommending use of IPv6 in the
infrastructure.
●​ Functional extensibility to support future migration to full mobility and delivery of rich
broadband multimedia.
WiMAX is a technology based on the IEEE 802.16 specifications to enable
the delivery of last-mile wireless broadband access as an alternative to
cable and DSL. The design of WiMAX network is based on the following
major principles −
●​ Spectrum − able to be deployed in both licensed and unlicensed
spectra.
●​ Topology − supports different Radio Access Network (RAN)
topologies.
●​ Interworking − independent RAN architecture to enable seamless
integration and interworking with WiFi, 3GPP and 3GPP2 networks and
existing IP operator core network.
●​ IP connectivity − supports a mix of IPv4 and IPv6 network
interconnects in clients and application servers.
●​ Mobility management − possibility to extend the fixed access to
mobility and broadband multimedia services delivery.

WiMAX Physical Layer (PHY):


For bands in the 10-66 GHz range, 802.16 defines one air interface, called WirelessMAN-SC.
For 2-11 GHz (both licensed and unlicensed), it defines:
●​ WirelessMAN-SCa (single-carrier modulation)
●​ WirelessMAN-OFDM (256-carrier OFDM, with access for different stations using TDMA)
●​ WirelessMAN-OFDMA (2048-carrier OFDM, assigning subsets of carriers to individual stations)
WiMAX PHY features include Adaptive Modulation and Coding (AMC), Hybrid Automatic Repeat
Request (HARQ), and the Channel Quality Indicator Channel (CQICH), which is a feedback channel.
All these features provide robust link adaptation in mobile environments at vehicular speeds in excess of
120 km/h.

WiMAX Medium Access Control (MAC):


i. Each subscriber station needs to compete for the medium only once (for initial network entry). Then, the WiMAX base station
provides time slots to each subscriber station, which may increase or decrease depending on need.
ii. There is a scheduling algorithm that serves each station. This algorithm is robust and not affected by
overloading and oversubscription (a toy request/grant sketch follows this list).
iii. WiMAX supports different transport technologies such as IPv4, IPv6 and Ethernet.
iv. WiMAX mesh networking allows subscriber stations to communicate with each other (“subscriber” mode)
and with the base station (“base station” mode) simultaneously.
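To make the request/grant idea in points i and ii concrete, here is a toy uplink scheduler in which the base station, not contention, decides who transmits. The frame size, the round-robin policy, and the station names are illustrative assumptions; real WiMAX schedulers are QoS-aware and vendor specific.

```python
# A toy request/grant scheduler: subscriber stations (SSs) report how many
# uplink slots they need, and the base station hands slots out round-robin
# until the (assumed) frame is full.

def grant_slots(requests: dict[str, int], slots_per_frame: int) -> dict[str, int]:
    """Distribute uplink slots one at a time until the frame is full."""
    grants = {ss: 0 for ss in requests}
    remaining = dict(requests)
    while slots_per_frame > 0 and any(remaining.values()):
        for ss in list(remaining):
            if slots_per_frame == 0:
                break
            if remaining[ss] > 0:
                grants[ss] += 1
                remaining[ss] -= 1
                slots_per_frame -= 1
    return grants

print(grant_slots({"SS1": 4, "SS2": 1, "SS3": 6}, slots_per_frame=8))
# -> {'SS1': 4, 'SS2': 1, 'SS3': 3}
```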
5. Spectrum Allocation for WiMAX:
i. The biggest spectrum segment for WiMAX is around 2.5GHz.
ii. The other bands are around 3.5 GHz, 2.3/2.5 GHz, and 5 GHz.

Mesh Mode:

Wireless mesh operation mode is one of the most effective network branches among the emerging technologies. This network can connect multiple wireless access points (known as nodes) and form a mesh network, which is a network of connections that provides broad coverage and enables multiple paths and routes of communication. It is able to balance the traffic load and provide support for fault tolerance, so that if a node goes down, the network can self-configure and self-heal to find alternative routes of access.

Fig Mesh mode in IEEE 802.16 (WiMAX)

WMNs can be seen as one type of MANETs. An ad-hoc network (possibly mobile) is a set of
network devices that want to communicate, but have no fixed infrastructure available and no
predetermined pattern of available communication links. The individual nodes of the network are
responsible for a dynamic discovery of the other nodes that can communicate directly with them, i.e. what
are their neighbors (forming a multi-hop network). Ad-hoc networks are chosen so that they can be used
in situations where the infrastructure is not available or unreliable, or even in emergency situations. A
mesh network is composed of multiple nodes / routers, which starts to behave like a single large network,
enabling the client to connect to any of them. In this way it is possible to transmit messages from one
node to another in different ways. Mesh type networks have the advantage of being low cost, easy to
deploy and reasonably fault tolerant.
In another analogy, a wireless mesh network can be regarded as a set of antennas, which are
spaced a certain distance from each other so that each covers a portion or area of a goal or region. A first
antenna covers an area, the second antenna covers a continuous area after the first and so on, as if it were
a tissue cell, or a spider web that interconnects various points and wireless clients. What is inside these
cells and covers the span of the antennas, can take advantage of the network services, provided that the
client has a wireless card with the interface technology.

Mesh networks have a dynamic topology that changes constantly as the network grows or shrinks.
They consist of nodes whose communication at the physical level occurs through
variants of the IEEE 802.11 and IEEE 802.16 standards, and whose routing is dynamic. The image below
shows an example of a mesh network. In mesh networks, the access points / base stations are usually
fixed.

To achieve these goals, WiMAX networks can be structured into two operating modes: PMP (Point-to-Multipoint) and mesh, and the second is the focus of this chapter. Mesh mode is a type of operation that can interconnect multiple mobile clients together with many WiMAX base stations (nodes) and form a network of connections so as to provide a wide coverage area for mobile clients. All the clients can communicate with each other, and there is no need for an intermediate node to act as the mediator of the network. In this mode, IEEE 802.16 can provide broadband wireless access in both single-hop and multi-hop settings.
Fig. Mesh network

WiMax Physical layer:


The PHY Layer establishes the physical connection between both sides, often in the two directions
(uplink and downlink). As 802.16 is evidently a digital technology, the PHYsical Layer is responsible for
transmission of the bit sequences. It defines the type of signal used, the kind of modulation and
demodulation, the transmission power and also other physical characteristics.
The 802.16 standard considers the frequency band 2-66GHz. This band is divided into two parts:
●​ The first range is between 2 and 11 GHz and is destined for NLOS transmissions. This was
previously the 802.16a standard. This is the only range presently included in WiMAX.
●​ The second range is between 11 and 66 GHz and is destined for LOS transmissions. It is not used
for WiMAX.
Five PHYsical interfaces are defined in the 802.16 standard.
FDD Mode
In an FDD system, the uplink and downlink channels are located on separate frequencies. A fixed
duration frame is used for both uplink and downlink transmissions. This facilitates the use of different
modulation types. It also allows simultaneous use of both full-duplex SSs, which can transmit and receive
simultaneously and, optionally, half-duplex SSs (H-FDD for Half-duplex Frequency Division Duplex),
which cannot. A full-duplex SS is capable of continuously listening to the downlink channel, while a
half-duplex SS can listen to the downlink channel only when it is not transmitting on the uplink channel.
The figure below illustrates the different cases of the FDD mode of operation.
Fig. FDD and TDD operation

When half-duplex SSs are used, the bandwidth controller does not allocate an uplink bandwidth for a
half-duplex SS at the same time as the latter is expected to receive data on the downlink channel,
including an allowance for propagation delay and the uplink/downlink transmission shift delays.

Fig. a) TDD b) FDD frame format


TDD Mode
In the case of TDD, the uplink and downlink transmissions share the same frequency but they take place
at different times. A TDD frame has a fixed duration and contains one downlink and one uplink subframe.
The frame is divided into an integer number of Physical Slots (PSs), which help to partition the bandwidth
easily. For OFDM and OFDMA PHYsical layers, a PS is defined as the duration of four modulation
symbols. The frame is not necessarily divided into two equal parts. The TDD framing is adaptive in that
the bandwidth allocated to the downlink versus the uplink can change. The split between the uplink and
downlink is a system parameter and the 802.16 standard states that it is controlled at higher layers within
the system.
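The adaptive downlink/uplink split can be illustrated with a small calculation over physical slots. The frame duration, symbol time, and 3:1 split below are assumed example values, not figures mandated by the standard.

```python
# A small illustration of the adaptive TDD split: the frame has a fixed
# duration, but the downlink/uplink boundary can move. Frame duration, symbol
# time, and the 3:1 split are illustrative assumptions.

SYMBOLS_PER_PS = 4  # for OFDM/OFDMA PHYs, one physical slot = 4 modulation symbols

def tdd_split(frame_ms: float, symbol_us: float, dl_fraction: float):
    """Return (downlink_PS, uplink_PS) for a TDD frame with the given DL share."""
    ps_duration_us = SYMBOLS_PER_PS * symbol_us
    total_ps = int((frame_ms * 1000) // ps_duration_us)
    dl_ps = int(total_ps * dl_fraction)
    return dl_ps, total_ps - dl_ps

# Example: 5 ms frame, 100 us OFDM symbols (assumed), 3:1 downlink-to-uplink split.
print(tdd_split(frame_ms=5.0, symbol_us=100.0, dl_fraction=0.75))  # -> (9, 3)
```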

WiMax MAC layer

The IEEE 802.16 MAC was designed for point-to-multipoint broadband wireless access applications. The primary task of the WiMAX MAC layer is to provide an interface between the higher transport layers and the physical layer.
The MAC layer takes packets from the upper layer, called MAC service data units (MSDUs), and organizes them into MAC protocol data units (MPDUs) for transmission over the air. For received transmissions, the MAC layer does the reverse.
The IEEE 802.16-2004 and IEEE 802.16e-2005 MAC design includes a convergence sublayer that can interface with a variety of higher-layer protocols, such as ATM, TDM voice, Ethernet, IP, and any unknown future protocol.
Fig. WiMax MAC layer
The 802.16 MAC is designed for point-to-multipoint (PMP) applications. Unlike the contention-based
CSMA/CA used in 802.11, it is connection-oriented and relies on a request/grant scheduling mechanism controlled by the base station.
The MAC incorporates several features suitable for a broad range of
applications at different mobility rates, such as the following −
●​ Privacy key management (PKM) for MAC layer security. PKM version 2 incorporates support for
extensible authentication protocol (EAP).
●​ Broadcast and multicast support.
●​ Manageability primitives.
●​ High-speed handover and mobility management primitives.
●​ Three power management levels, normal operation, sleep, and idle.
●​ Header suppression, packing and fragmentation for an efficient use of spectrum.
●​ Five service classes, unsolicited grant service (UGS), real-time polling service (rtPS),
non-real-time polling service (nrtPS), best effort (BE), and Extended real-time variable rate
(ERT-VR) service.
These features combined with the inherent benefits of scalable OFDMA make 802.16 suitable for
high-speed data and bursty or isochronous IP multimedia applications.
Support for QoS is a fundamental part of the WiMAX MAC-layer design. WiMAX borrows some of the
basic ideas behind its QoS design from the DOCSIS cable modem standard.
Strong QoS control is achieved by using a connection-oriented MAC architecture, where all downlink
and uplink connections are controlled by the serving BS.
WiMAX also defines a concept of a service flow. A service flow is a unidirectional flow of packets with
a particular set of QoS parameters and is identified by a service flow identifier (SFID).
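A minimal sketch of the service-flow idea follows: each unidirectional flow receives an SFID and one of the scheduling classes named above. The classification rule and the QoS parameter values are illustrative assumptions.

```python
# A minimal sketch of the service-flow idea: each unidirectional flow gets an
# SFID and a scheduling class with its own QoS parameters. The classes match
# those named in the text (UGS, rtPS, nrtPS, BE); the classification rule and
# parameter values are illustrative assumptions.

from dataclasses import dataclass
from itertools import count
from typing import Optional

_sfid_counter = count(1)

@dataclass
class ServiceFlow:
    sfid: int
    scheduling_class: str          # "UGS", "rtPS", "nrtPS" or "BE"
    max_rate_kbps: int
    max_latency_ms: Optional[int]  # None when latency is not constrained

def create_service_flow(app: str) -> ServiceFlow:
    """Map an application type to a service flow (toy classification rule)."""
    if app == "voip":
        return ServiceFlow(next(_sfid_counter), "UGS", 64, 20)
    if app == "video_stream":
        return ServiceFlow(next(_sfid_counter), "rtPS", 2000, 100)
    if app == "ftp":
        return ServiceFlow(next(_sfid_counter), "nrtPS", 5000, None)
    return ServiceFlow(next(_sfid_counter), "BE", 1000, None)

print(create_service_flow("voip"))
print(create_service_flow("web"))
```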

Wimax architecture:
Worldwide Interoperability of Microwave Access (WiMAX) is a fast-emerging wide-area wireless
broadband technology that shows great promise as the last mile solution for delivering high-speed Internet
access to the masses. It represents an inexpensive alternative to digital subscriber lines (DSL) and cable
broadband access, the installation costs for a wireless infrastructure based on IEEE 802.16 being far less
than today’s wired solutions, which often require laying cables and ripping up buildings and streets.
Wireless broadband access is set up like cellular systems, using base stations that service a radius
of several miles/kilometres. Base stations do not necessarily have to reside on a tower. More often than
not, the base station antenna will be located on a rooftop of a tall building or other elevated structure such
as a grain silo or water tower. A customer premise unit, similar to a satellite TV setup, is all it takes to
connect the base station to a customer. The signal is then routed via standard Ethernet cable either directly
to a single computer, or to an 802.11 hot spot or a wired Ethernet LAN.
The original 802.16 standard operates in the 10-66GHz frequency band and requires line-of-sight
towers. The 802.16a extension, ratified in January 2003, uses a lower frequency of 2-11GHz, enabling
nonline-of-sight connections. This constitutes a major breakthrough in wireless broadband access,
allowing operators to connect more customers to a single tower and thereby substantially reduce service
costs.

The IEEE 802.16-2004 standard (also known as 802.16REVd) subsequently revised and consolidated the earlier IEEE 802.16 and
802.16a versions. It is designed for fixed-access usage models. This standard may be referred to as fixed
wireless because it uses a mounted antenna at the subscriber’s site. The antenna is mounted to a roof or
mast, similar to a satellite television dish. IEEE 802.16-2004 also addresses indoor installations, in which
case it may not be as robust as in outdoor installations.
The IEEE 802.16e standard is an amendment to the 802.16-2004 base specification and targets
the mobile market by adding portability and the ability for mobile clients with appropriate adapters to
connect directly to a WiMAX network.

Wireless Local Loop


A local loop is a circuit line from a subscriber’s phone to the local central office (LCO). However, implementing
the local loop with wires is risky for operators, especially in rural and remote areas, due to
the small number of users and the high cost of installation. Hence, the solution is to use a wireless
local loop (WLL) which uses wireless links rather than copper wires to connect subscribers to the local
central office.

Wireless Local Loop Architecture:

Fig. Wireless Local Loop WLL architecture


WLL components:
1.​ PSTN:​
It is a Public Switched Telephone Network which is a circuit switched network. It is a collection
of the world's interconnected circuit switched telephone networks.
2.​ Switch Function:​
Switch Function switches the PSTN among various WANUs.
3.​ WANU:
• It provides various functionalities like:
●​ Authentication
●​ Air interface privacy
●​ Over-the-air registration of subscriber units.
●​ Operations and Maintenance
●​ Routing
●​ Billing
●​ Switching functions
●​ Transcoding of voice and data.

It is short for Wireless Access Network Unit. It is present at the local exchange office. All local
WASUs are connected to it. Its functions include: Authentication, Operation & maintenance,
Routing, Transceiving voice and data. It consists of following sub-components:
○​ Transceiver: It transmits/receives data.
○​ WLL Controller: It controls the wireless local loop component with WASU.
○​ AM: It is short for Access Manager. It is responsible for authentication.
○​ HLR: It is short for Home Location Register. It stores the details of all local WASUs.
4.​ WASU:​
It is short for Wireless Access Subscriber Units. It is present at the house of the subscriber. It
connects the subscriber to WANU and the power supply for it is provided locally.
It provides an air interface UWLL towards the network and a traditional interface TWLL
towards the subscriber.
• The power supply for it is provided locally.
• The interface includes
●​ protocol conversion and transcoding
●​ authentication functions
●​ signaling functions
• The TWLL interface can be an RJ-11 or RJ-45 port.
• The UWLL interface can be AMPS, GSM, DECT and so on.
• Switching Function (SF): The switching function (SF) is associated with a switch that can be a
digital switch with or without Advanced Intelligent Network (AIN) capability, an ISDN switch or
a Mobile Switching Centre (MSC).
• The AWLL interface between the WANU and the SF can be ISDN-BRI or IS-634 or IS-653 or
such variants.​

WLL configuration:
BTS (base Transceiver Station)
FSU(Fixed Subscriber Unit)
Loop: In telephony, a loop is a circuit line from a subscriber’s phone to the line-terminating equipment at a
central office.
• Implementing a local loop, especially in rural areas, used to be a risk for many operators due to
few users and the increased cost of materials. The loop lines are copper wires, which require more
investment.
• However today with Wireless local loop (WLL) has been introduced which solves most of these
problems.
• As WLL is wireless, the labor-charges and time-consuming investments are no longer relevant.
• WLL systems can be based on one of the four below technologies:
1.​ Satellite-based systems.
2.​ Cellular-based systems.
3.​ Microcellular-based Systems
4.​ Fixed Wireless Access Systems

Deployment Issues:
• To compete with other local loop technologies, WLL needs to provide sufficient coverage and
capacity, high circuit quality and efficient data services.
• Moreover, the WLL cost should be competitive with its wireline counterpart.
• Various issues are considered in WLL development which include:
1.​ Spectrum: The implementation of WLL should be flexible enough to accommodate different frequency
bands as well as non-contiguous bands. Moreover, these bands are licensed by the government.
2.​ Service quality: Customer expects that the quality of service should be better than the wireline
counterpart. The quality requirements include link quality, reliability and fraud immunity.
3.​ Network Planning: Unlike a mobile system, WLL assumes that the user is stationary, not moving.
Also, the network penetration should be greater than 90%. Therefore, WLL should be planned
based on parameters such as population density.
4.​ Economics: The major cost here is the electronic equipment. In the current scenario, the cost of such
electronic equipment is steadily decreasing.
• In traditional telephone networks, your phone would be connected to the nearest exchange through a
pair of copper wires.
• Wireless local loop (WLL) technology simply means that the subscriber is connected to the nearest
exchange through a radio link instead of through these copper wires.
Fig. WLL configuration
Advantages of WLL:
○​ It eliminates the first mile or last mile construction of the network connection.
○​ Low cost due to no use of conventional copper wires.
○​ Much more secure due to digital encryption techniques used in wireless communication.
○​ Highly scalable as it doesn’t require the installation of more wires for scaling it.
Features of WLL:
○​ Internet connection via modem
○​ Data service
○​ Voice service
○​ Fax service
1. Introduction

Wireless communication technologies have transformed the way devices and networks
exchange information, enabling innovative applications across diverse domains—from
consumer electronics to industrial automation and vehicular communications. This chapter
provides a comprehensive overview of four key areas in short-range wireless and ad hoc
networking:

1. IEEE 802.15.1 (Bluetooth): The de facto standard for short-range device-to-device


connectivity, exploring piconet and scatternet topologies, along with a detailed
breakdown of the Bluetooth protocol stack.

2. IEEE 802.15.4 (ZigBee): A popular Low-Rate Wireless Personal Area Network (LR-WPAN)
specification optimized for low-power, low-data-rate sensor and control applications.

3. Wireless Sensor Networks (WSN): Architectures, design considerations, challenges, and


diverse application domains where WSNs play a crucial role.

4. Ad Hoc Networks: Focusing on Mobile Ad Hoc Networks (MANETs) and Vehicular Ad Hoc
Networks (VANETs), as well as the emerging Electrical Vehicular Ad Hoc Networks (E-VANET).

Each section delves into the technical details, operational mechanisms, and contemporary
developments, emphasizing their significance in modern communication systems. By the end of
this chapter, readers will have an in-depth understanding of how these technologies function,
the challenges they address, and the avenues they open for future innovations.

2. IEEE 802.15.1 (Bluetooth)

Bluetooth, standardized under IEEE 802.15.1, is a short-range wireless technology designed for
low-power, low-cost communication among devices. Its core value lies in simplifying the
exchange of information—such as audio, data, and control signals—over short distances,
typically within 10 meters (Class 2 devices) or 100 meters (Class 1 devices). Since its inception,
Bluetooth has evolved through multiple versions (e.g., Bluetooth 5.x) to address demands for
higher throughput and improved energy efficiency.

2.1 Piconet

A piconet is the fundamental network topology in Bluetooth. It comprises one device acting as a
master and up to seven active slave devices. This master-slave relationship is central to how
devices coordinate access to the shared medium.
1. Definition and Structure:

o A piconet is formed when a Bluetooth device (master) initiates a connection with


one or more devices (slaves).

o The master is responsible for timing and control. It defines the frequency-
hopping sequence and timing structure that all slaves must follow.

o Up to eight devices can be actively involved in the piconet (one master plus
seven slaves). Additional devices may be parked or held in low-power states,
waiting for scheduling.

2. Formation:

o Inquiry and Paging: When a device wants to join or create a piconet, it performs
an inquiry to discover nearby devices. The paging procedure follows to establish
a synchronized connection.

o Synchronization: The master sends out signals (frequency hops and timing
beacons), which the slaves use to synchronize their clocks and frequency
hopping.

3. Operational Mechanism:

o Time Division Duplex (TDD): Bluetooth uses a time-division approach. Each


packet slot is 625 microseconds, and the master and slave alternate sending and
receiving.

o Frequency Hopping Spread Spectrum (FHSS): Bluetooth operates in the 2.4 GHz
ISM band and uses adaptive frequency hopping to mitigate interference. The
piconet hops through 79 channels (in most regions) at a rate of 1600 hops per
second (a toy slot-and-hop simulation follows this list).

o Polling by the Master: The master polls slaves in a round-robin or priority-based


manner. Slaves only transmit when polled, preserving an organized channel
access.
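The slot timing and channel hopping described above can be illustrated with a toy simulation. The hop selection below is a plain pseudo-random generator seeded by an assumed master clock value, not the actual Bluetooth hop-selection kernel.

```python
# A toy illustration of piconet timing: 625 us slots that alternate
# master->slave and slave->master, with a pseudo-random hop over the 79
# channels of the 2.4 GHz band. The hop sequence here is only a stand-in for
# the real Bluetooth hop-selection algorithm.

import random

SLOT_US = 625          # one Bluetooth time slot
NUM_CHANNELS = 79      # 1 MHz channels in most regions (1600 hops/s = 1/625 us)

def simulate_slots(master_clock_seed: int, num_slots: int):
    rng = random.Random(master_clock_seed)  # all members derive hops from the master
    for slot in range(num_slots):
        channel = rng.randrange(NUM_CHANNELS)
        direction = "master->slave" if slot % 2 == 0 else "slave->master"
        print(f"t={slot * SLOT_US:>5} us  ch={channel:2d}  {direction}")

simulate_slots(master_clock_seed=42, num_slots=6)
```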

2.2 Scatternet
A scatternet is formed when multiple piconets overlap or interconnect. While the piconet
structure is straightforward, building a scatternet adds complexity and requires devices to
operate in multiple piconets concurrently.

1. Concept and Formation:

o Multiple Piconets: A device in one piconet may act as a slave in another piconet
or even take on the role of master. This multi-role capability allows Bluetooth
networks to scale beyond eight devices.

o Bridging: Devices that participate in more than one piconet are called bridge
devices. They forward data between piconets, effectively linking them into a
scatternet.

2. Interoperability:

o Common Protocol Stack: All devices adhere to the same Bluetooth protocol
stack. This uniformity enables interoperability as long as roles and timing are
managed correctly.
o Scheduling Complexity: A bridge device must synchronize with two or more sets
of frequency-hopping sequences. This often leads to scheduling challenges that
require careful time-slot allocation to avoid collisions.

3. Challenges:

o Resource Constraints: Bridge devices handle more communication overhead,


potentially increasing power consumption and latency.

o Complex Routing: While Bluetooth primarily uses a star topology within a


piconet, scatternets introduce mesh-like interactions, demanding more complex
routing strategies.

o Scalability: As more piconets join a scatternet, coordinating frequency hops and


schedules becomes increasingly difficult.

2.3 Protocol Stack

The Bluetooth protocol stack is typically divided into core protocols, cable replacement and
telephony control protocols, and adopted protocols. This layered architecture provides a
structured way to manage everything from low-level radio frequency operations to high-level
application interactions.

1. Radio Layer:

o Operates in the 2.4 GHz ISM band.


o Defines the physical characteristics (modulation scheme, transmit power).

o Uses Gaussian Frequency Shift Keying (GFSK) or other enhanced modulation


schemes (e.g., π/4 DQPSK in Bluetooth EDR).

2. Baseband Layer:

o Handles packet framing, timing, addressing, and the fundamental Time-Division Duplex operation.

o Implements frequency hopping and ensures the synchronization of devices in a


piconet.

3. Link Manager Protocol (LMP):

o Responsible for link setup, security, and control.

o Manages pairing, authentication, encryption, and low-power modes like sniff,


hold, and park.

4. Host Controller Interface (HCI):

o Acts as a boundary between the Bluetooth controller (radio, baseband, LMP)


and the host (upper layers running on a separate processor).

o Allows standardized commands from the host to control lower layers.

5. Logical Link Control and Adaptation Protocol (L2CAP):

o Provides multiplexing of higher-level protocols (e.g., SDP, RFCOMM).

o Handles segmentation and reassembly of data packets.

o Supports Quality of Service (QoS) provisions for different traffic types.

6. RFCOMM (Radio Frequency Communication):

o A serial port emulation protocol that enables legacy applications to run over
Bluetooth as if they were using a standard serial link.

o Often used for dial-up networking, data transfer between PCs and phones, etc.

7. Service Discovery Protocol (SDP):

o Allows devices to discover services offered by other Bluetooth devices.

o Provides a mechanism to query device capabilities (e.g., audio profile, headset


profile).
8. Profiles and Applications:

o Bluetooth Profiles define standardized configurations for specific use-cases (e.g.,


A2DP for streaming audio, HID for keyboards/mice).

o Applications build on these profiles to ensure interoperability and consistent


user experiences.

3. IEEE 802.15.4 (ZigBee)

Designed for low-power, low-data-rate applications, IEEE 802.15.4 underpins the ZigBee
standard, offering a robust foundation for sensor networks, industrial control, and home
automation. ZigBee extends the baseline physical (PHY) and medium access control (MAC)
specifications in IEEE 802.15.4 with a defined network layer and application framework, making
it a popular choice for wireless monitoring and control solutions.

3.1 LR-WPAN Device Architecture

ZigBee devices operate in Low-Rate Wireless Personal Area Networks (LR-WPANs). Their
architecture is optimized to use minimal power and handle small data bursts typical of sensing
or control signals.

1. Node Types:

o Full Function Device (FFD): Can serve as a coordinator or router and


communicate with any other device. Maintains a complete protocol set and can
handle routing tasks.

o Reduced Function Device (RFD): Typically a simple sensor or actuator with


limited capabilities. Communicates only with coordinators or routers and cannot
forward traffic.

2. Functional Blocks:

o Radio Transceiver: Compliant with IEEE 802.15.4 PHY, typically using DSSS (Direct
Sequence Spread Spectrum) in the 2.4 GHz ISM band or sub-GHz bands (868 MHz
in Europe, 915 MHz in North America).

o Microcontroller (MCU): Runs the ZigBee stack (network and application layers)
and handles local processing.
o Sensors/Actuators: Interface with the physical environment (e.g., temperature
sensor, LED actuator).

3. Communication Model:

o Star Topology: A coordinator acts as the central node, with end devices
connecting to it directly.

o Peer-to-Peer (Mesh) Topology: Multiple coordinators and routers form a mesh,


offering self-healing routes and robust coverage over larger areas.

3.2 Protocol Stack

ZigBee’s protocol stack, built on top of IEEE 802.15.4, consists of a Physical layer, MAC layer,
Network layer, and Application layer (including application support sub-layer and the ZigBee
Device Object, ZDO).

1. Physical Layer (PHY):

o Defines transmit power, frequency channels, and modulation schemes (e.g., O-QPSK at 2.4 GHz).

o Typical data rates: 250 kbps at 2.4 GHz, lower in sub-GHz bands.

o Low-power design with options for rapid sleep/wake cycles.

2. MAC Layer:
o Responsible for channel access, beaconing, frame validation, and
acknowledgments.

o Uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) to share the radio channel fairly among devices (a simplified backoff sketch follows this list).

o Supports Guaranteed Time Slots (GTS) for time-critical or guaranteed-bandwidth


traffic in beacon-enabled modes.

3. Network Layer:

o Formation and Maintenance: Handles network address assignments, route


discovery, and route maintenance.

o Routing Protocol: Often utilizes AODV (Ad hoc On-Demand Distance Vector) or
table-driven variants adapted for low-power mesh topologies.

o Security Services: Employs AES-128 encryption, key management, and secure


frame transmission for confidentiality and integrity.

4. Application Support Sub-layer (APS):

o Acts as an interface between the Network layer and the Application layer.

o Responsible for data distribution, binding (mapping one device’s output to


another’s input), and group addressing.

5. ZigBee Device Object (ZDO):

o Manages device roles (e.g., coordinator, router, end device) and network
functions (e.g., discovery of other devices, initiating or joining a network).

o Coordinates security and manages authentication and key exchange.

6. Application Framework:

o Includes clusters, which are function-specific commands and attributes (e.g.,


lighting cluster, thermostat cluster).

o Ensures interoperable services for home and industrial automation by defining


standard devices and functionalities.
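To illustrate the CSMA/CA channel access mentioned in the MAC layer above, here is a simplified sketch of the unslotted (non-beacon) backoff procedure. The backoff constants follow common 802.15.4 defaults, but the random busy/idle channel model is an assumption made only for demonstration.

```python
# A simplified sketch of unslotted CSMA/CA backoff as used by the 802.15.4 MAC
# in beaconless mode. The channel model (random busy/idle) is an illustrative
# assumption.

import random

MAC_MIN_BE = 3          # minimum backoff exponent
MAC_MAX_BE = 5          # maximum backoff exponent
MAX_CSMA_BACKOFFS = 4   # attempts before reporting channel-access failure

def channel_is_idle() -> bool:
    return random.random() > 0.3   # assumed 30% chance the channel is busy

def csma_ca_transmit() -> bool:
    """Return True if the frame was sent, False on channel-access failure."""
    backoff_exponent = MAC_MIN_BE
    for _ in range(MAX_CSMA_BACKOFFS + 1):
        wait_periods = random.randint(0, 2 ** backoff_exponent - 1)
        # ... wait for wait_periods backoff periods, then do a clear-channel assessment
        if channel_is_idle():
            return True                      # channel clear: transmit the frame
        backoff_exponent = min(backoff_exponent + 1, MAC_MAX_BE)
    return False

print("frame sent" if csma_ca_transmit() else "channel access failure")
```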

The ZigBee Alliance maintains and updates specifications (e.g., ZigBee Pro, ZigBee 3.0), focusing
on interoperability and backward compatibility. In modern IoT ecosystems, ZigBee competes
with other low-power standards like Thread and BLE (Bluetooth Low Energy), but it remains
widely adopted in large-scale sensor and control networks.
4. Wireless Sensor Networks (WSN)

Wireless Sensor Networks (WSNs) are distributed networks of sensor nodes that autonomously
monitor environmental or system parameters (e.g., temperature, vibration, chemical
concentrations) and communicate the collected data to a central sink or base station. These
networks find applications in critical areas such as industrial automation, agriculture, defense,
and healthcare, where large-scale, real-time monitoring is essential.

4.1 Design Considerations

1. Energy Efficiency:

o Limited Power Source: WSN nodes often run on batteries or energy-harvesting systems (solar, vibration, thermal). Prolonged operation demands ultra-low-power circuit design and energy-optimized communication protocols.

o Duty Cycling: Nodes periodically switch between active and sleep modes to
conserve energy. Protocols must coordinate wake-up schedules to ensure data
collection and delivery (a back-of-the-envelope lifetime sketch follows this list).
2. Scalability:

o Large Deployments: Networks can comprise hundreds or thousands of nodes.


Routing protocols and data aggregation techniques must handle large node
populations efficiently.

o Adaptive Topologies: WSN architectures must accommodate node failures,


mobility, or new node deployments without manual reconfiguration.

3. Reliability:

o Harsh Environments: Nodes may operate under extreme temperatures,


humidity, or mechanical stress. Redundancy and robust error-correction
mechanisms are critical.

o Fault Tolerance: A node or link failure should not collapse the network. Mesh
connectivity and dynamic rerouting help maintain service continuity.

4. Data Aggregation:

o Traffic Reduction: Aggregation techniques combine or summarize sensor


readings at intermediate nodes to minimize redundant transmissions and
conserve energy.

o Temporal and Spatial Correlation: Sensors close to each other often produce
correlated data, which can be compressed or filtered before transmission.
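A back-of-the-envelope sketch of the duty-cycling point above: average current, and hence battery lifetime, is dominated by how often the node wakes up. All capacity and current figures below are assumed example values, not measurements of any particular node.

```python
# Duty-cycling estimate: lifetime ~ battery capacity / average current, where
# the average current is a weighted mix of active and sleep currents. All
# numbers are illustrative assumptions.

def battery_lifetime_days(capacity_mah: float, active_ma: float,
                          sleep_ma: float, duty_cycle: float) -> float:
    """Estimated lifetime given the fraction of time the node is active."""
    avg_current_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_current_ma / 24.0

# 2400 mAh battery, 20 mA when sensing/transmitting, 5 uA asleep, 1% duty cycle.
print(round(battery_lifetime_days(2400, 20.0, 0.005, 0.01), 1))  # ~488 days
```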

4.2 Issues and Challenges

1. Power Constraints:

o Limited Battery Life: Frequent communication or sensing quickly depletes


batteries. Energy-harvesting approaches introduce complexity in node hardware
and management.

o Efficient MAC and Routing Protocols: Protocols like S-MAC, T-MAC, or duty-
cycling mechanisms are specifically designed to reduce collision and idle
listening.

2. Security Concerns:

o Resource Limitations: Strong encryption or multi-step authentication can be


computationally expensive. Lightweight security protocols must balance security
with performance.
o Physical Vulnerability: Sensor nodes in open environments can be tampered
with or destroyed, leading to compromised keys or false data injection.

3. Data Reliability and Quality of Service (QoS):

o Unreliable Links: Wireless links in WSNs can be prone to high bit error rates or
interference. Retransmission and error correction schemes should be optimized
for energy usage.

o Prioritizing Critical Data: Certain data (e.g., alarm signals) may require higher
priority and guaranteed delivery.

4. Coverage Gaps:

o Deployment Challenges: Random deployment methods (e.g., aerial scattering of


sensor nodes) can lead to uneven coverage.

o Dynamic Environments: Changes in terrain or obstacles can create coverage


holes. Adaptive algorithms for coverage maintenance are necessary.

4.3 WSN Architecture

A typical WSN architecture includes:

1. Sensing Nodes (End Devices):

o Collect environmental data (temperature, humidity, motion, etc.).

o Implement low-power radio transceivers for inter-node communication.


2. Cluster Heads (Routers):

o Aggregate and forward data from a group of nodes to the base station.

o Often equipped with more computational resources and energy reserves than
end devices.

o Perform local data processing or filtering to reduce network traffic (an aggregation sketch follows this list).

3. Base Station (Sink):

o Gathers data from cluster heads or directly from sensor nodes.

o Connects WSN data to the backend network, e.g., the internet or a local server
for data storage and analysis.

o Acts as a control center, sending commands or queries to sensor nodes.

4. Topology Types:

o Star Topology: Simple, each node communicates directly with a central


coordinator (base station). Suitable for small-scale or short-range WSNs.

o Clustered Topology: The network is partitioned into clusters, each managed by a


cluster head that aggregates local sensor data.

o Mesh Topology: Nodes dynamically forward data toward the base station via
multiple hops. Offers fault tolerance and scalability.
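The cluster-head role can be illustrated with a toy aggregation function that replaces many raw readings with one summary per reporting interval. The field names and summary statistics are illustrative assumptions.

```python
# A minimal cluster-head aggregation sketch: instead of forwarding every raw
# reading, the cluster head sends one summary per interval. Field names and
# statistics are illustrative assumptions.

from statistics import mean

def aggregate(readings: list[dict]) -> dict:
    """Summarise one interval of temperature readings from cluster members."""
    temps = [r["temp_c"] for r in readings]
    return {
        "cluster": readings[0]["cluster"],
        "n_nodes": len(readings),
        "temp_avg": round(mean(temps), 2),
        "temp_min": min(temps),
        "temp_max": max(temps),
    }

interval = [
    {"cluster": "C1", "node": 1, "temp_c": 21.4},
    {"cluster": "C1", "node": 2, "temp_c": 21.9},
    {"cluster": "C1", "node": 3, "temp_c": 22.3},
]
print(aggregate(interval))  # one packet to the sink instead of three
```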

4.4 Applications

1. Industrial Automation:

o Condition Monitoring: WSNs detect mechanical anomalies (vibration,


temperature rise) in machinery to schedule predictive maintenance.

o Process Control: Sensors gather real-time data from assembly lines, adjusting
production parameters to optimize yield and quality.

2. Environmental Monitoring:

o Agriculture: Nodes measure soil moisture, temperature, and nutrients to guide


irrigation and fertilization.

o Wildlife Tracking: WSNs help track animal migration and habitat conditions with
minimal human intervention.

3. Healthcare:
o Patient Monitoring: Wearable sensors transmit vitals (heart rate, blood pressure)
in real-time to a central system.

o Assisted Living: WSNs help elderly or disabled individuals by detecting falls or


anomalies, automatically alerting caregivers.

4. Military Applications:

o Battlefield Surveillance: WSNs provide tactical awareness of troop or vehicle


movements, infiltration attempts, and environmental hazards.

o Infrastructure Protection: Sensor nodes detect intrusions or sabotage attempts


in high-security perimeters.

WSNs will continue to evolve with the integration of machine learning and edge computing,
enabling more autonomous and intelligent sensor networks capable of local decision-making
while minimizing communication overhead.

5. Ad Hoc Networks

An ad hoc network is a self-configuring network of wireless nodes that collaborate to forward


packets for each other without relying on a fixed infrastructure. This decentralized approach
allows rapid deployment in environments where traditional network installations are
impractical—such as disaster zones or moving vehicles.

5.1 Introduction to MANET and VANET

Mobile Ad Hoc Network (MANET) is a network of mobile devices forming a temporary network
without any fixed infrastructure or centralized administration. Each node can function as both
an end device and a router, discovering routes dynamically as topology changes.

Characteristics of MANETs

1. Self-Organization: Nodes join or leave at will, automatically adjusting network


configuration and routes.

2. Dynamic Topology: Frequent node mobility changes link availability and routing paths.

3. Multi-hop Communication: Data travels through multiple nodes before reaching its
destination.

4. Decentralization: No single point of failure or central authority, increasing robustness


but complicating management.
Applications of MANETs

1. Disaster Recovery: Communication infrastructure can be quickly established among


rescue teams when conventional networks are down.

2. Battlefield Communications: Military units deploy MANETs to coordinate troop


movements and share real-time intelligence.

3. Temporary Events: Large gatherings or conferences can use MANETs to provide localized
communication services.

Vehicular Ad Hoc Network (VANET) focuses on communication among vehicles and between
vehicles and roadside infrastructure. It leverages wireless interfaces (commonly IEEE 802.11p or
Cellular V2X) to enable applications such as collision warnings, traffic condition updates, and
infotainment.

Characteristics of VANETs

1. High Mobility: Vehicles move rapidly, causing frequent changes in network topology.

2. Predictable Patterns: Vehicle movement often follows road layouts, offering some
degree of predictability in routing.
3. Low Communication Latency Requirements: Safety applications require fast data
exchange (e.g., braking or hazard alerts).

4. Decentralized Control: VANETs typically operate without a single controlling entity,


though roadside units (RSUs) can provide partial coordination.

Applications of VANETs

1. Intelligent Transportation Systems (ITS): Enhances traffic flow, congestion management,


and route planning.

2. Real-Time Vehicular Safety Systems: Vehicles share speed, brake status, and sensor data
to prevent collisions.

3. Infotainment Services: Location-based services, streaming content, and advertising


delivered on the move.

5.2 Advantages and Limitations

1. Advantages:

o Rapid Deployment: Ad hoc networks can be set up quickly without extensive


infrastructure.

o Flexibility: Nodes can roam freely while maintaining network connectivity.

o Infrastructure Independence: Operates in remote or infrastructure-less


environments.
2. Limitations:

o Routing Complexity: Frequent topology changes require sophisticated, adaptive routing protocols (e.g., AODV, DSR, OLSR); a simplified route-discovery sketch follows this list.

o Security Vulnerabilities: Open wireless medium is susceptible to eavesdropping,


spoofing, and denial-of-service attacks.

o QoS Concerns: Ensuring reliability and consistent performance in bandwidth-constrained, mobile environments is challenging.
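As a concrete illustration of reactive route discovery in a MANET, the sketch below floods a route request through a static snapshot of the topology, in the spirit of AODV/DSR but greatly simplified; the topology and node names are made up.

```python
# A highly simplified, AODV-flavoured route discovery over a static snapshot of
# a MANET topology: flood a route request (BFS) from source to destination and
# return the discovered path. Real protocols also handle sequence numbers,
# route maintenance, and link breaks.

from collections import deque

def discover_route(links: dict[str, set[str]], src: str, dst: str):
    """Breadth-first flood of an RREQ; returns the hop-by-hop path or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for neighbour in links.get(node, set()):
            if neighbour not in parent:        # each node rebroadcasts an RREQ once
                parent[neighbour] = node
                queue.append(neighbour)
    return None

topology = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"},
            "D": {"B", "C", "E"}, "E": {"D"}}
print(discover_route(topology, "A", "E"))  # -> ['A', 'B', 'D', 'E'] (or via 'C')
```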

6. Overview of E-VANET (Electrical Vehicular Ad Hoc Networks)

The drive toward sustainable transportation has introduced Electric Vehicles (EVs) into the
vehicular ecosystem. As VANET technology evolves, the concept of Electrical Vehicular Ad Hoc
Networks (E-VANET) has emerged, integrating EV-specific needs such as charging infrastructure
communication and range management into the ad hoc network model.

6.1 Concept: Integration of EVs into VANET

E-VANET extends VANET architecture to address EV-centric requirements:

1. Charging Station Awareness: Nodes (EVs) must locate nearby charging stations and
verify availability or waiting times. E-VANET protocols can broadcast or request charging
station status to optimize route planning.

2. Energy Constraints: EVs have limited battery capacity, making efficient route selection
and recharging strategies critical for seamless travel.

3. Infrastructure Cooperation: Roadside units or smart city infrastructure can coordinate energy demands, guide vehicles to the least congested charging stations, and even manage load across the electrical grid.

6.2 Unique Characteristics of E-VANET

1. Energy Efficiency and Range Management:

o Real-time Range Estimation: Vehicles exchange information on battery status, traffic, and road conditions to refine range estimates.

o Cooperative Routing: Nodes can dynamically route data to maintain connectivity while preserving battery life for both traction and communication subsystems.

2. Charging Station Communication:

o Reservation Mechanisms: E-VANET can implement reservation protocols that let vehicles pre-book charging slots, reducing wait times and congestion at stations.

o Payment and Authentication: Secure transactions for charging fees can be facilitated by the network using digital certificates or token-based systems.

3. Interoperability with Smart Grids:

o Vehicle-to-Grid (V2G): EVs can potentially feed power back into the grid during peak demand or store excess renewable energy. E-VANET ensures real-time coordination for these transactions.

o Grid Balancing: Utility providers and vehicles communicate to balance load, optimize charging schedules, and reduce peak load pressures.

6.3 Emerging Applications

1. Smart Grid Integration:

o Load Management: Utilities communicate with EVs to manage charging during off-peak hours or integrate renewable energy sources efficiently.

o Dynamic Pricing: Real-time electricity pricing signals can encourage off-peak charging, lowering costs for EV owners.

2. EV Route Optimization:

o Multi-Criteria Routing: Routing decisions consider not just traffic but also battery level, charging station density, and expected queue times (a toy scoring sketch follows this list).

o Crowdsourced Data: Vehicles share current station availability, estimated waiting time, and route conditions in real time.

3. Cooperative Driving Support:

o Platooning: EVs traveling together in close proximity can reduce aerodynamic drag, improving energy efficiency.

o Collision Avoidance: VANET safety features (e.g., broadcast of sudden braking) remain critical, with E-VANET also factoring in energy-efficient maneuvers.
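The following is a minimal, purely illustrative sketch of the multi-criteria routing idea. The weights, attribute names and scoring formula are assumptions made for illustration only; they are not taken from any E-VANET standard or protocol.

# Toy multi-criteria route score for an EV (illustrative only; the weights and the
# normalization below are assumptions, not part of any standardized E-VANET scheme).
def route_score(travel_min, battery_needed_pct, battery_available_pct,
                stations_on_route, expected_queue_min,
                w_time=1.0, w_energy=2.0, w_queue=0.5):
    """Lower score is better; routes the battery cannot cover are rejected outright."""
    if battery_needed_pct > battery_available_pct:
        return float("inf")                          # infeasible without an extra charging stop
    station_penalty = 1.0 / (1 + stations_on_route)  # fewer stations along the route -> higher risk
    return (w_time * travel_min
            + w_energy * battery_needed_pct * station_penalty
            + w_queue * expected_queue_min)

if __name__ == "__main__":
    candidates = {
        "highway": route_score(40, 35, 50, stations_on_route=4, expected_queue_min=10),
        "city":    route_score(55, 20, 50, stations_on_route=1, expected_queue_min=25),
    }
    best = min(candidates, key=candidates.get)
    print("Chosen route:", best, candidates)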

E-VANET stands at the intersection of smart mobility and sustainable energy. Although it
inherits many technical foundations from conventional VANETs, its specialized focus on energy
management and charging infrastructure communication positions it as a key enabler for the
widespread adoption of EVs.
7. Conclusion

Short-range wireless technologies and ad hoc networks form the backbone of modern
distributed communication systems, enabling an array of applications from simple data
exchange between personal devices to complex, mission-critical functionalities like industrial
process control and vehicular safety.

• IEEE 802.15.1 (Bluetooth) remains a critical standard for consumer electronics, wearable
devices, and personal area networks, employing piconets and scatternets to extend
scalability.

• IEEE 802.15.4 (ZigBee) provides a power-efficient solution for LR-WPANs, powering large-scale sensor and control networks with robust mesh capabilities, security, and interoperability.

• Wireless Sensor Networks (WSNs) build upon low-power hardware and distributed
intelligence to sense and act upon environments, finding utility in industrial,
environmental, healthcare, and military applications. While they offer significant
benefits in real-time monitoring, they also face design challenges related to energy
constraints, security, and reliability.

• Ad Hoc Networks, particularly MANETs and VANETs, showcase the power of self-
organizing, decentralized communication in scenarios where infrastructure is unavailable
or infeasible. They facilitate rapid deployment and flexible connectivity while grappling
with issues like routing complexity, security, and QoS management.

• E-VANET extends the VANET paradigm to support electric vehicles, incorporating charging station communication, energy-aware routing, and integration with smart grids. This emerging concept underlines the growing intersection of sustainable energy and intelligent transportation.

Future innovations will likely converge these technologies, leveraging the strengths of each
while addressing lingering limitations. Advanced security frameworks, machine learning–driven
route optimization, and enhanced energy management schemes will drive next-generation
wireless systems toward increasingly autonomous, scalable, and resilient networks. Engineers
and researchers in the field must remain vigilant about evolving standards, cross-technology
interoperability, and the overarching goal of efficiency and reliability for both current and future
applications.
Module No. 1

Fundamentals of Wireless Communication


Introduction to Wireless Communication, Advantages, Disadvantages and Applications;
Multiple Access Techniques - FDMA, TDMA, CDMA, OFDMA; Spread Spectrum
Techniques – DSSS, FHSS; Evolution of wireless generations – 1G to 5G (Based on
technological differences and advancements); 5G – Key requirements and drivers of 5G
systems, Use cases, Massive MIMO. Self-learning Topics: Modulation Techniques - QAM,
MSK, GMSK

Introduction to Wireless Communication:

The interconnection of systems, people or things with the help of a communication medium can be referred to as a network. The type of communication that uses electromagnetic waves as the communication medium for transmitting and receiving data or voice is called wireless communication.
Wireless communication is a broad term that incorporates all procedures and forms of
connecting and communicating between two or more devices using a wireless signal through
wireless communication technologies and devices. Wireless communication involves the
transmission of information over a distance without the help of wires, cables or any other
forms of electrical conductors.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
●​ The transmitted distance can be anywhere between a few meters (for example, a
television's remote control) and thousands of kilometers (for example, radio
communication).
●​ Wireless communication can be used for cellular telephony, wireless access to the
internet, wireless home networking, and so on.
●​ Other examples of applications of radio wireless technology include GPS units, garage
door openers, wireless computer mice, keyboards and headsets, headphones, radio
receivers, satellite television, broadcast television and cordless telephones.
Wireless - Advantages
Wireless communication involves transfer of information without any physical connection between two
or more points. Because of this absence of any 'physical infrastructure', wireless communication has
certain advantages. This would often include collapsing distance or space.
Wireless communication has several advantages; the most important ones are discussed below −
●​ Cost effectiveness
Wired communication entails the use of connection wires. In wireless networks, communication does not
require elaborate physical infrastructure or maintenance practices. Hence the cost is reduced.
Example − A company providing wireless communication services does not incur heavy infrastructure costs, and as a result it can charge its customers relatively low fees.
The cost of installing wires, cables and other infrastructure is eliminated in wireless communication, lowering the overall cost of the system compared to a wired communication system. Installing wired networks in buildings, digging up the earth to lay cables and running wires across streets is an extremely difficult, costly and time-consuming job.
In historical buildings, drilling holes for cables is not a good idea as it damages the integrity and heritage value of the building. Also, in older buildings with no dedicated lines for communication, wireless options like Wi-Fi or Wireless LAN are the only choice.
●​ Flexibility
Wireless communication enables people to communicate regardless of their location. It is not necessary
to be in an office or some telephone booth in order to pass and receive messages.
Miners in the outback can rely on satellite phones to call their loved ones, and thus, help improve their
general welfare by keeping them in touch with the people who mean the most to them.
●​ Convenience
Wireless communication devices like mobile phones are simple to operate and allow anyone to use them, wherever they may be. There is no need to physically connect anything in order to send or receive messages.
Example − Wireless communication services can also be seen in Internet technologies such as Wi-Fi. With no network cables hampering movement, we can now connect with almost anyone, anywhere, anytime.
●​ Mobility
As mentioned earlier, mobility is the main advantage of wireless communication system. It offers the
freedom to move around while still connected to network.
●​ Speed
Improvements can also be seen in speed: network connectivity and accessibility have become faster and more reliable.
Example − A wireless remote can operate a system faster than a wired one; a wireless control can stop a machine immediately if something goes wrong, whereas reaching the machine to operate it directly cannot be done as quickly.
●​ Accessibility
Wireless technology improves accessibility: remote areas where ground lines cannot be properly laid can still be connected to the network.
Example − In rural regions, online education is now possible. Educators no longer need to travel to far-flung areas to teach their lessons, thanks to live streaming of their educational modules.
●​ Constant connectivity
Constant connectivity also ensures that people can respond to emergencies relatively quickly.
Example − A mobile phone keeps you constantly connected as you move from place to place or while you travel, whereas a wired landline cannot.
●​ Reliability
Since no cables or wires are involved in wireless communication, there is no chance of communication failure due to cable damage, which may be caused by environmental conditions, faulty cable splices or the natural degradation of metallic conductors.
●​ Disaster Recovery
In case of accidents due to fire, floods or other disasters, the loss of communication infrastructure in
wireless communication system can be minimal.

●​ Ease of Installation
The setup and installation of wireless network equipment and infrastructure is very easy, since there is no cabling to worry about. The time required to set up a wireless system such as a Wi-Fi network is also far less than that needed to set up a fully cabled network.
Disadvantages of Wireless Communication
Even though wireless communication has a number of advantages over wired communication, there are a
few disadvantages as well. The most concerning disadvantages are Interference, Security and Health.
Interference
Wireless Communication systems use open space as the medium for transmitting signals. As a result,
there is a huge chance that radio signals from one wireless communication system or network might
interfere with other signals.
The best example is Bluetooth and Wi-Fi (WLAN). Both these technologies use the 2.4GHz frequency
for communication and when both of these devices are active at the same time, there is a chance of
interference.
Security
One of the main concerns of wireless communication is Security of the data. Since the signals are
transmitted in open space, it is possible that an intruder can intercept the signals and copy sensitive
information.
Health Concerns
Continuous exposure to any type of radiation can be hazardous. Even though the levels of RF energy that cause harm have not been accurately established, it is advisable to minimize exposure to RF radiation as far as possible.
Basic Elements of a Wireless Communication System
A typical Wireless Communication System can be divided into three elements: the Transmitter, the
Channel and the Receiver. The following image shows the block diagram of wireless communication
system.

Fig. Basic elements of a wireless communication system

The Transmission Path


A typical transmission path of a Wireless Communication System consists of Encoder, Encryption,
Modulation and Multiplexing. The signal from the source is passed through a Source Encoder, which
converts the signal in to a suitable form for applying signal processing techniques.
The redundant information from signal is removed in this process in order to maximize the utilization of
resources. This signal is then encrypted using an Encryption Standard so that the signal and the
information is secured and doesn’t allow any unauthorized access.
Channel Encoding is a technique that is applied to the signal to reduce the impairments like noise,
interference, etc. During this process, a small amount of redundancy is introduced to the signal so that it
becomes robust against noise. Then the signal is modulated using a suitable modulation technique (such as PSK, FSK or QPSK), so that it can be transmitted easily using an antenna.
The modulated signal is then multiplexed with other signals using different Multiplexing Techniques like
Time Division Multiplexing (TDM) or Frequency Division Multiplexing (FDM) to share the valuable
bandwidth.

The Channel
The channel in Wireless Communication indicates the medium of transmission of the signal i.e. open
space. A wireless channel is unpredictable and also highly variable and random in nature. A channel
maybe subject to interference, distortion, noise, scattering etc. and the result is that the received signal
may be filled with errors.
The Reception Path
The job of the Receiver is to collect the signal from the channel and reproduce it as the source signal. The reception path of a Wireless Communication System comprises Demultiplexing, Demodulation, Channel Decoding, Decryption and Source Decoding. From these components it is clear that the task of the receiver is simply the inverse of that of the transmitter.
The signal from the channel is received by the Demultiplexer and is separated from other signals. The
individual signals are demodulated using appropriate Demodulation Techniques and the original message
signal is recovered. The redundant bits from the message are removed using the Channel Decoder.
Since the message is encrypted, Decryption of the signal removes the security and turns it into simple
sequence of bits. Finally, this signal is given to the Source Decoder to get back the original transmitted
message or signal.
Types of Wireless Communication Systems
Today, people need Mobile Phones for many things like talking, internet, multimedia etc. All these
services must be made available to the user on the go i.e. while the user is mobile. With the help of these
wireless communication services, we can transfer voice, data, videos, images etc.
Wireless Communication Systems also provide different services like video conferencing, cellular
telephone, paging, TV, Radio etc. Due to the need for variety of communication services, different types
of Wireless Communication Systems are developed. Some of the important Wireless Communication
Systems available today are:
●​ Television and Radio Broadcasting
●​ Satellite Communication
●​ Radar
●​ Mobile Telephone System (Cellular Communication)
●​ Global Positioning System (GPS)
●​ Infrared Communication
●​ WLAN (Wi-Fi)
●​ Bluetooth
●​ ZigBee

●​ Paging
●​ Cordless Phones
●​ Radio Frequency Identification (RFID)
There are many other systems, each useful for different applications. Wireless Communication systems can also be classified as Simplex, Half Duplex and Full Duplex. Simplex communication is one-way communication; an example is a radio broadcast system.
Half Duplex is two-way communication, but not simultaneous; an example is a walkie-talkie (citizens band radio). Full Duplex is two-way and simultaneous communication; the best example of full duplex is the mobile phone.
The devices used for Wireless Communication may vary from one service to other and they may have
different size, shape, data throughput and cost. The area covered by a Wireless Communication system is
also an important factor. The wireless networks may be limited to a building, an office campus, a city, a
small regional area (greater than a city) or might have global coverage.
We will see a brief note about some of the important Wireless Communication Systems.
Television and Radio Broadcasting
Radio is considered to be the first broadcast wireless service. It is an example of a Simplex Communication System, where information is transmitted in only one direction and all users receive the same data.

Satellite Communication
Satellite Communication is an important type of Wireless Communication. Satellite Communication Networks provide worldwide coverage independent of population density. Satellite Communication Systems offer telecommunication (satellite phones), positioning and navigation (GPS), broadcasting, internet, etc. Other wireless services like mobile telephony, television broadcasting and other radio systems also depend on Satellite Communication Systems.
Mobile Telephone Communication System
Perhaps, the most commonly used wireless communication system is the Mobile Phone Technology. The
development of mobile cellular device changed the World like no other technology. Today’s mobile
phones are not limited to just making calls but are integrated with numerous other features like Bluetooth,
Wi-Fi, GPS, and FM Radio.
The latest generation of Mobile Communication Technology is 5G (which is indeed successor to the
widely adapted 4G). Apart from increased data transfer rates (technologists claim data rates in the order of
Gbps), 5G Networks are also aimed at Internet of Things (IoT) related applications and future
automobiles.
Global Positioning System (GPS)
GPS is solely a subcategory of satellite communication. GPS provides different wireless services like
navigation, positioning, location, speed etc. with the help of dedicated GPS receivers and satellites.
Bluetooth
Bluetooth is another important low range wireless communication system. It provides data, voice and
audio transmission with a transmission range of 10 meters. Almost all mobile phones, tablets and laptops
are equipped with Bluetooth devices. They can be connected to wireless Bluetooth receivers, audio
equipment, cameras etc.
Paging
Although it is now considered an obsolete technology, paging was a major success before the widespread use of mobile phones. Paging delivers information in the form of messages and is a simplex system, i.e. the user can only receive messages.
Wireless Local Area Network (WLAN)
Wireless Local Area Network or WLAN (Wi-Fi) is an internet related wireless service. Using WLAN,
different devices like laptops and mobile phones can connect to an access point (like a Wi-Fi Router) and
access internet.
Wi-Fi is one of the most widely used wireless networks, usually for internet access (but sometimes for data transfer within the Local Area Network). It is very difficult to imagine the modern world without Wi-Fi.
Infrared Communication
Infrared Communication is another commonly used wireless communication in our daily lives. It uses the
infrared waves of the Electromagnetic (EM) spectrum. Infrared (IR) Communication is used in remote
controls of Televisions, cars, audio equipment etc.

Multiple Access Techniques: FDMA, TDMA, CDMA, OFDMA

Multiple access protocols are required for sharing data on non-dedicated channels. They can be subdivided as follows:

1. Random Access Protocol: In this, all stations have equal priority, i.e. no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The random access protocols are further subdivided as:
(a)​ ALOHA – It was designed for wireless LAN but is also applicable for shared medium. In this,
multiple stations can transmit data at the same time and can hence lead to collision and data being
garbled.
●​ Pure Aloha:
When a station sends data, it waits for an acknowledgement. If the acknowledgement doesn't arrive within the allotted time, the station waits for a random amount of time called the back-off time (Tb) and re-sends the data. Since different stations wait for different amounts of time, the probability of further collision decreases.
Vulnerable time = 2 × frame transmission time
Throughput S = G × e^(−2G)
Maximum throughput = 0.184 at G = 0.5
● Slotted Aloha:
It is similar to pure ALOHA, except that time is divided into slots and sending of data is allowed only at the beginning of these slots. If a station misses the allowed time, it must wait for the next slot. This reduces the probability of collision.
Vulnerable time = frame transmission time
Throughput S = G × e^(−G)
Maximum throughput = 0.368 at G = 1
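The ALOHA throughput formulas above can be checked with a few lines of Python (a small illustrative sketch; G is the offered load in frames per frame time):

import math

def pure_aloha_throughput(g: float) -> float:
    """Pure ALOHA throughput S = G * e^(-2G)."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g: float) -> float:
    """Slotted ALOHA throughput S = G * e^(-G)."""
    return g * math.exp(-g)

if __name__ == "__main__":
    print(f"Pure ALOHA peak (G = 0.5): {pure_aloha_throughput(0.5):.3f}")     # ~0.184
    print(f"Slotted ALOHA peak (G = 1): {slotted_aloha_throughput(1.0):.3f}")  # ~0.368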
(b)​ CSMA – Carrier Sense Multiple Access ensures fewer collisions as the station is required to first
sense the medium (for idle or busy) before transmitting data. If it is idle then it sends data, otherwise it
waits till the channel becomes idle. However, there is still a chance of collision in CSMA due to propagation delay. For example, if station A wants to send data, it will first sense the medium. If it finds the channel idle, it will start sending data. However, by the time the first bit of data from station A is transmitted (delayed due to propagation delay), station B may also sense the medium, find it idle, and send data as well. This results in a collision between the data from stations A and B.
CSMA access modes-
●​ 1-persistent: The node senses the channel, if idle it sends the data, otherwise it continuously
keeps on checking the medium for being idle and transmits unconditionally(with 1
probability) as soon as the channel gets idle.
●​ Non-Persistent: The node senses the channel, if idle it sends the data, otherwise it checks the
medium after a random amount of time (not continuously) and transmits when found idle.
● P-persistent: The node senses the medium; if idle, it sends the data with probability p. If the data is not transmitted (with probability 1 − p), it waits for a slot time and checks the medium again; if it is still idle, it again transmits with probability p. This repeats until the frame is sent. It is used in Wi-Fi and packet radio systems (a short sketch follows this list).
● O-persistent: The transmission order of the nodes is decided beforehand. If the medium is idle, each node waits for its own time slot before sending data.
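A minimal sketch of the p-persistent behaviour is shown below. The toy sense_idle function and the slot bound are assumptions made purely for illustration, not part of any standard.

import random

def p_persistent_attempt(sense_idle, p: float, max_slots: int = 100) -> int:
    """Return the slot in which the frame is finally transmitted (p-persistent CSMA sketch).

    sense_idle: function slot_index -> True if the medium is sensed idle in that slot.
    p: probability of transmitting once the medium is sensed idle.
    """
    for slot in range(max_slots):
        if not sense_idle(slot):
            continue                  # busy: keep sensing in the next slot
        if random.random() < p:
            return slot               # transmit in this slot
        # idle but deferred with probability (1 - p): wait one slot and sense again
    return -1                         # illustrative bound so the sketch always terminates

if __name__ == "__main__":
    toy_medium = lambda slot: slot % 2 == 1   # toy channel: busy in even slots, idle in odd slots
    print("Frame transmitted in slot", p_persistent_attempt(toy_medium, p=0.3))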
(c)​ CSMA/CD – Carrier sense multiple access with collision detection. Stations can terminate
transmission of data if collision is detected.
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. Collision detection requires the sender to compare what it receives with what it transmitted: if there is just one signal (its own), the data was sent successfully, but if there are two signals (its own and the one it collided with), a collision has occurred. To distinguish between these two cases, the collision must have a significant impact on the received signal. This is not the case in wireless networks, so collisions cannot be reliably detected there and CSMA/CA is used instead.
CSMA/CA avoids collision by:
1.​ Interframe space – Station waits for medium to become idle and if found idle it does not
immediately send data (to avoid collision due to propagation delay) rather it waits for a
period of time called Interframe space or IFS. After this time it again checks the medium for
being idle. The IFS duration depends on the priority of station.
2. Contention Window – It is an amount of time divided into slots. If the sender is ready to send data, it chooses a random number of slots as its wait time; the window size doubles every time the medium is not found idle. If the medium becomes busy, the sender does not restart the entire process; instead it pauses the backoff timer and resumes it when the channel is found idle again (a small backoff sketch follows this list).
3.​ Acknowledgement – The sender re-transmits the data if acknowledgement is not received
before time-out.
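The doubling contention window can be sketched as follows; the cw_min and cw_max values are illustrative placeholders rather than values taken from a specific standard.

import random

def contention_backoff_slots(retry: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random backoff count from a contention window that doubles on each retry."""
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)   # window grows 15, 31, 63, ... up to cw_max
    return random.randint(0, cw)

if __name__ == "__main__":
    for retry in range(5):
        print(f"retry {retry}: wait {contention_backoff_slots(retry)} slots before sensing again")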
2.​Controlled Access:
In this, data is sent only by the station that has been approved by all the other stations (e.g. through reservation, polling or token passing).
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency or code among multiple stations so that they can access the channel simultaneously.
●​ Frequency Division Multiple Access (FDMA) – The available bandwidth is divided into
equal bands so that each station can be allocated its own band. Guard bands are also added so
that no two bands overlap to avoid crosstalk and noise.
● Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between multiple stations. To avoid collisions, time is divided into slots and stations are allotted these slots to transmit data. However, there is a synchronization overhead, as each station needs to know its time slot; this is resolved by adding synchronization bits to each slot. Another issue with TDMA is propagation delay, which is resolved by adding guard times between slots.
● Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously; there is neither division of bandwidth nor division of time. For example, if many people in a room are all speaking at the same time, perfect reception is still possible provided each pair of speakers converses in a different language. Similarly, data from different stations can be transmitted simultaneously using different code "languages".

A satellite typically serves several earth stations located at different places on the earth, each of which sends its own carrier signal towards the satellite.
In this situation, multiple access techniques enable the satellite to receive signals from, and transmit signals to, different stations at the same time without any interference between them. Following are the three classical multiple access techniques.
●​ FDMA (Frequency Division Multiple Access)
●​ TDMA (Time Division Multiple Access)
●​ CDMA (Code Division Multiple Access)

FDMA
In this type of multiple access, each signal is assigned a different frequency band (range), so that no two signals occupy the same frequencies. Hence, there won't be any interference between them, even if we send those signals over one channel.
A perfect example of this type of access is radio broadcasting: each station is given a different frequency band in which to operate.
Let's take three stations A, B and C and access them through the FDMA technique by assigning them different frequency bands.
As shown in the figure, satellite station A has been kept under the frequency range of 0 to 20 Hz, while stations B and C have been assigned the ranges 30-60 Hz and 70-90 Hz respectively. There is no interference between them.
The main disadvantage of this type of system is that it handles bursty traffic poorly. FDMA is therefore not recommended for channels whose load is dynamic and uneven, because the fixed band allocation makes the use of capacity inflexible and inefficient.
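The band assignment idea can be sketched in a few lines of Python; the toy numbers below echo the A/B/C illustration above and are not a real frequency plan.

from typing import List, Tuple

def allocate_fdma_bands(total_bw_hz: float, n_stations: int, guard_hz: float) -> List[Tuple[float, float]]:
    """Split a total bandwidth into equal, non-overlapping bands separated by guard bands."""
    usable = total_bw_hz - guard_hz * (n_stations - 1)
    band = usable / n_stations
    bands, start = [], 0.0
    for _ in range(n_stations):
        bands.append((start, start + band))
        start += band + guard_hz          # skip over the guard band before the next station
    return bands

if __name__ == "__main__":
    for i, (lo, hi) in enumerate(allocate_fdma_bands(90.0, 3, 10.0)):
        print(f"Station {chr(ord('A') + i)}: {lo:.1f}-{hi:.1f} Hz")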
Advantages of FDMA
As FDMA systems use low bit rates (large symbol times) compared to the average delay spread, they offer the following advantages −
● The low per-channel bit rate and the use of efficient numerical codes increase the capacity.
● It reduces the cost and lowers inter-symbol interference (ISI).
● Equalization is not necessary.
● An FDMA system can be easily implemented. A system can be configured so that improvements such as better speech encoders and bit-rate reduction are easily incorporated.
● Since the transmission is continuous, fewer bits are required for synchronization and framing.
Disadvantages of FDMA
Although FDMA offers several advantages, it has a few drawbacks as well, which are listed below −
● It does not differ significantly from analog systems; improving the capacity depends on reducing interference, i.e. on the signal-to-interference or signal-to-noise ratio (SNR).
● The maximum data rate per channel is fixed and small.
● Guard bands lead to a waste of capacity.
● The hardware requires narrowband filters, which cannot easily be realized in VLSI and therefore increase the cost.

TDMA
As the name suggests, TDMA is a time-based access technique. Here, each channel is given a certain time frame, and within that time frame the channel can access the entire spectrum bandwidth.
Each station gets a slot of fixed length; slots that are unused remain idle.
Suppose we want to send five packets of data to a particular channel using the TDMA technique. We assign them certain time slots, within each of which the packet can access the entire bandwidth.
In the figure, packets 1, 3 and 4 are active and transmit data, whereas packets 2 and 5 are idle because they have nothing to send. This format repeats every time bandwidth is assigned to that particular channel.
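A tiny Python sketch of this frame structure (the slot numbers and payload names are illustrative only):

def build_tdma_frame(n_slots, active_payloads):
    """Return a frame as a list of slots; unassigned slots stay idle (None). Slots are 1-based."""
    return [active_payloads.get(slot) for slot in range(1, n_slots + 1)]

if __name__ == "__main__":
    # Matches the five-packet example above: slots 1, 3 and 4 carry data, 2 and 5 stay idle.
    frame = build_tdma_frame(5, {1: "packet 1", 3: "packet 3", 4: "packet 4"})
    for slot, payload in enumerate(frame, start=1):
        print(f"slot {slot}: {payload if payload else 'idle'}")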
Although certain time slots are assigned to a particular channel, the assignment can be changed depending on the load. That is, a channel carrying heavier traffic can be assigned a bigger time slot than a channel carrying lighter traffic. This is the biggest advantage of TDMA over FDMA. Another advantage of TDMA is that the power consumption is very low.
Note − In some applications, a combination of TDMA and FDMA is used. In this case, each channel operates in a particular frequency band for a particular time frame, which makes the frequency selection more robust and provides greater capacity.
Time Division Multiple Access (TDMA) is a digital wireless telephony transmission technique. TDMA allocates each user a different time slot on a given frequency; for example, it divides each cellular channel into three time slots in order to increase the amount of data that can be carried.
TDMA technology was more popular in Europe, Japan and other Asian countries, whereas CDMA was widely used in North and South America; nowadays both technologies are used throughout the world.
Advantages of TDMA:
●​ TDMA can easily adapt to transmission of data as well as voice communication.
●​ TDMA has an ability to carry 64 kbps to 120 Mbps of data rates.
● TDMA allows the operator to offer services like fax, voice-band data and SMS, as well as bandwidth-intensive applications such as multimedia and video conferencing.
● Since TDMA technology separates users in time, it ensures that there will be no interference from simultaneous transmissions.
● TDMA provides users with extended battery life, since the handset transmits only during a portion of each conversation.
●​ TDMA is the most cost effective technology to convert an analog system to digital.
Disadvantages of TDMA
● A disadvantage of TDMA technology is that each user has a predefined time slot. When moving from one cell site to another, if all the time slots in the new cell are full, the user might be disconnected.
● Another problem with TDMA is that it is subject to multipath distortion. To overcome this, a time limit can be imposed on the system: once the time limit has expired, the (delayed) signal is ignored.
CDMA
In the CDMA technique, a unique code is assigned to each channel to distinguish it from the others. A familiar analogy from our cellular system is that no two subscribers share the same mobile number, even though they are customers of the same operator using the same bandwidth.
In CDMA, encoding and decoding are based on the inner product of the signal and a chipping sequence. Mathematically, the encoding can be written as:
Encoded signal = Original data × Chipping sequence
The basic advantage of this type of multiple access is that it allows all users to coexist and use the entire bandwidth at the same time. Since each user has a different code, there is no interference.
In this technique, unlike FDMA and TDMA, a number of stations can share a number of channels, and each station can use the entire spectrum at all times.

CDMA is characterized by the use of codes to increase the bandwidth of a channel. These codes, known as spreading codes, spread the data using a series of chips that are independent of the data itself. CDMA splits a time slot within the same frequency into codes and, unlike other multiple access modes, these coded transmissions are sent simultaneously, as shown in the figure. The only problem associated with this form of communication is that the noise in the medium increases as the number of users increases.
Introducing codes into the channel multiplexing technique also improves security, because the data transmitted through a channel is spread using the spreading codes: to recover the transmitted data, the receiving end must know the code used in the transmission.

Suppose there are four stations M, N, O and P individually transmitting 1, 0, 1, 1, each having a unique code sequence (C1, C2, C3, C4), where the codes are orthogonal to one another.
To represent data bits and code bits we use polar signaling, thus:
● Binary 0 is represented as -1, and
● Binary 1 is represented as +1 (or 1)
Thus, the data vector (M, N, O, P) will be (1, -1, 1, 1).

Parameters for choosing the codes:

● The sum of the bits obtained by multiplying the codes of any two different stations must be 0.
Note that when finding the product of two code sequences, the 1st bit of one sequence is multiplied with the 1st bit of the other, the 2nd bit with the 2nd bit, and so on.
Suppose here, C1*C4 = (1, 1, -1, -1).(1, -1, 1, -1) = (1,-1, -1, 1)
On addition of all 4 bits of resultant, we will get 0. Thus, codes are of orthogonal nature.
●​ The sum of resultant obtained when a code sequence is multiplied with itself must indicate
the total number of stations transmitting.
Suppose, C2*C2 = (1, -1, -1, 1). (1, -1, -1, 1) = (1, 1, 1, 1)
So, 1+1+1+1 will give 4 as output. Hence, verifying that there are 4 stations transmitting at a time.
Transmission: As discussed previously, to perform DS-CDMA each data bit is first multiplied by its station's code. With the codes used in this example (C1 = (1, 1, -1, -1), C2 = (1, -1, -1, 1), C3 = (1, 1, 1, 1), C4 = (1, -1, 1, -1)), the products of data bit and code are:
M: +1 × C1 = (1, 1, -1, -1)
N: -1 × C2 = (-1, 1, 1, -1)
O: +1 × C3 = (1, 1, 1, 1)
P: +1 × C4 = (1, -1, 1, -1)
Over the channel, these chip sequences are transmitted together: the complete sequence is produced by adding the chips position by position.
The sequence transmitted over the channel is therefore: 2, 2, 2, -2.
Reception: The receiver will get the above sequence. Now, to retrieve the actual information from this
received (coded form) data, each receiving station must have the code sequence of their respective
transmitting station.
Here each receiver will get the original data sequence by multiplying the received bit sequence with its
respective code stream.

R1 = (2, 2, -2, 2)
R2 = (2, -2, -2, -2)
R3 = (2, 2, 2, -2)
R4 = (2, -2, 2, 2)
Hence, by summing every bit of the resulting sequence and dividing by the total number of transmitting stations, one can recover the originally transmitted data bit. Calculating this for each receiving station, we get:
R1 = [2 + 2 + (-2) + 2]/Number of stations = 4/4 = 1
R2 = [2 + (-2) + (-2) + (-2)]/Number of stations = -4/4 = -1
R3 = [2 + 2 + 2 + (-2)]/Number of stations = 4/4 = 1
R4 = [2 + (-2) + 2 + 2]/Number of stations = 4/4 = 1
According to polar signalling 1 denotes binary 1 and -1 denotes binary 0. Therefore, the data bits received
at each receiving station will be 1, 0, 1, 1.
It can be clearly checked that the received bits are exactly the same as the one which was transmitted
from the transmitting stations. Hence, in this way CDMA can be implemented.
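The whole worked example can be reproduced with a short Python sketch; the orthogonal codes below are the ones used in the example above.

# DS-CDMA spreading and despreading for the four-station example above.
CODES = [
    (1, 1, -1, -1),   # C1 (station M)
    (1, -1, -1, 1),   # C2 (station N)
    (1, 1, 1, 1),     # C3 (station O)
    (1, -1, 1, -1),   # C4 (station P)
]

def cdma_transmit(polar_bits):
    """Multiply each station's polar bit (+1/-1) by its code and add the chips position-wise."""
    spread = [[bit * chip for chip in code] for bit, code in zip(polar_bits, CODES)]
    return [sum(column) for column in zip(*spread)]

def cdma_receive(channel, code):
    """Despread: multiply by the station's code, sum, and divide by the number of stations."""
    value = sum(c * k for c, k in zip(channel, code)) / len(CODES)
    return 1 if value > 0 else 0

if __name__ == "__main__":
    channel = cdma_transmit([1, -1, 1, 1])          # M, N, O, P send 1, 0, 1, 1 in polar form
    print("Channel sequence:", channel)             # [2, 2, 2, -2]
    print("Recovered bits:", [cdma_receive(channel, c) for c in CODES])   # [1, 0, 1, 1]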

Code Division Multiple Access (CDMA) is a digital wireless technology that uses spread-spectrum
techniques. CDMA does not assign a specific frequency to each user. Instead, every channel uses the full
available spectrum. Individual conversations are encoded with a pseudo-random digital sequence.
CDMA consistently provides better capacity for voice and data communications than other commercial
mobile technologies, allowing more subscribers to connect at any given time, and it is the common
platform on which 3G technologies are built.
Advantages of CDMA
●​ One of the main advantages of CDMA is that dropouts occur only when the phone is at least
twice as far from the base station. Thus, it is used in the rural areas where GSM cannot cover.
●​ Another advantage is its capacity; it has a very high spectral capacity that it can accommodate
more users per MHz of bandwidth.
Disadvantages of CDMA
● Channel pollution, where signals from too many cell sites are present in the subscriber's phone but none of them is dominant; when this situation arises, audio quality degrades.
● Compared to GSM, it lacks international roaming capabilities.
● Upgrading or changing to another handset is not easy with this technology, because the network service information is stored in the phone itself, unlike GSM, which uses a SIM card for this.
● Limited variety of handsets, because most major mobile manufacturers target GSM technology.

OFDM: Orthogonal Frequency Division Multiplexing


OFDM, Orthogonal Frequency Division Multiplexing is a form of signal waveform or modulation that
provides some significant advantages for data links.
Accordingly, OFDM, Orthogonal Frequency Division Multiplexing is used for many of the latest wide
bandwidth and high data rate wireless systems including Wi-Fi, cellular telecommunications and many
more.
The fact that OFDM uses a large number of carriers, each carrying low bit rate data, means that it is very resilient to selective fading, interference, and multipath effects, as well as providing a high degree of spectral efficiency.
Early systems using OFDM found the processing required for the signal format was relatively high, but
with advances in technology, OFDM presents few problems in terms of the processing required.
OFDM is a form of multicarrier modulation. An OFDM signal consists of a number of closely spaced
modulated carriers. When modulation of any form - voice, data, etc. is applied to a carrier, then sidebands
spread out either side. It is necessary for a receiver to be able to receive the whole signal to be able to
successfully demodulate the data. As a result when signals are transmitted close to one another they must
be spaced so that the receiver can separate them using a filter and there must be a guard band between
them. This is not the case with OFDM. Although the sidebands from each carrier overlap, they can still be received without the interference that might be expected because the carriers are orthogonal to one another. This is achieved by making the carrier spacing equal to the reciprocal of the symbol period.
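This orthogonality can be checked numerically: subcarriers spaced at multiples of 1/T integrate to zero against each other over one symbol period. A small illustrative Python check (the sample count is arbitrary):

import math

def subcarrier_inner_product(k1: int, k2: int, symbol_period: float = 1.0, samples: int = 1000) -> float:
    """Numerically integrate the product of two subcarriers at k1/T and k2/T over one symbol period T."""
    dt = symbol_period / samples
    f1, f2 = k1 / symbol_period, k2 / symbol_period
    return sum(math.cos(2 * math.pi * f1 * n * dt) * math.cos(2 * math.pi * f2 * n * dt) * dt
               for n in range(samples))

if __name__ == "__main__":
    print("Same subcarrier (k = 3):       ", round(subcarrier_inner_product(3, 3), 3))  # ~0.5, non-zero
    print("Adjacent subcarriers (3 and 4):", round(subcarrier_inner_product(3, 4), 3))  # ~0, orthogonal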

How orthogonal frequency-division multiplexing works


In the traditional stream, each bit might be represented by a 1 nanosecond segment of the signal, with 0.25
ns spacing between bits, for example. Using OFDM to split the signal across four component streams lets
each bit be represented by 4 ns of the signal with 1 ns spacing between. The overall data rate is the same,
4 bits every 5 ns, but the signal integrity is higher.
As an illustration, imagine you were sending a letter to your grandmother. You could write your letter on a
single piece of paper and mail it to her in an envelope. This would be like using a single frequency (one
piece of paper) to send your entire message. But, because your grandmother can't see well, you instead
write the same message in larger letters (a slower data rate) on several pieces of paper (representing data
streams on different channels) but put them all in the same envelope (using same overall frequency
spectrum).
OFDM builds on simpler frequency-division multiplexing (FDM). In FDM, the total data stream is
divided into several subchannels, but the frequencies of the subchannels are spaced farther apart so they
do not overlap or interfere. With OFDM, the subchannel frequencies are close together and overlapping
but are still orthogonal, or separate, in that they are carefully chosen and modulated so that the
interference between the subchannels is canceled out.
Extending orthogonal frequency-division multiplexing
OFDM has been further extended into what's called orthogonal frequency-division multiple access
(OFDMA). OFDMA enables devices sharing the same overall channel to have the component
subchannels dedicated to specific devices.
To extend the above illustration, OFDMA would be like including a single-page letter to your grandfather
in the same envelope as the letter to your grandmother. Since all devices on the same channel share the
same collision domain, this reduces the need for the devices to wait or take turns to receive data. This will
specifically help in situations where a device needs a low, but consistent, stream of data or in situations
where many devices are connected to a single base station. This is a key feature in Wi-Fi 6, 4G and 5G
new radio (5G NR) cellular data to support high data rates and many devices, especially for internet of
things (IoT) devices.
OFDM applications
Orthogonal frequency-division multiplexing is used in many technologies, including the following:
●​ Digital radio, Digital Radio Mondiale, and digital audio broadcasting and satellite radio.
●​ Digital television standards, Digital Video Broadcasting-Terrestrial/Handheld (DVB-T/H),
DVB-Cable 2 (DVB-C2). OFDM is not used in the current U.S. digital television Advanced
Television Systems Committee standard, but it is used in the future 4K/8K-capable ATSC 3.0
standard.
●​ Wired data transmission, Asymmetric Digital Subscriber Line (ADSL), Institute of Electrical
and Electronics Engineers (IEEE) 1901 powerline networking, cable internet providers. Fiber
optic transmission may use either OFDM signals or several distinct frequencies as FDM.
●​ Wireless LAN (WLAN) data transmission. All Wi-Fi systems use OFDM, including IEEE
802.11a/b/g/n/ac/ax. The addition of OFDMA to the Wi-Fi 6/802.11ax standard enables more
devices to use the same base station simultaneously. OFDM is also used in metropolitan area
network (MAN) IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMAX) installations.
● Cellular data. Long-Term Evolution (LTE) and 4G cellphone networks use OFDM. It is also an integral part of 5G NR cellular deployments.
What problems does OFDMA solve?
● Previous Wi-Fi standards were intended for web browsing and email in low-density situations. Today's users aren't just greater in number; they're performing more data-intensive functions in more settings than ever before.
● Network congestion caused by simultaneous requests causes slowdowns, since clients must form a queue to complete transmissions. OFDMA solves the congestion problem by accommodating multiple users at the same time and allocating bandwidth more efficiently.

Spread-Spectrum techniques are methods by which a signal (e.g. an electrical, electromagnetic, or


acoustic signal) generated with a particular bandwidth is deliberately spread in the frequency domain,
resulting in a signal with a wider bandwidth.
These techniques are used for a variety of reasons, including the establishment of secure
communications, increasing resistance to natural interference, noise and jamming, to prevent detection,
and to limit power flux density (e.g. in satellite downlinks).
Spread spectrum is designed to be used in wireless applications (LANs and WANs). In wireless
applications, all stations use air (or a vacuum) as the medium for communication. Stations must be able to
share this medium without interception by an eavesdropper and without being subject to jamming from a
malicious intruder.
To achieve these goals, spread spectrum techniques add redundancy: they spread the original spectrum needed for each station. If the required bandwidth for each station is B, spread spectrum
expands it to Bss such that Bss >> B. The expanded bandwidth allows the source to wrap its message in a
protective envelope for a more secure transmission.
The following figure shows the idea of spread spectrum. Spread spectrum achieves its goals through two
principles:

1. The bandwidth allocated to each station needs to be, by far, larger than what is needed. This allows redundancy.
2. The expanding of the original bandwidth B to the bandwidth Bss must be done by a process that is independent of the original signal. In other words, the spreading process occurs after the signal is created by the source.

After the signal is created by the source, the spreading process uses a spreading code and spreads the
bandwidth. The figure shows the original bandwidth B and the spreaded bandwidth Bss. The spreading
code is a series of numbers that look random, but are actually a pattern.
There are two techniques to spread the bandwidth:

1.​Frequency Hopping Spread Spectrum (FHSS)


2.​Direct Sequence Spread Spectrum (DSSS).

1.​Frequency Hopping Spread Spectrum (FHSS):

The Frequency Hopping Spread Spectrum (FHSS) technique uses M different carrier frequencies that are
modulated by the source signal. At one moment, the signal modulates one carrier frequency; at the next
moment, the signal modulates another carrier frequency. Although the modulation is done using one
carrier frequency at a time, M frequencies are used in the long run. The bandwidth occupied by a source
after spreading is BFHSS >> B.
The following figure shows the general layout for FHSS. A pseudorandom code generator, called
pseudorandom noise (PN), creates a k-bit pattern for every hopping period Th.
The frequency table uses the pattern to find the frequency to be used for this hopping period and passes it
to the frequency synthesizer. The frequency synthesizer creates a carrier signal of that frequency, and the
source signal modulates the carrier signal.

For Example M is no. of patterns= 8 and k= no. of bits is 3. The pseudorandom code generator will create
eight different 3-bit patterns. These are mapped to eight different frequencies in the frequency table as
shown in the following figure.

The pattern for this station is 101, 111, 001, 000, 010, 011, 100. Note that the pattern is pseudorandom; it is repeated after eight hoppings. This means that at hopping period 1 the pattern is 101, so the frequency selected is 700 kHz, and the source signal modulates this carrier frequency.

The second k-bit pattern selected is 111, which selects the 900-kHz carrier; the eighth pattern is 100, the
frequency is 600 kHz. After eight hoppings, the pattern repeats, starting from 101 again.
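The hop selection can be sketched as a simple table lookup. The three mappings stated above (101 → 700 kHz, 111 → 900 kHz, 100 → 600 kHz) are kept; the remaining table entries are assumed for illustration, following the same pattern (value n → (n + 2) × 100 kHz).

# Illustrative FHSS hop selection driven by a pseudorandom k-bit pattern (k = 3, M = 8).
FREQ_TABLE_KHZ = {f"{n:03b}": (n + 2) * 100 for n in range(8)}   # e.g. '101' -> 700, '111' -> 900

def fhss_hop_frequencies(pn_patterns):
    """Map each k-bit pseudorandom pattern to the carrier used in that hopping period."""
    return [FREQ_TABLE_KHZ[pattern] for pattern in pn_patterns]

if __name__ == "__main__":
    hops = fhss_hop_frequencies(["101", "111", "001", "000", "010", "011", "100"])
    print("Carrier per hopping period (kHz):", hops)   # starts 700, 900, ... then the cycle repeats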

If there are many k-bit patterns and the hopping period is short, a sender and receiver can have privacy. If
an intruder tries to intercept the transmitted signal, she can only access a small piece of data because she
does not know the spreading sequence to quickly adapt herself to the next hop. The scheme has also an
anti-jamming effect. A malicious sender may be able to send noise to jam the signal for one hopping
period (randomly), but not for the whole period.
Bandwidth Sharing
If the number of hopping frequencies is M, we can multiplex M channels into one by using the same Bss
bandwidth. This is possible because a station uses just one frequency in each hopping period; M - 1 other
frequencies can be used by other M - 1 stations. In other words, M different stations can use the same Bss
if an appropriate modulation technique such as multiple FSK (MFSK) is used.

2.​Direct Sequence Spread Spectrum


The direct sequence spread spectrum (DSSS) technique also expands the bandwidth of the original signal,
but the process is different. In DSSS, we replace each data bit with n bits using a spreading code. In other
words, each bit is assigned a code of n bits, called chips, where the chip rate is n times that of the data bit.
The following figure shows the concept of DSSS.

As an example, let us consider the sequence used in a wireless LAN, the famous Barker sequence where n
is 11. We assume that the original signal and the chips in the chip generator use polar NRZ encoding. The
following figure shows the chips and the result of multiplying the original data by the chips to get the
spread signal.
In the figure, the spreading code is 11 chips having the pattern 10110111000 (in this case). If the original
signal rate is N, the rate of the spread signal is 11N. This means that the required bandwidth for the spread
signal is 11 times larger than the bandwidth of the original signal. The spread signal can provide privacy if
the intruder does not know the code. It can also provide immunity against interference if each station uses
a different code.
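A minimal spreading/despreading sketch with the 11-chip Barker sequence mentioned above, using polar NRZ (+1/-1) for both data bits and chips:

BARKER_11 = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]   # the pattern 10110111000 in polar form

def dsss_spread(polar_bits):
    """Replace each polar data bit by bit * chip for all 11 chips (chip rate = 11 x bit rate)."""
    return [bit * chip for bit in polar_bits for chip in BARKER_11]

def dsss_despread(chips):
    """Correlate each 11-chip block with the Barker code to recover the polar data bits."""
    n = len(BARKER_11)
    bits = []
    for i in range(0, len(chips), n):
        correlation = sum(c * b for c, b in zip(chips[i:i + n], BARKER_11))
        bits.append(1 if correlation > 0 else -1)
    return bits

if __name__ == "__main__":
    data = [1, -1, 1]                         # polar form of the bits 1, 0, 1
    spread = dsss_spread(data)
    print("Spread length:", len(spread))      # 33 chips for 3 data bits
    print("Recovered bits:", dsss_despread(spread))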
Evolution of wireless generations –
1G to 5G
Mobile wireless communication system has gone through several evolution stages in the past few decades
after the introduction of the first generation mobile network in early 1980s. Due to huge demand for more
connections worldwide, mobile communication standards advanced rapidly to support more users. Let’s
take a look on the evolution stages of wireless technologies for mobile communication.
History of wireless technology
Marconi, an Italian inventor, transmitted Morse code signals wirelessly using radio waves over a distance of 3.2 km in 1895. It was the first wireless transmission in the history of science. Since then, engineers and scientists have worked on efficient ways to communicate using RF waves.
The telephone became popular towards the end of the 19th century. Because of its wired connection and restricted mobility, engineers started developing a device that would not require a wired connection and would transmit voice using radio waves.

Every successive generation of wireless standards – abbreviated to “G” – has introduced dizzying advances in data-carrying capacity and decreases in latency, and 5G will be no exception. Although formal 5G standards took time to be finalized, 5G is expected to be at least three times faster than current 4G standards.
To truly understand how we got here, it’s useful to chart the unstoppable rise of wireless standards from the
first generation (1G) to where we are today, on the cusp of a global 5G rollout.
1G: Where it all began

The first generation of mobile networks – or 1G, as they were retroactively dubbed when the next generation was introduced – was launched by Nippon Telegraph and Telephone (NTT) in Tokyo in 1979. By 1984, NTT had rolled out 1G to cover the whole of Japan.
In 1983, the US approved the first 1G operations and Motorola's DynaTAC became one of the first 'mobile' phones to see widespread use stateside. Other countries such as Canada and the UK rolled out their own 1G networks a few years later.
However, 1G technology suffered from a number of drawbacks. Coverage was poor and sound quality was
low. There was no roaming support between various operators and, as different systems operated on different
frequency ranges, there was no compatibility between systems. Worst of all, calls weren’t encrypted, so anyone with a radio scanner could drop in on a call.
Despite these shortcomings and a hefty $3,995 price tag ($9,660 in today’s money), the DynaTAC still
managed to rack up an astonishing 20 million global subscribers by 1990. There was no turning back; the
success of 1G paved the way for the second generation, appropriately called 2G.
2G: The Cultural Revolution

The second generation of mobile networks, or 2G, was launched under the GSM standard in Finland in 1991. For the first time, calls could be encrypted, and digital voice calls were significantly clearer, with less static and background crackling.
But 2G was about much more than telecommunications; it
helped lay the groundwork for nothing short of a cultural
revolution. For the first time, people could send text messages
(SMS), picture messages, and multimedia messages (MMS)
on their phones. The analog past of 1G gave way to the digital
future presented by 2G. This led to mass-adoption by
consumers and businesses alike on a scale never before seen.
Although 2G’s transfer speeds were initially only around 9.6
kbit/s, operators rushed to invest in new infrastructure such as
mobile cell towers. By the end of the era, speeds of 40 kbit/s were achievable and EDGE connections offered
speeds of up to 500 kbit/s. Despite relatively sluggish speeds, 2G revolutionized the business landscape and
changed the world forever.
3G: The ‘Packet-Switching’ Revolution

3G was launched by NTT DoCoMo in 2001 and aimed to standardize the network protocol used by vendors. This
meant that users could access data from any location in the
world as the ‘data packets’ that drive web connectivity were
standardized. This made international roaming services a
real possibility for the first time.
3G’s increased data transfer capabilities (4 times faster than
2G) also led to the rise of new services such as video
conferencing, video streaming and voice over IP (such as
Skype). In 2002, the Blackberry was launched, and many of
its powerful features were made possible by 3G
connectivity.
The twilight era of 3G saw the launch of the iPhone in 2007, meaning that its network capability was about to
be stretched like never before.
4G: The Streaming Era

4G was first deployed in Stockholm, Sweden and Oslo, Norway in 2009 as the Long Term
Evolution (LTE) 4G standard. It was
subsequently introduced throughout the world
and made high-quality video streaming a
reality for millions of consumers. 4G offers
fast mobile web access (up to 1 gigabit per
second for stationary users) which facilitates
gaming services, HD videos and HQ video
conferencing.
The catch was that while transitioning from 2G
to 3G was as simple as switching SIM cards,
mobile devices needed to be specifically
designed to support 4G. This helped device
manufacturers scale their profits dramatically
by introducing new 4G-ready handsets and
was one factor behind Apple’s rise to become
the world’s first trillion dollar company.
While 4G is the current standard around the globe, some regions are plagued by network patchiness and have
low 4G LTE penetration. According to Ogury, a mobile data platform, UK residents can only access 4G
networks 53 percent of the time, for example.
5G: The Internet of Things Era

With 4G coverage so low in some areas, why has the focus shifted to 5G already?
5G has actually been years in the making.
During an interview with TechRepublic, Kevin
Ashton described how he coined the term "the
Internet of Things" – or IoT for short – during a
PowerPoint presentation he gave in the 1990s
to convince Procter & Gamble to start using
RFID tag technology.
The phrase caught on and IoT was soon touted
as the next big digital revolution that would see
billions of connected devices seamlessly share
data across the globe. According to Ashton, a
mobile phone isn’t a phone, it’s the IoT in your
pocket; a number of network-connected
sensors that help you accomplish everything
from navigation to photography to
communication and more. The IoT will see
data move out of
server centers and into what are known as ‘edge devices’ such as Wi-Fi-enabled appliances like fridges,
washing machines, and cars.
By the early 2000s, developers knew that 3G and even 4G networks wouldn’t be able to support such a
network. As 4G’s latency of between 40ms and 60ms is too slow for real-time responses, a number of
researchers started developing the next generation of mobile networks.
In 2008, NASA helped launch the Machine-to-Machine Intelligence (M2Mi) Corp to develop IoT and M2M
technology, as well as the 5G technology needed to support it. In the same year, South Korea developed a 5G
R&D program, while New York University founded the 5G-focused NYU WIRELESS in 2012.
The superior connectivity offered by 5G promised to transform everything from banking to healthcare. 5G
offers the possibility of innovations such as remote surgeries, telemedicine and even remote vital sign
monitoring that could save lives.
Three South Korean carriers – KT, LG Uplus and SK Telecom – rolled out live commercial 5G services in December 2018 and promised a simultaneous March 2019 launch of 5G across the country.

Invention of first mobile phone – The evolution begins


Martin Cooper, an engineer at Motorola who worked during the 1970s on a handheld device capable of two-way wireless communication, invented the first-generation mobile phone. Mobile telephony had initially been developed for use in cars; the first handheld prototype was demonstrated in 1973.
This invention is considered a turning point in wireless communication and led to the evolution of many technologies and standards in the years that followed.
1G – First generation mobile communication system
The first generation of mobile networks was deployed in Japan by Nippon Telegraph and Telephone (NTT) in Tokyo in 1979. In the early 1980s it gained popularity in the US, Finland, the UK and the rest of Europe. This system used analogue signals and had many disadvantages due to the technology limitations of the time.
Most popular 1G system during 1980s
●​ Advanced Mobile Phone System (AMPS)
●​ Nordic Mobile Phone System (NMTS)
●​ Total Access Communication System (TACS)
●​ European Total Access Communication System (ETACS)
Key features (technology) of 1G system
●​ Frequency: 800 MHz and 900 MHz
●​ Bandwidth: 10 MHz (666 duplex channels with bandwidth of 30 kHz)
●​ Technology: Analogue switching
●​ Modulation: Frequency Modulation (FM)
●​ Mode of service: voice only
●​ Access technique: Frequency Division Multiple Access (FDMA)
Disadvantages of 1G system
●​ Poor voice quality due to interference
●​ Poor battery life
●​ Large sized mobile phones (not convenient to carry)
●​ Less security (calls could be decoded using an FM demodulator)
●​ Limited number of users and cell coverage
●​ Roaming was not possible between similar systems
2G – Second generation communication system GSM
The second generation of mobile communication systems introduced a new digital technology for wireless transmission, known as the Global System for Mobile communication (GSM). GSM became the base standard for further development of wireless standards. It was capable of supporting data rates from 14.4 kbps up to 64 kbps (maximum), which is sufficient for SMS and email services.
The Code Division Multiple Access (CDMA) system developed by Qualcomm was also introduced and implemented in the mid-1990s. CDMA offered advantages over GSM in terms of spectral efficiency, number of users and data rate.
Key features of 2G system
●​ Digital system (switching)
●​ SMS services are possible
●​ Roaming is possible
●​ Enhanced security
●​ Encrypted voice transmission
●​ First internet access at lower data rates
Disadvantages of 2G system
●​ Low data rate
●​ Limited mobility
●​ Less features on mobile devices
●​ Limited number of users and hardware capability
2.5G and 2.75G system
In order to support higher data rates, the General Packet Radio Service (GPRS) was introduced and successfully deployed. GPRS was capable of data rates up to 171 kbps (maximum).
EDGE – Enhanced Data rates for GSM Evolution – was also developed to improve data rates on GSM networks. EDGE was capable of supporting up to 473.6 kbps (maximum).
Another popular technology, CDMA2000, was also introduced to support higher data rates on CDMA networks. This technology can provide up to 384 kbps (maximum).
3G – Third generation communication system
Third generation mobile communication started with the introduction of UMTS – the Universal Mobile Telecommunications System. UMTS offered a data rate of 384 kbps and supported video calling on mobile devices for the first time.
After the introduction of the 3G mobile communication system, smartphones became popular across the globe. Specific applications were developed for smartphones to handle multimedia chat, email, video calling, games, social media and healthcare.
Key features of 3G system
●​ Higher data rate
●​ Video calling
●​ Enhanced security, more number of users and coverage
●​ Mobile app support
●​ Multimedia message support
●​ Location tracking and maps
●​ Better web browsing
●​ TV streaming
●​ High quality 3D games
3.5G to 3.75G Systems
In order to enhance the data rate of existing 3G networks, two further technology improvements were introduced: HSDPA (High Speed Downlink Packet Access) and HSUPA (High Speed Uplink Packet Access) were developed and deployed on 3G networks. A 3.5G network can support data rates of up to 2 Mbps.
3.75G is an improved version of the 3G network using HSPA+ (High Speed Packet Access plus). This system later evolved into the more powerful 3.9G system known as LTE (Long Term Evolution).
Disadvantages of 3G systems
●​ Expensive spectrum licenses
●​ Costly infrastructure, equipment and implementation
●​ Higher bandwidth requirements to support higher data rate
●​ Costly mobile devices
●​ Compatibility with older generation 2G system and frequency bands
4G – Fourth generation communication system
4G systems are enhanced versions of 3G networks that offer higher data rates and can handle more advanced multimedia services. LTE and LTE-Advanced, standardized by 3GPP, are the wireless technologies used in 4th generation systems. Furthermore, 4G maintains compatibility with previous generations, which makes deployment and upgrades of LTE and LTE-Advanced networks easier.
Simultaneous transmission of voice and data is possible with an LTE system, which significantly improves data rates. All services, including voice, can be carried over IP packets. Complex modulation schemes and carrier aggregation are used to multiply uplink/downlink capacity.
Wireless transmission technologies such as WiMAX (standardized by IEEE) were also introduced in the 4G era to enhance data rates and network performance.
Key features of 4G system
●​ Much higher data rate up to 1Gbps
●​ Enhanced security and mobility
●​ Reduced latency for mission critical applications
●​ High definition video streaming and gaming
●​ Voice over LTE (VoLTE), which uses IP packets for voice
Disadvantages of 4G system
●​ Expensive hardware and infrastructure
●​ Costly spectrum (in most countries, frequency bands are very expensive)
●​ High-end mobile devices compatible with 4G technology are required, which is costly
●​ Wide deployment and upgrades are time consuming
5G – Fifth generation communication system
5G networks use advanced technologies to deliver ultra-fast internet and multimedia experiences to customers. Existing LTE-Advanced networks will be transformed into supercharged 5G networks over time.
In early deployments, 5G networks function in either non-standalone mode or standalone mode. In non-standalone mode, LTE spectrum and 5G NR spectrum are used together, and control signaling is anchored to the LTE core network. In standalone mode, a dedicated 5G core network and higher-bandwidth 5G NR spectrum are used. Sub-6 GHz spectrum in the FR1 range is used in the initial deployments of 5G networks.
In order to achieve higher data rates, 5G also uses millimeter waves and unlicensed spectrum for data transmission. Complex modulation techniques have been developed to support massive data rates for the Internet of Things.
Cloud based network architecture will extend the functionalities and analytical capabilities for industries,
autonomous driving, healthcare and security applications.
Key features of 5G technology
●​ Ultra fast mobile internet up to 10Gbps
●​ Low latency in milliseconds (significant for mission critical applications)
●​ Overall cost reduction for data
●​ Higher security and a more reliable network
●​ Uses technologies such as small cells and beamforming to improve efficiency
●​ Forward-compatible network design that allows further enhancements in future
●​ Cloud based infrastructure offers power efficiency, easy maintenance and upgrade of hardware
Comparison of 1G to 5G technology
5G: The Future Of Wireless Networks?

5G: Future –
The race for 5G deployment is led by companies like Qualcomm, Huawei, and Intel. Worldwide commercial launch is expected in 2020. Initial launches and testing have been done by companies like AT&T and Verizon in a handful of U.S. cities.
5G's range is shorter than that of 4G LTE or 3G networks because its higher frequencies travel less distance. Hence more base stations (signal towers) need to be installed for good connectivity, which may be considered a disadvantage. The rollout of 5G will therefore take time, and ubiquitous coverage should not be expected in the immediate future.

What Is Massive MIMO Technology?


MIMO stands for Multiple-Input Multiple-Output. While there are many layers of depth in MIMO
technology, MIMO can essentially be boiled down to one principle: MIMO spatial multiplexing is the
simultaneous use of the same radio frequencies to transmit different signals. It means that several
transmitting antennas at a base station can transmit different signals and several receiving antennas at a
device can receive and divide them simultaneously.
Standard MIMO networks tend to use two or four antennas to transmit data and the same number to
receive it. Massive MIMO, on the other hand, is a MIMO system with an especially high number of
antennas. Massive MIMO increases the number of transmitting antennas (dozens or more than 100
elements) at a base station.
Massive MIMO offers two major innovations: 3D beamforming and MU-MIMO (multi-user MIMO). Beamforming is a traffic-signaling system for cellular base stations that identifies the most efficient data-delivery route to a particular user, and it reduces interference for nearby users in the process. At massive MIMO base stations, signal-processing algorithms plot the best transmission route through the air to each user. They can then send individual data packets in many different directions, bouncing them off buildings and other objects in a precisely coordinated pattern. In brief, think of massive MIMO as massive 3D beamforming that increases both horizontal and vertical coverage capabilities.
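To make the beamforming idea concrete, the short Python sketch below computes the weights a uniform linear array could use to steer a beam toward one user and then evaluates the resulting array gain in a few directions. It is an illustration only, not vendor code; the element count, spacing, and angles are arbitrary assumptions.

```python
# Minimal sketch of narrowband beamforming with a uniform linear array (ULA).
# All parameters (8 elements, half-wavelength spacing, chosen angles) are
# illustrative assumptions, not values from any particular base station.
import numpy as np

def steering_vector(n_elements: int, spacing_wavelengths: float, angle_deg: float) -> np.ndarray:
    """Array response of a ULA for a plane wave at angle_deg from broadside."""
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(angle_deg))
    return np.exp(1j * phase)

M = 8                 # number of antenna elements
d = 0.5               # element spacing in wavelengths
target_angle = 20.0   # direction of the intended user (degrees)

# Matched-filter (conjugate) beamforming: point the beam at the target user.
weights = steering_vector(M, d, target_angle).conj() / np.sqrt(M)

for angle in (20.0, 0.0, -35.0):
    response = weights @ steering_vector(M, d, angle)
    gain_db = 20 * np.log10(abs(response))
    print(f"angle {angle:6.1f} deg -> array gain {gain_db:6.1f} dB")
# The target direction shows the full array gain (about 10*log10(M) ≈ 9 dB here),
# while off-target directions are attenuated, which is what reduces interference
# for nearby users.
```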

Massive MIMO Matrix,


MU-MIMO further expands the total capacity per
basestation by enabling communication with
multiple devices using the same resources,
creating a virtually unified device side. The
simultaneous use of the antennas of multiple
devices help achieve the formation of virtual
large-scale MIMO channels. The combination of
these two innovations makes it possible to raise
wireless transmission speed by increasing the
number of antennas at the base station without
consuming more frequency bandwidth or
increasing modulation multiple values.
Comparison of conventional MU-MIMO and massive MIMO:
● Relation between number of BS antennas (M) and users (K): conventional MU-MIMO has M ≈ K, and both are small (e.g., below 10); massive MIMO has M ≫ K, and both can be large (e.g., M = 100 and K = 20).
● Duplexing mode: conventional MU-MIMO is designed to work with both TDD and FDD operation; massive MIMO is designed for TDD operation to exploit channel reciprocity.
● Channel acquisition: conventional MU-MIMO is mainly based on codebooks with a set of predefined angular beams; massive MIMO is based on sending uplink pilots and exploiting channel reciprocity.
● Link quality after precoding/combining: in conventional MU-MIMO it varies over time and frequency, due to frequency-selective and small-scale fading; in massive MIMO there are almost no variations over time and frequency, thanks to channel hardening.
● Resource allocation: in conventional MU-MIMO the allocation must change rapidly to account for channel quality variations; in massive MIMO the allocation can be planned in advance since the channel quality varies slowly.
● Cell-edge performance: conventional MU-MIMO is only good if the BSs cooperate; in massive MIMO the cell-edge SNR increases proportionally to the number of antennas, without causing more inter-cell interference.
Why Do We Need Massive MIMO for 5G?

Multiple-input/multiple-output (MIMO) technology is an established wireless communications technique for sending and receiving multiple data signals simultaneously over the same radio channel. MIMO
techniques play a prominent role in Wi-Fi communications, as well as 3G, 4G, and 4G LTE networks.
5G New Radio, however, takes it to the next level, introducing the concept of massive MIMO, which —
as the name implies — involves the application of MIMO technology on a much larger scale for greater
network coverage and capacity. Massive MIMO uses many more transmit and receive antennas to
increase transmission gain and spectral efficiency. To achieve massive MIMO capacity gain, multiple UEs
must generate downlink traffic simultaneously. Many variables impact the actual gain provided by
massive MIMO.
While there is no specific minimum number of antennas required for the application of massive MIMO,
the generally accepted threshold for a system is more than eight transmit and eight receive antennas. And
the number can be much higher, extending to systems with tens or even hundreds of antennas.
Massive MIMO, along with smart antenna techniques such as beamforming and beam steering, is among the key technologies for enabling the higher throughput and capacity gains promised by 5G. These techniques are essential for delivering the 100x data rates and the 1,000x capacity goals specified in the International Mobile Telecommunications-2020 (IMT-2020) vision.
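As a rough, idealized illustration of why more spatial streams matter, the sketch below evaluates the familiar bound C ≈ min(Nt, Nr) · B · log2(1 + SNR) for a few antenna configurations. The bandwidth and SNR values are assumptions chosen only to show the scaling, and real systems fall short of this bound.

```python
# Idealized spatial-multiplexing capacity bound: C ≈ min(Nt, Nr) * B * log2(1 + SNR).
# Bandwidth and SNR below are illustrative assumptions, not measured values.
import math

def mimo_capacity_bps(n_tx: int, n_rx: int, bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)
    streams = min(n_tx, n_rx)          # upper bound on independent spatial streams
    return streams * bandwidth_hz * math.log2(1 + snr_linear)

B = 100e6      # 100 MHz carrier, typical of sub-6 GHz 5G NR (assumption)
SNR = 20       # 20 dB post-processing SNR (assumption)

for n_tx, n_rx in [(2, 2), (4, 4), (8, 8), (64, 8)]:
    c = mimo_capacity_bps(n_tx, n_rx, B, SNR)
    print(f"{n_tx}x{n_rx}: ~{c / 1e9:.1f} Gbps upper bound")
# A 64x8 configuration is still limited to 8 streams toward a single device; the
# extra base-station antennas instead buy array gain and the ability to serve
# several users at once (MU-MIMO), which is where massive MIMO's gain comes from.
```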

Since massive MIMO uses many more antennas than the number of UEs in the cell, the beam is much
narrower, enabling the base station to deliver RF energy to the UE more precisely and efficiently. The
antenna's phase and gain are controlled individually, with the channel information remaining with the base
station, simplifying UE without adding multiple receiver antennas. Installation of a large number of base
station antennas will increase the signal-to-noise ratio in the cell, which leads to higher cell site capacity
and throughput. Since 5G massive MIMO implementation is on mmWave frequencies, the antennas
required are small and easy to install and maintain.

Infographic: Massive MIMO operation principle.

Still, for device designers, MIMO and beamforming at mmWave frequencies introduce many new
challenges. 5G NR standards provide the physical-layer frame structure, new reference signal, and new
transmission modes to support 5G enhanced mobile broadband (eMBB) data rates. Designers must
understand the 3D beam patterns and ensure the beams can connect to the base station and deliver the
desired performance, reliability, and user experience. Because massive MIMO, beamforming, and beam
steering represent such significant changes in how 5G NR devices connect across sub-6 GHz and
mmWave operating bands, validating the device quality of experience and performance on the network
becomes even more critical.

To implement MIMO and beamforming on 5G base stations, designers must carefully select hardware and
software tools to simulate, design, and test highly complex systems containing tens or even hundreds of
antenna elements.

Engineers will use active phased array antennas to implement MIMO and beamforming in base stations
and devices. Not only are active antennas essential to overcome signal propagation issues such as higher
path loss at mmWave frequencies, they also provide the ability to dynamically shape and steer beams to
specific users. Active antennas offer more flexibility and improve the performance of 5G
communications.

But deploying active phased array antennas in commercial wireless communications represents a major
change from the passive antennas used in previous generations. MIMO and beamforming technologies
increase capacity and coverage in a cell. For 5G devices and base stations, multi-antenna techniques
require support across multiple frequency bands — from sub-6 GHz to mmWave frequencies — and
across many scenarios, including massive IoT connections and extreme data throughput.

Aerospace and defense radar and satellite communications have long used active phased array antennas,
but these antenna arrays tend to be large and very expensive. Applying this technology to commercial
wireless — where the antenna arrays will need to be much smaller and less costly — introduces many
new challenges. There is a long list of 3GPP required tests for base stations, including radiated transmitter
tests and radiated receiver tests. Depending on the base station configuration, some FR1 tests require
radiated tests, and all FR2 tests require radiated tests.

Nearly all 5G MIMO testing requires over-the-air (OTA) testing. Early in development, OTA test
solutions need to characterize the 3D beam performance across the range of the antenna, including
aspects such as antenna gain, sidelobe, and null depth for the full range of 5G frequencies and
bandwidths.

Several research firms estimated that about 5% of service providers would soon start offering 5G wireless service, representing big progress from the 5G proofs of concept (PoCs) of 2018. 5G, as the next-generation cellular standard after 4G (LTE), has been defined across several global standards bodies: the ITU (International Telecommunication Union), 3GPP (Third Generation Partnership Project) and ETSI (European Telecommunications Standards Institute). The official ITU specification, International Mobile Telecommunications-2020 (IMT-2020), targets maximum downlink and uplink throughputs of 20 Gbps and 10 Gbps respectively, latency below 5 ms (milliseconds), and massive scalability.
5G will not be able to achieve IMT-2020 requirements, such as 20 Gbps, without some major breakthroughs. At this moment, it is not yet clear which technologies will do the most for 5G in the long run, but a few early favorites have emerged. The front runners include millimeter waves, small cells, full duplex, beamforming and, of course, massive MIMO.

Telecoms have already been adopting massive MIMO on existing 4G LTE networks, especially TD-LTE (Time-Division LTE) networks (for example, SoftBank in 2016 and China Mobile in 2017). FDD-LTE (Frequency-Division-Duplex LTE) massive MIMO came later because TD-LTE has the advantage of using the same frequency for both downlink and uplink, so uplink channel quality information can be reused for the downlink. FDD-LTE, on the other hand, requires additional radio resources to obtain the feedback information that is necessary to implement beamforming for the downlink. This means FDD massive MIMO carries a bigger overhead and is not as efficient as TD-LTE massive MIMO. It wasn't until 2018 that Verizon started massive MIMO trials with 96 antenna elements.
With 5G up and coming, commercial networks almost certainly have to adopt massive MIMO; typical 5G massive MIMO plans call for 64- or 128-element arrays at 3.5 GHz and more than 128 elements at 28 GHz or above.
What Are the Key Factors in Driving 5G’s Massive MIMO Adoption?
●​ Coverage:
In general, 5G will use higher radio spectrum than 2G/3G/4G, including centimeter waves and millimeter waves such as 3.5 GHz and 28 GHz. Radio propagation loss at these frequencies is much higher than in the sub-1 GHz and roughly 2 GHz bands used by earlier generations (a rough path-loss comparison is sketched after this list). 5G radio propagation is also strongly affected by the surrounding environment, such as building shadowing, reflection from walls, human bodies and rain attenuation. This sensitivity makes massive MIMO's coverage enhancement ability stand out.
●​ Capacity:
As we have mentioned, both beamforming and MU-MIMO can increase single-user
throughput and total network capacity per basestation. Massive MIMO becomes far more
practical at higher frequencies, such as those planned for many 5G deployments.
●​ Early Differentiation:
Both 4G and 5G are mainly based on 3GPP standardization, so 5G service providers would have trouble differentiating their networks from competitors (much like today's 4G landscape). However, at this point of transition, adopting massive MIMO first could allow an operator to offer a better 5G service than others. The better user experience could lead to user migration, and with a proper marketing campaign, that migration could hold steady through the 5G period.
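As referenced under Coverage above, the sketch below compares free-space path loss at a few carrier frequencies using the standard Friis-derived formula FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. The distance is an arbitrary assumption, and real links add shadowing, body and rain losses on top of this, so treat the numbers as a lower bound.

```python
# Free-space path loss comparison across carrier frequencies.
# FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44  (standard Friis-derived form)
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 0.5  # 500 m cell radius, an arbitrary example distance
for f in (900, 2100, 3500, 28000):   # MHz: legacy band, ~2 GHz, 5G mid-band, mmWave
    print(f"{f/1000:5.1f} GHz over {d*1000:.0f} m: FSPL ≈ {fspl_db(d, f):5.1f} dB")
# Going from 900 MHz to 3.5 GHz costs about 12 dB, and 28 GHz costs roughly 30 dB
# more than 900 MHz in free space alone, which is the gap that massive MIMO's
# array gain is asked to help close.
```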
However, there are some obstacles on 5G’s path of adopting massive MIMO.
●​ Huge Antenna:
General LTE base stations used to adopt a 2x2 MIMO architecture, and antenna elements have to be spaced at least half a wavelength apart to reduce mutual coupling between elements and the spatial correlation of the multipath channel. At 2.6 GHz the wavelength is about 11 centimeters, so a spacing of roughly 5.5 centimeters (half the wavelength) between elements is desired. However, the more antenna elements are deployed, the larger the antenna becomes. For example, the maximum length of a 128x128 massive MIMO array at 2.6 GHz could reach 1 meter (a rough size calculation is sketched after this list), which obviously cannot fit existing sites. The weight can also reach tens of kilograms or more, and a normal pole might not be able to handle it. Naturally, these high-functioning and heavy massive MIMO antennas cost much more than existing ones, which could be another factor postponing deployment.
●​ Device Capability:
4x4 MIMO is the current mainstream technology with chipset support since 2016
(Qualcomm, Huawei’s HiSilicon, etc.). However, for massive MIMO, including flexible
precoding and MU-MIMO, to prove its merit, its device capability must be better than
existing Transmission Mode (TM) 3 and 4, as current TM3/TM4 support is not enough.
TM3/TM4 devices cannot decode Channel State Information Reference Signal (CSI-RS)
and User Equipment-specific Reference Signal (UE-specific RS), and these devices cannot
provide feedback channel state information (which is necessary for massive MIMO
beamforming) based on the measurement of CSI-RS. TM9 and TM10, specified in 3GPP Release 10 and 11, can solve this issue, but only a handful of devices have TM9 activated commercially so far. Without device support, service providers are likely to delay adoption, and device suppliers are likely to delay their support until massive MIMO heads mainstream, creating a vicious spiral that could potentially kill 5G's massive MIMO adoption in its infancy.
●​ Trade War:
The adoption of massive MIMO requires close collaboration between network equipment makers and device suppliers. The largest network equipment maker happens to come from the target of the current trade war: China. Huawei is trying to solve the bottleneck of macro base stations and is generally considered the leader in massive MIMO technology. The ongoing trade war and concerns over national security could drive major telecoms away from Huawei, forcing them to wait for other, non-Chinese equipment makers to provide mature massive MIMO solutions.
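As referenced under the Huge Antenna obstacle above, the sketch below reproduces the half-wavelength spacing arithmetic. The 2.6 GHz carrier and the 8x16 panel layout are assumptions used only to show how element count translates into physical size.

```python
# Half-wavelength spacing arithmetic for a massive MIMO panel.
# Carrier frequency and the 8x16 element layout are illustrative assumptions.
C = 3e8  # speed of light, m/s

def half_wavelength_m(freq_hz: float) -> float:
    return (C / freq_hz) / 2

f = 2.6e9
spacing = half_wavelength_m(f)
print(f"Wavelength at {f/1e9:.1f} GHz: {2*spacing*100:.1f} cm, "
      f"half-wavelength spacing: {spacing*100:.1f} cm")

# 128 elements arranged as an 8 x 16 planar panel (one possible layout):
rows, cols = 8, 16
height = rows * spacing
width = cols * spacing
print(f"8x16 panel: roughly {height*100:.0f} cm x {width*100:.0f} cm "
      f"before radome and electronics")
# At 2.6 GHz the wavelength is about 11.5 cm, so elements sit ~5.8 cm apart and a
# 128-element panel is already on the order of half a meter by a meter, consistent
# with the ~1 m figure quoted above. At 28 GHz the same element count shrinks by
# roughly a factor of ten, which is why mmWave arrays stay compact.
```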
Introduction to Cisco Unified Wireless Network

Overview of Cisco’s Wireless Architecture and Its Significance in Modern Enterprise


Networking

In today’s fast-paced digital landscape, wireless connectivity has become an essential


component of nearly every enterprise network. Employees, customers, and IoT devices rely on
stable and high-performance wireless LAN (WLAN) connections for day-to-day tasks,
collaboration, and real-time data exchange. Cisco, a leader in networking technologies, has
addressed this need through a well-defined, centralized architecture known as the Cisco Unified
Wireless Network (CUWN). This framework integrates multiple elements—such as Lightweight
Access Points (LAPs), Wireless LAN Controllers (WLCs), security appliances, and network
management tools—to provide a highly scalable, secure, and manageable wireless solution.

Historically, wireless networks evolved from simple setups to complex architectures supporting
mobility, quality of service (QoS), and strong security. Early designs relied heavily on
autonomous access points that managed their own configurations independently, leading to
inconsistent policy enforcement and a lack of centralized visibility. As enterprises grew in size
and complexity, the need arose for a more streamlined solution. Cisco’s Unified Wireless
Network addressed these challenges by offering a centralized point of control (the WLC) to
manage a fleet of lightweight access points.

Cisco’s wireless architecture is significant in modern networks because of:

1. Centralized Management: Administrators can configure, manage, and monitor all access
points from a single interface.

2. Scalability: New lightweight access points can be seamlessly added to accommodate


growth, without the complexity of reconfiguring each AP individually.

3. Security Enforcement: Policy enforcement occurs in a centralized, consistent manner


across the entire network, enhancing overall security.

4. Mobility and Roaming: CUWN supports advanced roaming features that allow client
devices to seamlessly transition between access points and locations.

5. Visibility and Analytics: Integrated platforms such as Cisco Prime Infrastructure and
Cisco DNA Center provide deep visibility into client behavior, traffic patterns, and
network health.

By focusing on centralization, automation, and intelligent radio resource management, Cisco’s


Unified Wireless Network continues to be a foundation for modern wireless designs.
Evolution of the Cisco Unified Wireless Network Framework

Cisco’s journey into unified wireless networking can be traced back to its acquisition of Aironet
in 1999, which brought significant expertise in wireless LAN technologies. Over the years, Cisco’s
WLAN solutions evolved from standalone autonomous access points to more integrated
platforms.

Key milestones include:

• Autonomous Access Points: Early Cisco Aironet devices had to be individually configured
with SSIDs, security parameters, and channels. This model, while sufficient for small
deployments, became unwieldy in larger networks.

• Lightweight Access Points and LWAPP/CAPWAP: Cisco introduced lightweight access


points and the Lightweight Access Point Protocol (LWAPP), later transitioning to the
standards-based Control and Provisioning of Wireless Access Points (CAPWAP). This shift
offloaded configuration and control functions from the AP to a centralized WLC.

• Unified Access and Converged Access: Cisco integrated wireless functions into other
platforms such as switches and routers, supporting embedded controllers that simplified
deployments in branch offices or distributed environments.

• Cisco DNA (Digital Network Architecture): With the introduction of DNA Center, Cisco
moved toward a software-defined approach, providing advanced analytics, automation,
and assurance capabilities across both wired and wireless networks.

The evolution of Cisco’s Unified Wireless Network has been driven by the pressing demands of
scalability, security, and simplicity. This framework offers network engineers a robust platform
for delivering reliable wireless access in enterprise environments of all sizes.

The Role of Cisco Unified Wireless Network in Improving Wireless Infrastructure

A primary goal of the Cisco Unified Wireless Network is to unify multiple aspects of wireless
infrastructure—radio management, security, mobility, and troubleshooting—under a single
umbrella. By separating the data plane from the control plane, Cisco’s solution allows access
points to handle user traffic efficiently while relying on the WLC for centralized intelligence. This
distributed model fosters both operational simplicity and high performance.

The net improvements include:

• Enhanced Security: Unified policies ensure consistent security enforcement, mitigating


common vulnerabilities related to misconfigurations.
• Streamlined Operations: Bulk configuration and automated updates reduce the
operational overhead compared to managing autonomous APs.

• Better Performance and Roaming Experience: Features like Radio Resource


Management (RRM) and Dynamic Channel Assignment (DCA) optimize channel usage
and transmission power, while fast roaming protocols (802.11r, 802.11k, and 802.11v)
ensure minimal latency as clients move across the network.

• Centralized Visibility: Network managers gain insights into performance, client behavior,
and potential interference sources, speeding up troubleshooting and capacity planning.

Lightweight Access Points (LAPs)

Definition and Functionality of Lightweight Access Points

A Lightweight Access Point (LAP) is a device that provides wireless connectivity to end clients
but relies on a centralized Wireless LAN Controller for management and control functions. In
other words, rather than storing network and security configurations locally, the LAP downloads
these settings from the controller during the boot process. The AP maintains a secure tunnel to
the controller for continual updates, real-time configuration changes, and monitoring.

Key functionalities:

1. Forwarding Traffic: LAPs forward client data traffic, often encapsulating it in CAPWAP
tunnels to send back to the WLC (depending on the chosen deployment model).

2. Beaconing and Probe Responses: The LAP broadcasts SSIDs and responds to probe
requests, but the management intelligence behind these actions originates from the
controller.

3. RF Scanning: LAPs gather RF data—such as noise floor, interference, and rogue AP


signals—and forward these metrics to the WLC for analysis and automatic radio
adjustments.
Benefits of Using LAPs Over Traditional Autonomous Access Points

Compared to the older model of autonomous APs, using lightweight APs confers several
advantages:

1. Centralized Configuration: Administrators set up and manage all wireless parameters


(SSID, security, VLAN mappings) on the WLC, which pushes these configurations to every
connected LAP.

2. Simplified Provisioning: Rolling out new APs is straightforward; the device automatically
discovers and registers with the WLC, downloads its configuration, and becomes
operational with minimal manual intervention.

3. Consistent Security Policies: Security settings remain uniform across the entire WLAN,
reducing the risk of misalignment in encryption standards or authentication methods.

4. Scalability: Large networks can include hundreds or even thousands of APs under the
same management umbrella, streamlining updates and maintenance.

5. Efficient RF Management: Centralized intelligence (e.g., RRM, DCA) ensures each AP


operates on the optimal channel and power setting, preventing co-channel interference
and maximizing coverage.

Technical Description of the CAPWAP Protocol

Control and Provisioning of Wireless Access Points (CAPWAP) is a standards-based (IETF RFC
5415) protocol that enables communication between LAPs and WLCs. CAPWAP was born out of
Cisco’s earlier Lightweight Access Point Protocol (LWAPP) and is designed to address the needs
of large-scale WLAN deployments.

CAPWAP includes two key tunnels between an AP and a WLC:

1. Control Tunnel: Encrypted via DTLS (Datagram Transport Layer Security), it carries
management traffic such as AP configuration, status, and control messages.

2. Data Tunnel: Also encrypted, it carries end-user traffic (depending on the forwarding
mode configured). In some cases, local switching (or FlexConnect mode) may be used,
where data traffic from the AP is bridged onto a local network instead of being tunneled
to the WLC.

Key aspects of CAPWAP:

• Discovery and Join Process: LAPs discover WLCs through methods like DHCP Option 43,
DNS lookup (cisco-capwap-controller.localdomain), or broadcast. Once discovered, the
AP joins the controller, establishes a CAPWAP tunnel, and downloads the relevant
configuration and firmware if needed.

• Heartbeat and Keepalives: The AP and WLC exchange periodic keepalive messages to
maintain tunnel health.

• Firmware Management: The WLC manages AP firmware versions, ensuring consistency


across all connected LAPs.
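To make the discovery step concrete, the short helper below builds the vendor-specific DHCP Option 43 value commonly documented for lightweight AP controller discovery (sub-option type 0xF1, a length byte equal to 4 times the number of controllers, then each WLC management IP in hex). Treat it as a sketch: the IP addresses are placeholders, and the encoding should be verified against the documentation for your specific DHCP server and AP model.

```python
# Sketch: build the hex string for DHCP Option 43 used in CAPWAP controller
# discovery. Sub-option 0xF1 is followed by a length byte (4 bytes per controller)
# and the WLC management IPs. IPs below are placeholders (RFC 5737 test range).
import ipaddress

def capwap_option43(controller_ips: list[str]) -> str:
    payload = b"".join(ipaddress.IPv4Address(ip).packed for ip in controller_ips)
    tlv = bytes([0xF1, len(payload)]) + payload
    return tlv.hex()

if __name__ == "__main__":
    wlcs = ["192.0.2.10", "192.0.2.11"]   # placeholder WLC management IPs
    print(capwap_option43(wlcs))           # prints 'f108c000020ac000020b'
    # On many DHCP servers this hex string is configured as the Option 43 value
    # for the scope/VLAN where the APs receive their addresses.
```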

Deployment Considerations and Best Practices for LAPs

When deploying LAPs, consider the following best practices:

1. Controller Redundancy: Ensure multiple WLCs or high availability configurations so that


if one controller fails, APs can failover without impacting service.

2. AP Placement: Conduct a thorough site survey to place APs optimally for coverage and
capacity. Avoid physical obstructions, external interference sources, and channel
overlaps.

3. Power and Cabling: In many deployments, APs draw power via Power over Ethernet
(PoE). Make sure switches support the required PoE standard (e.g., 802.3af, 802.3at, or
802.3bt) and have enough power budget.

4. Network Segmentation: Place APs on dedicated VLANs or subnets, and configure DHCP
option 43 or DNS to help APs discover controllers quickly.
5. Security Configurations: Always use secure management protocols and encryption
standards (e.g., WPA2/WPA3 Enterprise) to mitigate wireless threats.

Wireless LAN Controllers (WLCs)

Role and Importance of Wireless LAN Controllers in Network Management

A Wireless LAN Controller (WLC) is the cornerstone of the Cisco Unified Wireless Network,
responsible for the centralized management and control of all connected lightweight APs.
Rather than distributing WLAN configuration across multiple devices, the WLC consolidates
these tasks into a single platform, simplifying operations and ensuring network-wide
consistency.

The WLC’s importance stems from its ability to:

1. Manage Configurations: Administrators can create or modify WLAN profiles, SSIDs, and
security parameters at one centralized console.

2. Enforce Policies: Security policies, QoS rules, and VLAN mappings are all controlled from
one location.

3. Monitor Health and Performance: The WLC provides real-time visibility into radio
metrics, client connectivity statistics, and bandwidth usage across the wireless network.

4. Handle Radio Optimization: Leveraging features such as RRM, the WLC constantly
evaluates environmental factors and adapts power and channel assignments for optimal
performance.

5. Streamline Troubleshooting: Because the WLC is aware of all connected APs and clients,
it can expedite troubleshooting, log collection, and fault isolation.

Key Features and Capabilities

Cisco WLCs offer a range of sophisticated features that address the demands of enterprise
WLANs:

1. Centralized Policy Enforcement: Administrators can define Access Control Lists (ACLs),
firewall rules, and role-based access policies that are universally enforced.

2. High Availability (HA): WLCs support redundancy models, allowing multiple controllers
to back each other up. In active-standby scenarios, APs can failover seamlessly.

3. Advanced Security: Integration with the Cisco Identity Services Engine (ISE) for 802.1X
authentication, posture assessment, and guest portal capabilities.
4. Mobility Management: Coordinated roaming policies that ensure minimal disruption
when wireless clients move across subnets or geographic locations.

5. QoS and Traffic Prioritization: The WLC can mark or prioritize traffic for latency-sensitive
applications such as voice and video.

6. Mesh Networking Support: Some WLC platforms support outdoor mesh deployments,
where certain APs communicate wirelessly with each other to extend coverage to hard-
to-reach areas.

7. Application Visibility and Control (AVC): Deep packet inspection to classify and control
application traffic, aiding in bandwidth management and security enforcement.

Types of Cisco WLCs

Cisco offers various Wireless LAN Controller models and deployment options to suit different
organizational sizes and use cases. Common categories include:

1. Physical Appliances: Purpose-built hardware devices such as the Cisco 3504, 5508, 5520,
8540, and the newer Catalyst 9800 series. These appliances often vary in capacity
(number of supported APs and client devices) and performance metrics (throughput,
CPU, memory).

2. Virtual Controllers: Cisco offers virtualized WLCs (vWLC or Catalyst 9800-CL) that can run
on hypervisors like VMware ESXi or in cloud environments. This approach helps
organizations leverage existing virtualization infrastructure.
3. Embedded Solutions: Some Cisco switches and routers (e.g., Catalyst 3850/9300 series,
5760 Wireless Controller) have built-in controller capabilities. This converged approach
can simplify deployments in branch offices or distributed environments.

4. Cloud-Managed Solutions (Meraki): While technically a separate product line from


traditional CUWN, Cisco Meraki provides a cloud-first model for AP and network
management, appealing to organizations that prefer off-premise management.

Configuration Basics and Common Deployment Models

Configuring a WLC typically involves the following basic steps:

1. Initial Setup: Assign an IP address, default gateway, and management VLAN to the WLC.
This can be done through the console port or a web-based setup wizard.

2. Management Interfaces: Configure interfaces and VLANs on the WLC to map to specific
traffic types (e.g., management, AP manager, and dynamic interfaces for WLANs).

3. WLAN Creation: Define SSIDs and associate them with specific security settings and
VLANs. This includes selecting encryption modes (WPA2, WPA3), key management (PSK,
802.1X), and setting advanced parameters.

4. RF Profiles and RRM Settings: Adjust parameters for channels, power, and coverage
thresholds if needed, or rely on Cisco’s default RRM configuration for automated
adjustments.

5. AP Registration and Grouping: APs discover the WLC, join it, and automatically
download the relevant configuration. Administrators can then group APs (AP groups,
FlexConnect groups) for more granular control.

Common Deployment Models:

1. Centralized (Local Mode): All AP traffic is tunneled back to the controller. This model
simplifies policy enforcement but can increase WAN bandwidth utilization in remote site
scenarios.

2. FlexConnect (Local Switching): APs switch data traffic locally at the remote site but still
rely on the WLC for control and management. This reduces WAN load and offers
resiliency if the WLC connection is temporarily lost.

3. Converged Access: The WLC function resides on a Catalyst switch (e.g., 3850, 9300) or
router, integrating wireless and wired policies under a single infrastructure device.

4. Cloud-Managed (Meraki): While not strictly part of the CUWN, this model pushes most
control-plane functionality into the cloud, with simple on-site APs.
Designing Wireless Networks with LAPs and WLCs

Network Planning and Site Survey Fundamentals

Proper planning and site surveys are foundational for a robust wireless network design. The goal
is to determine the appropriate quantity, placement, and configuration of APs to meet coverage
and capacity requirements.

1. Identify Coverage Areas and Requirements: Define the physical space, the number of
users, types of devices, and typical throughput needs.

2. Predictive Modeling: Utilize software tools (e.g., Ekahau, AirMagnet) to model coverage
based on floor plans, wall attenuation values, and antenna patterns.

3. On-Site Survey: Validate predictive models with actual measurements, testing signal
strength (RSSI), signal-to-noise ratio (SNR), and interference sources.

4. Capacity Considerations: Factor in the number of concurrent users, bandwidth-intensive


applications, and potential QoS demands (voice/video).

5. Environmental Constraints: Identify structural elements (thick walls, metal shelves) or


external sources of interference (microwave ovens, neighboring Wi-Fi networks).

RF Design Considerations: Coverage, Capacity, and Interference Management

Radio Frequency (RF) design aims to balance coverage with capacity while minimizing
interference:

1. Coverage vs. Capacity: Overlapping coverage ensures seamless roaming, but excessive
overlap can cause co-channel interference (CCI). Striking the right balance is crucial.

2. Channel Planning: In the 2.4 GHz band, only three non-overlapping channels exist (1, 6,
11 in most regions). The 5 GHz band offers many more channels, enabling better channel
reuse. Automated channel assignment from Cisco’s RRM can significantly simplify
management.

3. Transmit Power Control (TPC): TPC algorithms adjust AP transmit power based on
feedback from neighboring APs and client devices, balancing coverage with interference
mitigation.

4. Load Balancing: Controllers can steer clients from congested APs/channels to less-
utilized ones. Band steering can encourage dual-band clients to use 5 GHz for improved
performance.
5. Antenna Selection and Orientation: Depending on the environment, use directional
antennas to target specific coverage areas or omnidirectional antennas for broad
coverage. Antenna orientation and gain must match the site survey plan.
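To illustrate the kind of decision DCA automates, the sketch below runs a simple greedy channel assignment over a hypothetical set of neighboring APs in the 2.4 GHz band. It is a toy model (Cisco's RRM also weighs measured noise, interference, and load), and the AP names and neighbor relationships are made up.

```python
# Toy greedy channel assignment for 2.4 GHz, using only the non-overlapping
# channels 1, 6, 11. Cisco's RRM/DCA solves a richer version of this problem
# using measured RF data; this is purely illustrative.
CHANNELS = [1, 6, 11]

# Hypothetical APs and which of them can hear each other (co-channel neighbors).
neighbors = {
    "AP-1": {"AP-2", "AP-3"},
    "AP-2": {"AP-1", "AP-3", "AP-4"},
    "AP-3": {"AP-1", "AP-2"},
    "AP-4": {"AP-2"},
}

def assign_channels(adjacency: dict[str, set[str]]) -> dict[str, int]:
    """Greedy graph coloring: give each AP a channel not used by an already
    assigned neighbor, falling back to the least-used channel if all collide."""
    assignment: dict[str, int] = {}
    # Visit the most-constrained APs (most neighbors) first.
    for ap in sorted(adjacency, key=lambda a: len(adjacency[a]), reverse=True):
        used = {assignment[n] for n in adjacency[ap] if n in assignment}
        free = [ch for ch in CHANNELS if ch not in used]
        if free:
            assignment[ap] = free[0]
        else:
            assignment[ap] = min(CHANNELS, key=lambda ch: list(assignment.values()).count(ch))
    return assignment

print(assign_channels(neighbors))
# Example output: {'AP-2': 1, 'AP-1': 6, 'AP-3': 11, 'AP-4': 6}
```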

Integration of LAPs and WLCs into an Enterprise Environment

Integrating LAPs and WLCs into an existing enterprise network involves both physical and logical
considerations:

1. Physical Network Integration: Ensure each AP can reach the WLC via Layer 2 or Layer 3.
PoE-capable switches simplify AP deployment.

2. VLAN and IP Subnetting: Map SSIDs to VLANs to segregate traffic types (e.g., employee
vs. guest). The WLC itself may have multiple interfaces for management, AP traffic, and
guest services.

3. Security Integration: Deploy 802.1X for corporate WLANs in conjunction with Cisco ISE
or other RADIUS servers. Use WPA2-Enterprise or WPA3-Enterprise encryption for
maximum security.

4. Guest Access Design: Isolate guest traffic either via tunneling to a DMZ or by using a
dedicated VLAN and captive portal.

5. Policy Enforcement: Leverage ACLs on the WLC or the downstream firewall to restrict
unauthorized traffic.

VLANs, SSIDs, and Security Configurations

VLAN and SSID design is crucial in a well-architected WLAN:

• Minimize SSIDs: Each SSID beacon adds overhead. Best practice typically recommends
three to four SSIDs per band to avoid excessive management traffic.

• SSID to VLAN Mappings: Associate each SSID with a unique VLAN to segment traffic
(e.g., corporate, guest, BYOD).

• Security Mechanisms:

o Enterprise Encryption: WPA2/WPA3-Enterprise with 802.1X authentication


ensures strong encryption and granular access control.

o Pre-Shared Keys (PSK): Suitable for small or less critical networks, but not
advisable for large enterprises where unique user or device credentials are
preferred.
o Captive Portals: Commonly used for guest access, requiring users to accept
terms of service or authenticate via a web page.

• Integration with NAC (Network Access Control): Tools like Cisco ISE can perform posture
assessments and dynamically assign VLANs or ACLs based on user/device profiles.

Redundancy and Failover Strategies

High availability (HA) is a critical aspect of enterprise WLAN design:

1. N+1 Redundancy: Have an extra WLC on standby. In case the active WLC fails, APs
automatically rejoin the standby, preserving wireless service.

2. SSO (Stateful Switchover): In certain controller pairs (e.g., Catalyst 9800 series), stateful
switchover allows client sessions to persist without reauthentication in the event of a
controller failover.

3. Geographical Redundancy: For distributed enterprises, place backup controllers in


separate data centers or locations to maintain service continuity in site-level outages.

4. Multiple AP Manager Interfaces: Ensure enough AP Manager interfaces and associated


VLANs to handle AP registration if one interface becomes unavailable.

Advanced Features and Optimization

Cisco CleanAir Technology for Interference Detection and Mitigation

Cisco CleanAir technology leverages specialized silicon within APs to continuously monitor the
RF environment for potential interference sources—such as microwave ovens, Bluetooth
devices, or rogue APs. When interference is detected, CleanAir identifies the source, quantifies
its severity, and relays this data to the WLC. Based on this intelligence, the WLC can trigger
automated responses such as channel changes or power adjustments through RRM. This helps
maintain optimal performance and reduces the manual workload for network administrators.

Radio Resource Management (RRM) and Dynamic Channel Assignment (DCA)

RRM is the umbrella feature in Cisco’s WLC software that optimizes the use of RF resources. It
encompasses several sub-features:

1. Dynamic Channel Assignment (DCA): Automatically selects the best channel for each AP
by analyzing noise levels, interference, and AP density.

2. Transmit Power Control (TPC): Adjusts AP transmit power to balance coverage and
mitigate interference.
3. Coverage Hole Detection and Correction (CHDC): Identifies areas where clients receive
weak signals and increases power (or flags administrators) to remedy coverage gaps.

4. 802.11 Band Steering: Encourages dual-band clients to associate with 5 GHz for less
congestion.

RRM operates continuously, adapting to changing environmental conditions such as newly


introduced interference or shifting client density.

Fast Secure Roaming (e.g., 802.11r, 802.11k, 802.11v)

Seamless roaming is essential for latency-sensitive applications like voice over Wi-Fi (VoWLAN)
and real-time video:

1. 802.11r (Fast BSS Transition): Speeds up roaming by allowing clients and APs to cache
and reuse security credentials, reducing the time needed for reauthentication.

2. 802.11k (Radio Resource Measurement): Enables clients to query APs for optimized
roaming decisions, providing details like neighbor AP channels and signal strength.

3. 802.11v (Wireless Network Management): Allows the network to guide clients towards
better APs based on load, signal, or application type.

Together, these protocols substantially reduce roaming latency, resulting in smoother voice calls
and video sessions as users move around.

Application Visibility and Control (AVC)

Application Visibility and Control (AVC) provides deep packet inspection (DPI) at the WLC level,
identifying and classifying application traffic. By recognizing applications such as Skype,
YouTube, Office 365, or custom business apps, administrators can enforce granular policies. For
instance, video streaming traffic can be throttled during peak hours, or business-critical
applications can receive priority. AVC is instrumental in optimizing bandwidth usage and
enforcing corporate compliance requirements.

Integration with Cisco Prime Infrastructure and Cisco DNA Center

Cisco Prime Infrastructure and Cisco DNA Center are powerful management solutions that
extend visibility and automation across the entire enterprise network:

1. Cisco Prime Infrastructure: Offers unified management for wired and wireless networks.
It provides features like performance reporting, topology maps, and automated
configuration templates for APs and WLCs.
2. Cisco DNA Center: Represents Cisco’s next-generation approach to software-defined
networking. DNA Center supports intelligent automation, assurance, and analytics for
both wired and wireless networks. Advanced features include:

o AI/ML-Driven Insights: Proactive identification of potential network issues,


guided remediation steps, and recommended optimizations.

o Policy-Based Automation: Granular policy enforcement aligning with business


intent.

o Assurance and Analytics: Continuous data collection to analyze user experience,


application performance, and device health.

By integrating the WLC and APs with these platforms, organizations can leverage a holistic,
policy-driven approach to network management and streamline troubleshooting processes.

Common Challenges and Troubleshooting

Potential Deployment Issues and Common Misconfigurations

Even the most thorough design can encounter issues during deployment or ongoing operations.
Common pitfalls include:

1. Misaligned VLANs or Subnets: Incorrect interface mappings on the WLC can lead to
clients receiving IP addresses from the wrong subnet or failing to obtain addresses
altogether.

2. Overlapping SSIDs: Deploying too many SSIDs or overlapping configurations can cause
management overhead and confusion for users.

3. Insufficient Power at APs: If switches do not supply adequate PoE, APs may shut down
radios or fail to operate at full capacity (e.g., 4x4:4 radio might revert to 3x3:3).

4. Controller Discovery Failures: Improper DHCP or DNS settings can lead to APs failing to
discover the WLC.

5. Rogue AP Interference: Unauthorized APs or neighboring networks that operate on the


same channels can degrade performance if not properly mitigated.

Debugging CAPWAP Communication Issues

When CAPWAP tunnels fail to establish, APs cannot join the WLC. Common steps to
troubleshoot include:
1. Check Network Connectivity: Ensure Layer 3 reachability between APs and the WLC.
Verify IP addresses, default gateways, and routing paths.

2. Validate Discovery Options: Confirm that DHCP option 43 or DNS (for cisco-capwap-
controller) is configured correctly.

3. Firewall Filters: CAPWAP uses UDP ports 5246 (control) and 5247 (data). Any firewalls on
the path must allow these ports.

4. Check Certificates: CAPWAP encryption may fail if the AP’s certificate is invalid or
expired, or if the WLC’s trust settings are not updated.
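A few of these checks can be scripted from an admin host. The sketch below assumes a Linux/macOS host with Python 3 and uses a made-up domain name; it attempts the DNS-based discovery lookup and sends a throwaway UDP datagram to the CAPWAP ports so that routing or ICMP "unreachable" problems surface quickly. Silence is not proof of reachability, since UDP gives no positive acknowledgement.

```python
# Sketch of basic CAPWAP reachability checks from an admin host.
# The DNS domain below is a placeholder; CAPWAP uses UDP 5246 (control) and
# 5247 (data), which any on-path firewall must permit.
import socket

CAPWAP_PORTS = (5246, 5247)

def discover_via_dns(domain: str) -> list[str]:
    """Resolve the well-known discovery name lightweight APs try via DNS."""
    fqdn = f"cisco-capwap-controller.{domain}"
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(fqdn, None, socket.AF_INET)})
    except socket.gaierror:
        return []

def probe_udp(ip: str, port: int, timeout: float = 2.0) -> str:
    """Send a dummy datagram. An immediate OSError usually means an ICMP
    'unreachable' came back (routing/firewall issue); a timeout is inconclusive."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((ip, port))   # connected UDP socket so ICMP errors are reported
            s.send(b"\x00")
            s.recv(64)
            return "got a reply"
        except socket.timeout:
            return "no reply (inconclusive; verify firewall rules separately)"
        except OSError as err:
            return f"error: {err} (check routing/firewall)"

if __name__ == "__main__":
    controllers = discover_via_dns("example.com")   # placeholder domain
    print("DNS discovery:", controllers or "no cisco-capwap-controller record")
    for ip in controllers:
        for port in CAPWAP_PORTS:
            print(f"  {ip}:{port} -> {probe_udp(ip, port)}")
```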

Access Point Registration Failures and Controller Connectivity Issues

AP registration may fail if:

1. Image Mismatch: The WLC may attempt to push a new firmware image to the AP. If the
AP fails to download or install it, registration stalls.

2. Rogue AP Policy: The WLC might detect an AP as rogue if it’s not listed in its MAC filter.
Administrators should verify the AP is approved to join.

3. Exceeded License Count: If the WLC license capacity is full, additional APs cannot
register.

4. HA Configuration Errors: In HA setups, APs may get “stuck” attempting to join a


secondary controller if the primary is unreachable and configurations are mismatched.

Performance Optimization Tips

Ensuring peak WLAN performance involves a combination of best practices and continuous
optimization:

1. Regular RRM Reviews: While RRM automates many tasks, periodic reviews of channel
assignments and power levels can reveal anomalies in high-density or dynamic RF
environments.

2. Load Balancing and Band Steering: Encourage dual-band clients to connect at 5 GHz.
Monitor client distribution and usage to avoid overloading any single channel or band.

3. Use of Latest Firmware: Ensure the WLC and APs are on recommended software
versions, taking advantage of performance enhancements and bug fixes.

4. Monitoring Tools: Leverage Cisco Prime or DNA Center to track performance metrics like
throughput, latency, and coverage holes. Use these insights to adjust AP placement or
configuration.
5. QoS Configuration: Prioritize real-time applications such as voice and video to maintain
quality under heavy load.

Conclusion

Recap of the Key Principles of Designing Cisco Unified Wireless Networks

Designing a robust Cisco Unified Wireless Network requires a careful balance of coverage,
capacity, security, and manageability. By leveraging Lightweight Access Points (LAPs) and
Wireless LAN Controllers (WLCs), organizations gain centralized control, intelligent radio
management, and consistent policy enforcement. Key principles include:

• Thorough Planning: Conduct predictive site surveys and on-site measurements to


validate coverage and capacity goals.

• Centralized Management: Use WLCs for unified configuration, security, and


troubleshooting.

• Advanced RF Optimization: RRM, DCA, and CleanAir collectively help maintain an


optimal wireless environment.

• Security and Segmentation: Implement WPA2/WPA3-Enterprise, VLAN mappings, and


integration with Cisco ISE for comprehensive security.

• High Availability: Employ redundancy and failover strategies to ensure minimal


downtime.

Future Trends in Cisco Wireless Solutions and Next-Generation Wireless Standards

As wireless technology advances, Cisco continues to evolve its portfolio to meet the demands of
future networks:

1. Wi-Fi 6 (802.11ax) and Wi-Fi 6E: Offering improved efficiency, higher throughput, and
access to 6 GHz spectrum (in Wi-Fi 6E), these standards promise better performance in
dense environments.

2. Cisco Catalyst 9800 Series: Represents the next generation of wireless controllers,
offering powerful hardware and advanced software features, including full integration
with Cisco DNA Center.

3. Software-Defined Access (SD-Access): Extends the principles of software-defined


networking (SDN) to the campus, unifying wired and wireless under a single policy and
automation framework.
4. Multi-Cloud and Edge Computing Integration: Wireless networks are increasingly
integrated with edge services for IoT, analytics, and distributed computing. Cisco
solutions aim to simplify orchestration across hybrid environments.

5. AI-Driven Insights: As machine learning capabilities grow, Cisco’s management platforms


will offer more predictive and proactive recommendations for wireless optimization and
security.

By staying abreast of these trends and continually refining designs, network engineers can
ensure their Cisco Unified Wireless Networks are positioned to support emerging applications,
devices, and business needs.
1. Introduction

Wireless communication has revolutionized the way individuals and organizations connect and
exchange information. From the early days of analog cellular networks to today’s sophisticated,
high-speed broadband cellular and short-range wireless connections, the demand for mobility,
convenience, and rapid data transfer has consistently risen. However, this accelerated progress
has also introduced numerous security challenges. Cyber threats to wireless communications
have become more pervasive, demanding robust protocols, stronger encryption methods, and
standardized best practices to protect data integrity, confidentiality, and user privacy.

This chapter provides an in-depth exploration of key wireless security protocols and standards,
tracing their evolution over time and highlighting the measures implemented to guard against
emerging threats. We begin with the foundational cellular network standard, GSM, examining
its architecture, encryption, and known vulnerabilities. We then move to UMTS, illustrating how
it improved upon GSM, particularly in areas of mutual authentication and confidentiality. Next,
we turn our attention to Bluetooth—a short-range wireless technology—discussing various
versions, pairing mechanisms, common attack vectors, and mitigation strategies. We then delve
into Wi-Fi security standards, beginning with WEP (Wired Equivalent Privacy) and moving on to
WPA2 (Wi-Fi Protected Access 2), outlining encryption methods, known attacks, and best
practices for deployment.

Through detailed technical explanations, real-world case studies, and references to industry
standards, this chapter aims to provide readers with a thorough understanding of both historical
and modern approaches to wireless security. By the end, readers should have a clear
perspective on the evolution of wireless security protocols—from early vulnerabilities to today’s
multifaceted defenses—and gain insight into future considerations as wireless technologies
continue to advance.

2. Security in GSM (Global System for Mobile Communications)

2.1 GSM Architecture Overview

The GSM (Global System for Mobile Communications) standard was developed by the European
Telecommunications Standards Institute (ETSI) in the late 1980s and became the dominant 2G
cellular network system worldwide (ETSI, 1992). GSM comprises several key components:

1. Mobile Station (MS): The end-user device, commonly a mobile phone or other cellular-
enabled device. Each MS houses a Subscriber Identity Module (SIM) that stores user
credentials, such as the International Mobile Subscriber Identity (IMSI) and the
authentication key Ki.

2. Base Transceiver Station (BTS): The radio access point that communicates directly with
the MS. The BTS handles the radio link protocols, transmitting and receiving data over
the air interface (commonly known as Um interface).

3. Base Station Controller (BSC): Manages multiple BTSs, handling tasks such as radio
resource allocation, frequency management, and handovers between BTSs.

4. Mobile Switching Center (MSC): Acts as the core switching node for voice calls, SMS,
and other services. It performs functions such as routing calls, managing mobility, and
interfacing with external networks (e.g., PSTN).

5. Home Location Register (HLR): A central database that contains details about each
subscriber, including their IMSI, phone number (MSISDN), subscribed services, and
location information.

6. Visitor Location Register (VLR): A regional database that temporarily stores subscriber
data for MSs currently roaming in its coverage area. It reduces the need for frequent
queries to the HLR.

7. Authentication Center (AUC): A protected database that stores the secret


authentication key Ki for each SIM and generates security triplets used for
authentication and encryption.

8. Equipment Identity Register (EIR): Stores the International Mobile Equipment Identity
(IMEI) of mobile equipment and classifies devices as white-listed, blacklisted, or gray-
listed based on their status.

Together, these components create a robust, hierarchical system enabling seamless voice and
data services. However, as with many early standards, GSM’s security mechanisms were
designed under certain assumptions that did not foresee modern threat landscapes.
2.2 GSM Encryption Mechanisms

This diagram provides a concise but complete view of how GSM authentication and encryption
work at a high level. Below is a step-by-step breakdown of each component and how they
interact:

1. Key Inputs and Entities

1. Ki (Subscriber’s Secret Key)

o A unique secret key permanently stored on the SIM (Subscriber Identity Module).

o The same secret key is also stored in the GSM operator’s Authentication Center
(AuC) database.

2. RAND (Random Challenge)

o A random number generated by the GSM network (via the AuC) and sent to the
mobile device during the authentication challenge.

3. A3 Algorithm (Authentication)

o A function/algorithm used to compute the Signed Response (SRES).

o Takes Ki and RAND as inputs.

4. A8 Algorithm (Cipher Key Generation)

o Used to derive the session cipher key Kc.

o Also takes Ki and RAND as inputs.

5. A5 Algorithm (Encryption/Decryption)

o A stream cipher used to encrypt and decrypt user voice/data over the air
interface in GSM.

o Uses the session key Kc.

6. Mobile SIM vs. Base Station

o The diagram shows the SIM (in the mobile phone) and the base station
(representing the GSM network’s radio base station and backend authentication
system working together).

2. Authentication Process (A3)

1. Network Sends the Challenge

o The GSM network (via its Base Station) sends a random number RAND to the
mobile station.

2. SIM Computes the Signed Response (SRES)

o Inside the phone, the SIM uses the A3 algorithm with inputs RAND and Ki.

o This results in SRES (the Signed Response).

3. Network Computes Its Own SRES

o Simultaneously, the network uses the same RAND and the same Ki (from the
operator’s AuC) to run the same A3 algorithm, obtaining its own SRES.

4. Comparison for Authentication

o The phone sends its computed SRES back to the network.

o The network compares it with the SRES it computed itself.

o If the two match, the network concludes that the subscriber is genuine.

o This completes the authentication phase.

3. Cipher Key Generation (A8)

1. Parallel Key Derivation

o In parallel (or immediately after computing SRES), the mobile station’s SIM and
the GSM network both use the A8 algorithm to generate the cipher key Kc.

o Inputs are again RAND and Ki.

2. Outcome:

o The SIM obtains Kc.

o The network obtains the same Kc.

4. Encryption/Decryption (A5)

1. Establishing Encrypted Channel

o Once both the mobile SIM and the network have derived Kc, they have a
shared secret key for that session.

2. Using A5

o The A5 stream cipher is used over the air interface:

▪ The phone’s A5 takes Kc and the user’s data (voice/data traffic) and
encrypts it before transmission.

▪ The Base Station’s A5 algorithm decrypts the incoming data using the
same Kc.

▪ For the uplink (phone-to-network), data is encrypted by the phone and
decrypted by the network.

▪ For the downlink (network-to-phone), data is encrypted by the network
and decrypted by the phone.

3. Purpose

o Prevents over-the-air eavesdropping: someone listening in on the radio link
would see only encrypted data.

5. Putting It All Together

1. Authentication: Confirming the SIM is genuine via the SRES comparison.

2. Key Management: Generating a session key Kc that is unique to each
session/challenge.

3. Encrypted Communication: Protecting user data (voice, SMS, and basic data services in
2G GSM) with A5 encryption on both ends.

In practice, the operator’s Home Location Register (HLR) and Authentication Center (AuC) store
each subscriber’s secret key Ki. Whenever a mobile device requests service, the network issues
RAND, calculates SRES and Kc, and checks the device’s response. If correct, the network and
phone can then encrypt traffic, ensuring confidentiality and integrity over the radio link.
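To make this flow concrete, the following is a minimal Python sketch of the challenge-response
and key-derivation sequence. It is not the real GSM cryptography: A3 and A8 are operator-specific
(commonly COMP128 variants) and A5/1 is a hardware stream cipher, so HMAC-SHA1 and a keyed
keystream generator stand in for them here purely to illustrate how Ki, RAND, SRES, and Kc relate.

```python
# Illustrative sketch of the GSM challenge-response flow (NOT the real
# A3/A8/A5 algorithms). HMAC-SHA1 is used only as a stand-in keyed function.
import hmac
import hashlib
import os

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A3: derive the 32-bit Signed Response (SRES)."""
    return hmac.new(ki, b"A3" + rand, hashlib.sha1).digest()[:4]

def a8_kc(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: derive the 64-bit session cipher key Kc."""
    return hmac.new(ki, b"A8" + rand, hashlib.sha1).digest()[:8]

def a5_keystream(kc: bytes, frame_number: int, length: int) -> bytes:
    """Stand-in for A5: produce a per-frame keystream (placeholder only)."""
    stream, counter = b"", 0
    while len(stream) < length:
        stream += hmac.new(kc, frame_number.to_bytes(4, "big") +
                           counter.to_bytes(4, "big"), hashlib.sha1).digest()
        counter += 1
    return stream[:length]

# --- The SIM and the AuC share the same Ki ---
ki = os.urandom(16)            # provisioned in the SIM and stored in the AuC
rand = os.urandom(16)          # 128-bit challenge issued by the network

sres_sim = a3_sres(ki, rand)   # computed on the SIM
sres_net = a3_sres(ki, rand)   # computed in the AuC
assert sres_sim == sres_net    # subscriber accepted

kc = a8_kc(ki, rand)           # both sides now hold the same Kc
plaintext = b"hello over the air"
ks = a5_keystream(kc, frame_number=42, length=len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
recovered = bytes(c ^ k for c, k in zip(ciphertext, ks))
assert recovered == plaintext
```

The point of the sketch is that Ki never leaves the SIM or the AuC; only SRES crosses the air
interface, and both sides independently arrive at the same Kc for encryption.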

Why This Matters

• Security: Without this challenge-response mechanism, it would be easy to clone devices
or impersonate subscribers.

• Privacy: By encrypting the over-the-air traffic, GSM ensures that casual eavesdroppers
can’t simply tune in to phone conversations.

• Simplicity: Using symmetric keys (stored on the SIM and in the AuC) and relatively
straightforward algorithms (A3, A8, A5) made early GSM networks practical.

Notably, GSM encryption only protects the air interface. Once data reaches the base station, it
may traverse network segments in plaintext, depending on the operator’s infrastructure. This
lack of end-to-end encryption beyond the radio link is a known shortcoming in GSM’s design.

2.3 GSM Authentication Processes

GSM’s authentication process is one-sided: the network authenticates the subscriber, but the
subscriber does not authenticate the network (3GPP TS 03.20). This is achieved through a
challenge-response mechanism:

1. Challenge Generation: The AUC generates a 128-bit random number (RAND).

2. Response Calculation: The SIM computes the Signed Response (SRES) by applying the A3
algorithm (often operator-specific) to the RAND and the subscriber’s secret key (Ki).

3. Verification: The network checks the received SRES against the value computed in the
AUC. If they match, the subscriber is granted access.

While effective at preventing unauthorized access to the network, the lack of mutual
authentication exposes GSM to rogue base station or IMSI-catcher (commonly called “Stingray”)
attacks, where attackers mimic a legitimate network to intercept or manipulate user traffic.

2.4 Known Vulnerabilities in GSM

1. IMSI-Catchers: The device known colloquially as a “Stingray” can impersonate a
legitimate BTS, forcing nearby devices to connect and reveal their IMSIs. Since GSM does
not mandate network authentication to the handset, devices cannot differentiate
between legitimate and fake base stations.

2. Over-the-Air Encryption Weaknesses (A5/1 Attacks): Researchers have demonstrated
practical attacks on A5/1 encryption, leveraging time-memory trade-offs to crack
sessions in near real-time (Nohl & Paget, 2010). This vulnerability is partially mitigated
when operators implement stronger algorithms like A5/3, though not all networks did so
promptly.

3. Lack of End-to-End Encryption: Data is typically unencrypted once it leaves the BTS,
making it susceptible to eavesdropping within the operator’s infrastructure if additional
security measures are not in place.

4. Replay Attacks: Because GSM authentication relies on single challenges that can
potentially be replayed, attackers with knowledge of keys could replay certain messages
in specific scenarios. However, the use of fresh RAND values typically mitigates simple
replay attacks, unless operators reuse RAND or do not maintain robust random
generation.
5. Downgrade Attacks: In some implementations, devices can be forced or tricked into
using weaker encryption algorithms (e.g., from A5/3 down to A5/1 or A5/2), thus
simplifying cryptanalysis.

2.5 Security Improvements Over Time

GSM has undergone enhancements with the introduction of 3G (UMTS) and later 4G (LTE)
standards. Some key improvements include:

• A5/3 Algorithm: Stronger encryption based on the KASUMI block cipher.

• 3G Authentication and Key Agreement (AKA): Introduced mutual authentication and
stronger key management in UMTS.

• Network Hardware Upgrades: Operators have implemented IP-based backhaul and
secure tunnels (e.g., IPsec) to protect traffic beyond the BTS.

• Improved IMSI Privacy: Temporary identities such as TMSI (Temporary Mobile
Subscriber Identity) reduce the frequency of transmitting the IMSI in cleartext over the
air.

While GSM remains in use, the gradual global shift to UMTS (3G), LTE (4G), and now 5G
networks has reduced the window of opportunity for exploiting older GSM vulnerabilities.
Nonetheless, many developing regions still rely on GSM extensively, and the risks outlined
remain pertinent in those contexts.

3. UMTS Security (Universal Mobile Telecommunications System)

UMTS, often referred to as 3G, introduced significant security enhancements over GSM.
Standardized by the 3rd Generation Partnership Project (3GPP), UMTS aimed to address the
known weaknesses of GSM, particularly the lack of mutual authentication and the
vulnerabilities in the A5 encryption family (3GPP TS 33.102).

3.1 Enhancements Over GSM

Where GSM relied on a single-sided authentication scheme, UMTS introduced a more
comprehensive set of security features:

1. Mutual Authentication: Both the network and the user authenticate each other,
mitigating rogue base station attacks.

2. Longer Cryptographic Keys: UMTS uses 128-bit keys, offering stronger protection against
brute-force or time-memory trade-off attacks.
3. Integrity Protection: Integrity checks on signaling messages ensure they are neither
tampered with nor replayed.

4. Fresh Encryption Algorithms: UMTS introduced a new set of algorithms, including
KASUMI-based encryption (UEA1) and MILENAGE-based authentication functions.

3.2 Mutual Authentication and Key Agreement

The UMTS Authentication and Key Agreement (AKA) process is a cornerstone of 3G security
(3GPP TS 33.102). It involves the following steps:

1. Authentication Vector Generation: The Home Environment (HE), which may still be
referred to as the HLR/AUC in some architectures, generates an authentication vector
containing five elements: RAND (random challenge), XRES (expected response), CK
(ciphering key), IK (integrity key), and AUTN (authentication token).

2. Device Verification: The device (USIM) checks the AUTN to verify the network’s
authenticity. If valid, the device generates a response (RES) using the RAND and the
shared secret key (Ki) through the MILENAGE algorithms.

3. Network Verification: The network compares the RES with XRES. If they match, the
device is authenticated.

4. Session Key Establishment: Both the device and the network derive encryption and
integrity keys (CK and IK) to protect subsequent communication.

This mutual authentication drastically reduces the effectiveness of IMSI-catchers, though
sophisticated attackers may still attempt man-in-the-middle strategies if they can force a device
to downgrade to GSM.
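The Python sketch below illustrates the AKA exchange under simplifying assumptions: the
MILENAGE functions f1–f5 are replaced by HMAC-SHA256 stand-ins, sequence-number
resynchronization is omitted, and the anonymity key that normally conceals SQN is left out. It
shows the essential difference from GSM: the USIM verifies the network’s AUTN before it ever
answers the challenge.

```python
# Minimal sketch of the UMTS AKA idea: the home network issues RAND plus an
# authentication token (AUTN) that the USIM checks before responding.
# MILENAGE (f1..f5) is replaced here by HMAC-SHA256 stand-ins.
import hmac
import hashlib
import os

def f(k: bytes, tag: bytes, *parts: bytes) -> bytes:
    return hmac.new(k, tag + b"".join(parts), hashlib.sha256).digest()

K = os.urandom(16)              # long-term key shared by USIM and HLR/AuC
SQN = (42).to_bytes(6, "big")   # sequence number tracked by both sides

# --- Home Environment: build one authentication vector ---
RAND = os.urandom(16)
MAC  = f(K, b"f1", SQN, RAND)[:8]   # network authentication code
XRES = f(K, b"f2", RAND)[:8]        # expected response
CK   = f(K, b"f3", RAND)[:16]       # ciphering key
IK   = f(K, b"f4", RAND)[:16]       # integrity key
AUTN = SQN + MAC                    # simplified: SQN is normally concealed by AK

# --- USIM: verify the network first, then answer ---
sqn_rx, mac_rx = AUTN[:6], AUTN[6:]
assert hmac.compare_digest(mac_rx, f(K, b"f1", sqn_rx, RAND)[:8])  # network is genuine
RES = f(K, b"f2", RAND)[:8]
ck_usim, ik_usim = f(K, b"f3", RAND)[:16], f(K, b"f4", RAND)[:16]

# --- Serving Network: compare RES against XRES from the vector ---
assert hmac.compare_digest(RES, XRES)    # subscriber is genuine
assert (ck_usim, ik_usim) == (CK, IK)    # both sides share CK/IK for the session
```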

3.3 Integrity Protection and Confidentiality

UMTS introduces separate keys for encryption and integrity. Signaling data from the mobile
device to the network is protected by message authentication codes, ensuring the messages
cannot be altered in transit. Once integrity is verified, ciphering is applied to protect
confidentiality. The standard ciphers used in UMTS include:

• UEA1 (KASUMI-based): Derived from the MISTY1 block cipher, optimized for use in
UMTS.

• UEA2 (SNOW 3G-based): A stream cipher offering improved performance and security
compared to KASUMI in certain implementations.
By separating the integrity key (IK) from the ciphering key (CK), UMTS ensures that the
compromise of one does not necessarily lead to the compromise of the other. This layered
approach significantly improves security over GSM.

Layered Architecture

• Transport Stratum (blue area):
This bottom layer handles radio access and transport. In the diagram, you see:

o ME (Mobile Equipment): The user’s physical device (phone, tablet, etc.).

o AN (Access Network): The radio access network responsible for getting user data
from the mobile device into the core network.

• Home/Serving Stratum (yellow area):
This middle layer corresponds to functions residing either in the user’s home network or
in the visited (serving) network. The components are:

o USIM (Universal Subscriber Identity Module): A security module on the user’s
device that stores subscriber identity and cryptographic keys.
o SN (Serving Network): The network currently serving the user (e.g., a visited
network if roaming).

o HE (Home Environment): The user’s home network (the operator with which the
user has a subscription).

• Application Stratum (green area):
This top layer contains end-user or operator-provided applications. The figure illustrates:

o User Application: Any software or service the subscriber uses on the device (e.g.,
an app that the user directly interacts with).

o Provider Application: The counterpart on the provider’s side (e.g., the service
logic and backend).

2. Main Security-Relevant Interfaces

The diagram labels several communication flows with roman numerals (I), (II), (III), and (IV).
While exact labeling may differ among references, typical UMTS security interfaces work along
these lines:

1. (I) Between ME/AN ↔ SN ↔ HE

o ME ↔ AN: The device sends encrypted data over the radio interface to the
access network.

o SN (Serving Network) ↔ HE (Home Environment): The serving network
communicates with the home network for authentication and key agreement.

o In UMTS, there is a set of authentication protocols ensuring that both the
network and the user are verified, and session keys are established.

2. (II) SN ↔ HE

o Sometimes highlighted separately to emphasize the subscriber authentication
and authorization operations that take place between the serving network and
the home network.

o This path is where subscriber profile data or authentication vectors are sent from
the home network to the serving network.

3. (III) ME ↔ USIM
o This is the interface between the physical handset and the SIM/USIM card inside
it.

o The USIM holds the subscriber’s secret key and is in charge of secure operations
such as generating response tokens during authentication.

4. (IV) User Application ↔ Provider Application

o High-level application traffic (e.g., user data sessions, IP-based services) can be
protected independently at this layer (for instance, end-to-end encryption over
TLS).

o This sits above the standard UMTS authentication and encryption in the lower
layers.

3. Roles of Each Component

• USIM:

o Stores long-term keys and identity.

o Performs cryptographic operations (e.g., generating authentication responses).

o Ensures that sensitive keys never leave the secure module.

• ME (Mobile Equipment):

o Implements UMTS radio and communication protocols.

o Routes authentication challenges/responses to the USIM.

o Handles user-plane encryption and integrity on the device side.

• AN (Access Network):

o Manages radio resources (the base stations, RNC in older 3G systems, etc.).

o Passes authenticated and encrypted user traffic to the core network.

• SN (Serving Network):

o The “visited” or “serving” core network for the user.

o Initiates authentication requests by contacting the user’s home network.

o Applies the security context (keys, algorithms) for the ongoing session.
• HE (Home Environment):

o The user’s home operator, which contains the AuC (Authentication Center) and
HLR/HSS (subscriber databases).

o Generates authentication vectors for each user request and sends them to the
serving network.

o Maintains the master secrets for each subscriber.

• User Application / Provider Application:

o Reside at the “top” of the stack.

o May apply end-to-end security measures (e.g., SSL/TLS) in addition to the UMTS
security below.

4. Security Mechanisms in UMTS

• Authentication and Key Agreement (AKA):

o UMTS employs a challenge–response mechanism where the serving network
uses vectors from the home network.

o The device and USIM produce a response, and if the network receives the
expected token, the authentication is successful.

o Session keys are derived to encrypt and protect integrity of traffic over the radio
link.

• Integrity Protection:

o UMTS adds integrity protection of signaling messages in addition to encryption.
This ensures the messages themselves are verified and not spoofed or tampered
with.

• Encryption:

o A confidentiality key is derived from the AKA procedure.

o Radio-interface data can be encrypted between the ME and the network, so no
one can eavesdrop over the air.

• Mutual Authentication:
o Unlike older GSM systems, UMTS introduced mutual authentication. The network
checks the user’s credentials, and the user also verifies the network is legitimate.

5. Putting It All Together

1. Power On & Network Attach

o The ME detects a UMTS network (SN).

o The SN contacts the HE with an authentication request.

o The HE returns an authentication vector.

2. Authentication Exchange

o The SN sends a challenge to the ME, which passes it to the USIM over (III).

o The USIM calculates a response with a secret key and returns it to the SN.

o If correct, the user is authenticated, and keys are established for
encryption/integrity.

3. Secure User Traffic

o All user-plane traffic now travels over an encrypted radio link.

o Signaling messages are protected with integrity checks.

o Optionally, further encryption (e.g., TLS) can be done end-to-end at the
application stratum (IV).

4. Roaming Scenarios

o If a subscriber is roaming, the visited SN still relies on the subscriber’s HE for
generating authentication vectors.

o The user sees a seamless experience, but behind the scenes, the serving network
and home network cooperate to authenticate the device securely.

3.4 Notable Security Challenges

Despite these improvements, UMTS is not without its challenges:

1. Downgrade Attacks: In areas where both GSM and UMTS coexist, malicious base
stations can force devices to switch to GSM, exposing them to older vulnerabilities.
2. Implementation Flaws: Real-world security often depends on proper implementation.
For example, weak random number generation or poor handling of key material can
undermine UMTS’s inherent strengths.

3. Man-in-the-Middle Attacks: While mutual authentication reduces this risk, sophisticated
attackers with the ability to manipulate network signals may still attempt advanced
MITM strategies.

4. Roaming Interfaces: When subscribers roam between networks, the handover process
must ensure consistent security policies. Complex roaming relationships create potential
areas for misconfiguration or incomplete security.

Overall, UMTS represents a substantial leap forward in wireless security architecture compared
to GSM. Its security model laid much of the groundwork for 4G (LTE) and 5G, both of which
extend and refine the mutual authentication and confidentiality concepts.

4. Bluetooth Security

Bluetooth is a short-range wireless technology standard used for a wide range of personal area
network (PAN) applications—ranging from wireless headsets and keyboards to Internet of
Things (IoT) devices and medical sensors. Because of its ubiquity, Bluetooth security has drawn
significant scrutiny. Various protocol versions have been released to address evolving security
concerns and performance requirements (Bluetooth SIG, 2021).

4.1 Bluetooth Protocol Versions

Bluetooth technology can be broadly classified into two major families:

1. Bluetooth Classic (BR/EDR): The original Bluetooth design (Core Specification versions
1.x to 3.x), which focuses on continuous, high-throughput connections for voice and
data.

2. Bluetooth Low Energy (BLE): Introduced in Bluetooth 4.0, BLE is optimized for low-
power applications, making it ideal for IoT and wearable devices.

Both families employ frequency-hopping spread spectrum in the 2.4 GHz ISM band, shifting
among 79 channels (Classic) or 40 channels (BLE) to reduce interference.

Notable Versions and Their Security Features

• Bluetooth v2.1 + EDR: Introduced Secure Simple Pairing (SSP), which improved the
pairing process and provided protection against passive eavesdropping and man-in-the-
middle attacks when correctly implemented.
• Bluetooth v4.0: Added Bluetooth Low Energy (BLE). Early BLE implementations faced
security challenges, such as limited support for strong encryption due to hardware
constraints on low-power devices.

• Bluetooth v4.2 and v5.x: Increased data rates, extended range (in Bluetooth 5), and
introduced features like LE Secure Connections with Elliptic Curve Diffie-Hellman (ECDH)
for more robust key exchange.

4.2 Pairing Mechanisms

Pairing is the process through which two Bluetooth devices establish shared keys for secure
communication. Common pairing methods include:

1. Just Works: Simplified pairing with no user confirmation, but susceptible to man-in-the-
middle (MITM) attacks if an attacker is within range.

2. PIN Code Entry: One device displays or contains a PIN, which the user inputs on the
other device. Vulnerable to eavesdropping if the PIN is short.

3. Numeric Comparison (Bluetooth 2.1+): Each device displays a six-digit number. Users
confirm the numbers match, significantly reducing MITM risk when users follow correct
procedures.

4. Out of Band (OOB) Pairing: Uses an external channel (e.g., NFC) to securely transmit
cryptographic parameters, often considered the most secure if the OOB channel itself is
secure.

The choice of pairing mechanism often depends on device capabilities and user convenience
requirements.

4.3 Encryption Standards in Bluetooth

Once paired, Bluetooth devices use link keys derived from the pairing process to establish an
encrypted channel. Early versions used the E0 stream cipher, which had known weaknesses
under certain conditions. Modern implementations (especially in BLE Secure Connections
mode) rely on AES-CCM (Counter with CBC-MAC) with 128-bit keys, providing strong encryption
and data integrity when properly configured (Bluetooth SIG, 2021).
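As a rough illustration of the primitive involved (not the Bluetooth packet format or nonce
construction, which are defined in the Core Specification), the sketch below uses the Python
cryptography package’s AES-CCM implementation with a 128-bit key and a 4-byte authentication
tag; the header and nonce values are made up for the example.

```python
# Sketch of AES-CCM authenticated encryption with a 128-bit key, the primitive
# behind modern Bluetooth link-layer encryption. Nonce layout and key derivation
# here are illustrative only. Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # session key agreed during pairing
aesccm = AESCCM(key, tag_length=4)          # BLE uses a 4-byte MIC

nonce = os.urandom(13)                      # built from a packet counter in practice
header = b"\x02\x1b"                        # unencrypted header, integrity-protected
payload = b"sensor reading: 21.5 C"

ciphertext = aesccm.encrypt(nonce, payload, header)    # confidentiality + integrity
plaintext = aesccm.decrypt(nonce, ciphertext, header)  # raises if tampered with
assert plaintext == payload
```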

In BLE Secure Connections, ECDH is used for key agreement, which significantly increases
security by making it computationally infeasible to derive the link key from a passive eavesdrop
or to mount an active MITM without detection.
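A minimal sketch of the ECDH step, assuming the Python cryptography package: both devices
generate P-256 key pairs and derive the same shared secret from each other’s public keys. The
Long Term Key derivation shown is an HKDF stand-in; the specification’s own key functions are
AES-CMAC based.

```python
# Sketch of the ECDH exchange in LE Secure Connections: P-256 key pairs are
# generated, public keys are exchanged, and both sides compute the same secret.
# Requires: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device generates a P-256 key pair.
initiator_priv = ec.generate_private_key(ec.SECP256R1())
responder_priv = ec.generate_private_key(ec.SECP256R1())

# Public keys cross the air; a passive eavesdropper learns only these.
shared_i = initiator_priv.exchange(ec.ECDH(), responder_priv.public_key())
shared_r = responder_priv.exchange(ec.ECDH(), initiator_priv.public_key())
assert shared_i == shared_r

def derive_ltk(shared_secret: bytes) -> bytes:
    # Stand-in key derivation (the spec's f5 function is AES-CMAC based).
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"illustrative LTK derivation").derive(shared_secret)

ltk = derive_ltk(shared_i)   # 128-bit key used to encrypt the link (e.g., AES-CCM)
```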

4.4 Common Attack Vectors and Mitigation Techniques


1. Bluejacking: Involves sending unsolicited messages (often business cards) to Bluetooth-
enabled devices. While more of an annoyance than a serious breach, it can be the
precursor to other attacks if users inadvertently accept malicious content.

2. Bluebugging: Allows attackers to gain unauthorized access to a device’s features, such as
reading messages or initiating calls, typically by exploiting older, unpatched firmware or
default settings with weak PIN codes.

3. Bluesnarfing: Involves unauthorized access to data on a Bluetooth-enabled device, such
as contact lists or files, again exploiting older Bluetooth stacks or insecure
configurations.

4. Man-in-the-Middle Attacks: If devices use the “Just Works” pairing method or have no
user interaction, an attacker could intercept or alter data in transit, particularly if they
can trick users into pairing with a rogue device.

5. Battery Drain Attacks: Especially in low-energy devices, attackers can keep forcing
connections or sending requests to drain battery life.

Mitigation Techniques

• Use Secure Pairing Methods: Prefer numeric comparison or OOB pairing over “Just
Works.”
• Regularly Update Firmware: Many Bluetooth security vulnerabilities stem from
outdated implementations in device firmware.

• Enable Device Visibility Controls: Set devices to “non-discoverable” mode unless
actively pairing.

• Implement Access Controls and Permissions: Prompt users to accept or deny
connection requests.

• Monitor for Unusual Activity: Especially in enterprise or medical contexts, logging and
anomaly detection can identify rogue connections.

By adhering to best practices and using updated hardware that supports modern cryptographic
standards, Bluetooth devices can significantly mitigate common attacks.

5. WEP (Wired Equivalent Privacy)

Wired Equivalent Privacy (WEP) was introduced as part of the original IEEE 802.11 wireless LAN
standard, aiming to provide data confidentiality comparable to traditional wired networks.
Despite its intentions, WEP is now widely recognized as fundamentally flawed (IEEE, 1999). Its
cryptographic weaknesses led to widespread real-world exploits, resulting in its deprecation in
favor of more secure standards like WPA and WPA2.

5.1 Overview of WEP Encryption


This diagram illustrates the standard Wired Equivalent Privacy (WEP) encryption process used in
802.11 (Wi-Fi) networks. Below is a step-by-step explanation of how the encryption is
performed according to the figure:

1. Initialization Vector (IV) Generation

1. IV Generation Algorithm: A 24-bit IV (initialization vector) is created for each packet.

2. This 24-bit IV is sent (often in the clear) along with the encrypted data so that the
receiver can decrypt properly.

Key Points

• Because it is only 24 bits, the IV space is not very large, which leads to one of WEP’s
well-known weaknesses: frequent IV reuse.

• Each frame (packet) in WEP uses a new IV, but it is trivially small, so collisions (repeated
IVs) are likely in busy networks.
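A rough birthday-bound calculation makes the point: if IVs were chosen uniformly at random, a
repeat would be expected after only a few thousand frames, and implementations that simply
increment the IV guarantee reuse once the 24-bit counter wraps or resets.

```python
# Rough birthday-bound estimate of how soon a randomly chosen 24-bit IV repeats:
# about sqrt(pi/2 * 2**24) frames, i.e. only a few thousand.
import math

iv_space = 2 ** 24                                  # 16,777,216 possible IVs
expected_frames_until_repeat = math.sqrt(math.pi / 2 * iv_space)
print(round(expected_frames_until_repeat))          # ≈ 5,133 frames
```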

2. Per Packet Key Generation

1. The 24-bit IV is concatenated (i.e., appended) with a shared secret key (sometimes
called the WEP key or shared key).

o The shared secret key is typically 40 bits (in older legacy WEP) or 104 bits (in
“128-bit WEP,” which actually has a 104-bit key plus 24-bit IV).

2. This concatenation of [IV || shared key] produces the Per Packet Key.

Key Points

• The same shared secret key is used for many packets, but the IV is supposed to change
with each packet.

• Because the IV is short, it may be reused over time in a busy network, exposing
vulnerabilities.

3. RC4 Algorithm

1. The per packet key—which is the concatenation of IV and shared key—is fed into the
RC4 keystream generator.
2. RC4 outputs a keystream of pseudo-random bytes.

3. This keystream will be the same length as the payload plus integrity check (IC)
combined.

Key Points

• RC4 is a stream cipher that generates a byte-by-byte (or bit-by-bit) keystream.

• Any weaknesses in the way WEP uses RC4 (such as predictable IV) expose the encryption
to statistical attacks.

4. CRC Generation (IC/ICV)

1. Separately, the plaintext payload (the actual user data) is passed to the CRC Generation
Algorithm.

2. The result is an Integrity Check field—often referred to as the ICV (Integrity Check
Value)—which is appended to the plaintext payload.

3. Conceptually, the data to be encrypted is now (payload + ICV) as a block.

Key Points

• WEP uses a simple CRC-32 for integrity checking. Unfortunately, it is not
cryptographically secure: it is easy for attackers to manipulate bits in the ciphertext and
update the CRC to match, thus bypassing integrity protection.

• In modern Wi-Fi security (WPA, WPA2), much stronger integrity checks (Michael, CCMP,
etc.) are employed.

5. Encryption (XOR with Keystream)

1. The keystream from the RC4 algorithm is XORed with the combined data (payload +
ICV).

2. This produces the ciphertext (often referred to as the encrypted payload).

3. This ciphertext is sent along with the IV (in cleartext) to the receiver.

Key Points

• XOR encryption is straightforward: Ciphertext = Plaintext ⊕ Keystream.


• Decryption on the receiving side simply reverses this: Plaintext = Ciphertext ⊕
Keystream (since A ⊕ B ⊕ B = A).

6. Transmission and Reception

1. Once the ciphertext is formed, the 24-bit IV is prepended to it (sometimes placed in the
header) and transmitted.

2. On the receiver’s side, the IV is used (along with the shared key) to re-generate the same
RC4 keystream.

3. The receiver XORs the received ciphertext with the keystream to recover (payload + ICV).

4. The receiver verifies the ICV to check integrity—though this check can be bypassed in
known attacks.
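The whole construction fits in a few lines of Python. The sketch below mirrors the steps above
(24-bit IV, IV || key fed to RC4, CRC-32 ICV appended to the payload, XOR with the keystream); it
is for illustration only and must never be used to protect real traffic.

```python
# Sketch of WEP per-packet encryption: IV || shared key drives RC4, and the
# payload plus its CRC-32 ICV are XORed with the resulting keystream.
import os
import zlib

def rc4_keystream(key: bytes, length: int) -> bytes:
    # RC4 key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = bytearray(), 0, 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(shared_key: bytes, payload: bytes) -> tuple[bytes, bytes]:
    iv = os.urandom(3)                                # 24-bit IV, sent in the clear
    icv = zlib.crc32(payload).to_bytes(4, "little")   # CRC-32 integrity check value
    keystream = rc4_keystream(iv + shared_key, len(payload) + 4)
    ciphertext = bytes(a ^ b for a, b in zip(payload + icv, keystream))
    return iv, ciphertext

def wep_decrypt(shared_key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    keystream = rc4_keystream(iv + shared_key, len(ciphertext))
    data = bytes(a ^ b for a, b in zip(ciphertext, keystream))
    payload, icv = data[:-4], data[-4:]
    assert icv == zlib.crc32(payload).to_bytes(4, "little")   # weak, linear check
    return payload

key = bytes.fromhex("0123456789")          # legacy 40-bit shared key
iv, ct = wep_encrypt(key, b"hello, wireless world")
assert wep_decrypt(key, iv, ct) == b"hello, wireless world"
```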

5.2 Key Weaknesses

1. IV Exhaustion: Because the IV is only 24 bits, it repeats frequently in busy networks.
Attackers can capture numerous packets to gather enough IV collisions and derive the
secret key.

2. RC4 Key Scheduling Vulnerabilities: The combination of the IV and the static key in RC4’s
Key Scheduling Algorithm (KSA) is susceptible to known statistical attacks (Fluhrer,
Mantin, & Shamir, 2001). By analyzing patterns in how RC4 initializes for different IV
values, attackers can reconstruct the key.

3. Weak Integrity Mechanism: WEP’s ICV is a simple CRC-32, which is linear and does not
provide cryptographic integrity. Attackers can flip bits in ciphertext and then recalculate
a new ICV without knowing the key, leading to message injection or forgery attacks (a
short demonstration of this linearity follows the list below).

4. Static Keys: Many older network configurations used a single WEP key shared among
multiple users. If any user or device is compromised, the entire network is at risk.
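The following sketch demonstrates the linearity behind the bit-flipping weakness in item 3 above.
It models the RC4 keystream as random bytes and ignores real 802.11 framing; the point is that an
attacker who knows (or guesses) part of the plaintext can XOR a chosen difference into the
ciphertext and patch the encrypted ICV so the CRC still verifies, all without knowing the key.

```python
# Why CRC-32 is not a cryptographic integrity check: it is linear under XOR,
# so ciphertext bits can be flipped and the (also XOR-encrypted) ICV patched.
import os
import zlib

def crc(data: bytes) -> bytes:
    return zlib.crc32(data).to_bytes(4, "little")

# Victim encrypts payload || ICV with an RC4 keystream (modelled as random bytes).
payload = b"PAY $0000100 TO ALICE"
keystream = os.urandom(len(payload) + 4)
ciphertext = bytes(a ^ b for a, b in zip(payload + crc(payload), keystream))

# Attacker picks a bit-flip mask (turn "ALICE" into "MALLO") without the key.
delta = bytes(a ^ b for a, b in zip(b"ALICE", b"MALLO"))
mask = bytes(16) + delta                      # zeros up to the targeted bytes
icv_patch = bytes(a ^ b for a, b in zip(crc(mask), crc(bytes(len(mask)))))

forged = bytearray(ciphertext)
for i, m in enumerate(mask):
    forged[i] ^= m                            # flip payload bits in the ciphertext
for i, m in enumerate(icv_patch):
    forged[len(mask) + i] ^= m                # patch the encrypted ICV to match

# Receiver decrypts and the (non-cryptographic) integrity check still passes.
plain = bytes(a ^ b for a, b in zip(forged, keystream))
body, icv = plain[:-4], plain[-4:]
assert icv == crc(body)
assert body == b"PAY $0000100 TO MALLO"
```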

5.3 Real-World Exploits

Shortly after WEP’s adoption, researchers and hobbyists demonstrated practical tools to crack
WEP keys in mere minutes using readily available hardware:

1. AirSnort and WEPCrack: Early open-source tools that automated the process of
capturing IV collisions and performing cryptanalysis.
2. Fragmentation Attacks: Exploited how 802.11 fragmentation interacts with WEP,
allowing partial decryption and eventual key recovery.

3. ARP Injection Attacks: Leveraged the predictable nature of ARP requests to rapidly
increase IV collection, speeding up key recovery efforts.

These tools underscored that WEP, once thought to provide “wired equivalent” security, could
be quickly and systematically broken.

5.4 Reasons for Deprecation

By the mid-2000s, IEEE had formally deprecated WEP in favor of WPA/WPA2. Key reasons
include:

• Insufficient Key Length and IV Size: 40-bit and 104-bit WEP keys with a 24-bit IV proved
inadequate against modern computing power.

• Statistical Weaknesses of RC4 Implementation: The design of WEP neglected critical
aspects of secure key scheduling.

• Lack of Robust Integrity Checks: WEP offers no real protection against tampering and
injection.

In modern networks, WEP is considered obsolete. Regulatory bodies and industry best practices
strongly discourage its use (Wi-Fi Alliance, 2004). Devices supporting only WEP pose a security
liability and often need upgrading to support WPA2 or higher.

6. WPA2 (Wi-Fi Protected Access 2)

WPA2, standardized under IEEE 802.11i, is widely recognized as the benchmark for securing Wi-
Fi networks (IEEE, 2004). It addressed many of the shortcomings of WEP and introduced robust
encryption and authentication mechanisms. Though WPA3 has since emerged, WPA2 remains in
broad use worldwide, making it a focal point for wireless security.
6.1 Improvements Over WEP

1. Strong Encryption (AES-CCMP): WPA2 mandates the use of the Advanced Encryption
Standard (AES) with the Counter Mode with Cipher Block Chaining Message
Authentication Code Protocol (CCMP). This offers 128-bit keys and robust cryptographic
integrity checks.

2. Robust Key Management: WPA2 uses a four-way handshake to dynamically derive
unique encryption keys for each session, reducing the risk of key reuse.

3. 802.1X/EAP for Enterprise: In enterprise deployments, WPA2 can integrate with an
authentication server (e.g., RADIUS) using Extensible Authentication Protocol (EAP)
methods, providing per-user credentials and dynamic key distribution.

4. Backward Compatibility with TKIP: While not recommended for new deployments,
WPA2 can support the Temporal Key Integrity Protocol (TKIP) for older hardware,
allowing gradual transition from WEP.
6.2 Key Management and Encryption Methods

WPA2 uses a four-way handshake to establish fresh session keys, also referred to as Pairwise
Transient Keys (PTKs), each time a device joins the network:

1. AP sends ANonce: The Access Point (AP) generates a random number (ANonce).

2. Client sends SNonce and MIC: The client (STA) generates its own random number
(SNonce), derives the PTK from the Pairwise Master Key (PMK) and both nonces, and
calculates a Message Integrity Check (MIC) with the key confirmation portion of the PTK.

3. AP sends Group Key (GTK): The AP securely delivers the Group Temporal Key (GTK), used
for broadcast and multicast traffic, wrapped with the key encryption portion of the PTK.

4. Client Confirms Key Installation: The client sends a final message indicating it has
installed the keys.

AES-CCMP provides both confidentiality and integrity using a block cipher mode that counters
replay attacks. Each packet has a unique packet number (PN) used in the AES counter,
preventing reuse of the same keystream.
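A sketch of the key derivation in Python, under the usual WPA2-Personal assumptions: the PMK
comes from PBKDF2-HMAC-SHA1 over the passphrase and SSID, and an 802.11i-style PRF expands
the PMK, both MAC addresses, and both handshake nonces into the PTK. Exact field layouts are
simplified here.

```python
# Sketch of WPA2-Personal key derivation: passphrase + SSID -> PMK (PBKDF2),
# then PMK + MAC addresses + ANonce/SNonce -> PTK, split into KCK/KEK/TK.
import hashlib
import hmac
import os

def pmk_from_passphrase(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK: 4096 iterations of PBKDF2-HMAC-SHA1, 256-bit output
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def prf(key: bytes, label: bytes, data: bytes, nbytes: int) -> bytes:
    # 802.11i-style PRF built on HMAC-SHA1 with a running counter
    out, counter = b"", 0
    while len(out) < nbytes:
        out += hmac.new(key, label + b"\x00" + data + bytes([counter]),
                        hashlib.sha1).digest()
        counter += 1
    return out[:nbytes]

def derive_ptk(pmk: bytes, ap_mac: bytes, sta_mac: bytes,
               anonce: bytes, snonce: bytes) -> bytes:
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac) +
            min(anonce, snonce) + max(anonce, snonce))
    return prf(pmk, b"Pairwise key expansion", data, 48)

pmk = pmk_from_passphrase("correct horse battery staple", "HomeNetwork")
ap_mac, sta_mac = os.urandom(6), os.urandom(6)
anonce, snonce = os.urandom(32), os.urandom(32)   # exchanged in handshake msgs 1 & 2

ptk = derive_ptk(pmk, ap_mac, sta_mac, anonce, snonce)
kck, kek, tk = ptk[:16], ptk[16:32], ptk[32:48]
# kck authenticates handshake messages (MIC), kek wraps the GTK, tk encrypts data
```

This is also why weak passphrases are dangerous: everything in this derivation except the
passphrase is visible to a passive sniffer of the handshake, so an offline dictionary attack only
has to repeat the PBKDF2 step per guess.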

6.3 Known Vulnerabilities (e.g., KRACK Attack)

Despite its robustness, WPA2 has faced notable attacks:

1. KRACK (Key Reinstallation Attack): Discovered by Vanhoef and Piessens (2017), KRACK
targets the four-way handshake. By manipulating and replaying handshake messages, an
attacker can trick a client into reinstalling an already-in-use key with a reset packet
number, effectively decrypting or injecting data. Patches to client devices are critical in
mitigating KRACK.

2. Weak Passphrase Vulnerabilities: WPA2 in Personal Mode (PSK) relies on a shared
passphrase. If the passphrase is weak (e.g., a dictionary word), attackers can capture the
four-way handshake and perform offline brute-force attacks.

3. Implementation and Misconfiguration Flaws: In some devices, improper random nonce
generation or incomplete patches can introduce new vulnerabilities or reintroduce old
ones.

6.4 Best Practices for Secure Implementation

1. Use Strong Passphrases (WPA2-Personal): Prefer long, complex passphrases or use
random key generators.
2. Enable WPA2-Enterprise Where Possible: Implement 802.1X/EAP with a RADIUS server
for dynamic per-user authentication and keying.

3. Regularly Update Firmware: Patch vulnerabilities like KRACK and keep devices current
with security updates.

4. Disable WPS (Wi-Fi Protected Setup): WPS PIN-based setups are vulnerable to brute-
force attacks. If required, ensure only push-button or NFC-based pairing is allowed.

5. Monitor and Audit Networks: Conduct regular wireless security assessments (e.g., using
WPA2 handshake capture and offline analysis to ensure passphrase strength).

By following these best practices, organizations and individuals can significantly reduce the risk
of wireless compromise under WPA2 networks.

7. Conclusion: Evolution of Wireless Security and Future Considerations

The progression from GSM to UMTS, from Bluetooth 1.0 to newer versions, and from WEP to
WPA2 reflects a broader narrative in wireless security: as technologies mature and threats
become more sophisticated, protocols must evolve to maintain confidentiality, integrity, and
availability. Early implementations like GSM and WEP focused on basic encryption and
authentication but failed to anticipate large-scale surveillance, advanced cryptanalysis, and the
explosion of connected devices we see today. UMTS introduced mutual authentication,
improving resilience against rogue base stations. Similarly, newer Bluetooth versions leveraged
stronger pairing methods and encryption to address vulnerabilities like Bluejacking,
Bluebugging, and Bluesnarfing. On the Wi-Fi front, WEP’s fundamental flaws gave way to
WPA2’s robust AES-based encryption and dynamic key management.

Key Takeaways

• GSM to UMTS: This transition showcased the shift from unilateral authentication to
mutual authentication, demonstrating the necessity of verifying both network and
subscriber to combat devices that impersonate legitimate network elements.

• Bluetooth Security Evolution: Pairing methods became more sophisticated (e.g., Secure
Simple Pairing, LE Secure Connections), acknowledging that user interaction is often a
critical element in preventing MITM attacks.

• WEP to WPA2: The rapid demise of WEP underscored the importance of cryptographic
robustness and proper key management. WPA2’s AES-CCMP and four-way handshake
significantly raised the bar.
Despite these advances, wireless security remains a moving target. Emerging standards like LTE,
5G, and Wi-Fi 6 (802.11ax) continue to refine authentication procedures, encryption algorithms,
and frequency utilization. The proliferation of IoT devices adds complexity: not all devices can
handle the computational overhead of robust encryption, leaving low-powered sensors and
consumer gadgets exposed if not thoughtfully designed.

Future Considerations

• Post-Quantum Cryptography: As quantum computing progresses, current encryption
algorithms may become vulnerable. Wireless standards bodies are already examining
quantum-resistant algorithms.

• Device Identity Management: With billions of IoT devices joining networks, robust
methods to identify and authenticate devices at scale—beyond traditional SIM-based
models—are critical.

• Security-by-Design: Protocols and devices must incorporate security from inception,
rather than retrofitting patches as threats emerge. This cultural shift is especially vital in
consumer-focused products.

• User Education and Policy: Even the strongest protocols falter with weak passphrases,
outdated firmware, or user ignorance. Effective training and clear security policies
remain essential.

In conclusion, wireless security is in a constant state of evolution. The lessons learned from the
vulnerabilities and subsequent enhancements in GSM, UMTS, Bluetooth, WEP, and WPA2
continue to inform modern standards and best practices. As technological innovation
accelerates—driven by the demands for higher data rates, lower latency, and massive device
connectivity—the security community must remain vigilant. Establishing robust, future-proof
encryption and authentication schemes, along with user awareness and policy enforcement, will
ensure that wireless communications remain both accessible and secure in the years to come.

8. References

• Babbage, S., & Maximov, A. (2008). An Analysis of the KASUMI Block Cipher. Selected
Areas in Cryptography.

• Bluetooth SIG. (2021). Bluetooth Core Specification v5.2. Retrieved from
https://www.bluetooth.com/specifications

• ETSI. (1992). GSM Recommendations 02.xx and 03.xx Series. European
Telecommunications Standards Institute.
• Fluhrer, S., Mantin, I., & Shamir, A. (2001). Weaknesses in the Key Scheduling Algorithm
of RC4. Selected Areas in Cryptography.

• IEEE. (1999). IEEE 802.11 Standard for Wireless LAN Medium Access Control (MAC) and
Physical Layer (PHY) Specifications.

• IEEE. (2004). IEEE 802.11i-2004: Medium Access Control (MAC) Security Enhancements.

• 3GPP TS 03.20. (n.d.). Security Related Network Functions. 3rd Generation Partnership
Project.

• 3GPP TS 33.102. (n.d.). 3G Security; Security Architecture. 3rd Generation Partnership
Project.

• Nohl, K., & Paget, C. (2010). GSM: SRLabs Security Research Presentations. CCC
Conference.

• Vanhoef, M., & Piessens, F. (2017). Key Reinstallation Attacks: Forcing Nonce Reuse in
WPA2. ACM Conference on Computer and Communications Security (CCS).

• Wi-Fi Alliance. (2004). WPA2 (Wi-Fi Protected Access 2) Specification.
