Chapter 1

INTRODUCTION

The heights by great men reached and kept
Were not attained by sudden flight,
But they, while their companions slept,
Were toiling upward in the night.
—Henry Wadsworth Longfellow

Abstract This chapter examines the current state of affairs in wireless communications and establishes the dual pressure on limited radio spectrum capacity: a growing number of users and an increasing diversification of increasingly bandwidth-intensive wireless services (multimedia among them). A convergence trend is also exposed, leading to a unifying paradigm designated herein as broadband wireless access. This naturally leads us to the main subject matter, adaptive antenna arrays and space-time processors, which constitute in many ways the most promising solutions to the capacity shortages and spectral congestion problems facing wireless networks of today and tomorrow. The main benefits of space-time processors are outlined, and a historical review of their development closes the chapter.

This material is to appear as part of the book Space-Time Methods, vol. 1: Space-Time Processing. © 2007 by Sébastien Roy. All rights reserved.

Ever since the dawn of the modern age, electronics and telecommunications
have evolved in an intertwined, mutually dependent fashion. Indeed, modern
electronics (being the art of using electricity to represent and process informa-
tion) essentially started with the invention of the triode (three-electrode vacuum
tube) in 1906 by Lee de Forest. The first widespread application of the triode
was as an amplifier in radio transmitters and receivers and this revolutionized

radio broadcasting. Afterwards, the triode served as the basic component in the
first digital computers.
The development of telecommunication technology in general and wireless
communications in particular was continuously fueled by innovations in elec-
tronics. The invention of the transistor in 1947 and the integrated circuit in 1958
made possible more powerful and smaller radio transceivers. Keeping in step
with Moore’s law¹, radio transceivers have kept shrinking, while becoming
more complex yet less demanding in terms of electrical power.
The cellular concept, pioneered in the 1970s, ushered in the era of personal mobile communications. The young cellular industry quickly became the most important telecom sector, with a growth rate surpassing even that of personal computers throughout most of the 1980s and 1990s. As discussed later, such tremendous growth puts pressure on limited radio spectrum resources. In a sense,
cellular telephony is a victim of its own success and faces a plurality of techno-
logical challenges revolving around spectrum congestion. It is within the related
fields of adaptive antenna arrays, space-time processing, space-time coding and
MIMO systems that the most promising solutions to these challenges reside.

1. Current trends in wireless communication and the capacity problem
Despite a slight rollback in sales in 2001, the cellular telephony industry is poised for continued growth into the first decades of the 21st century. From 403 million units sold worldwide in 2000 (387 million units in 2001) and 984 million units sold in 2006, shipments are expected to reach 1274 million units in 2010, while the total number of subscribers worldwide is expected to pass the 3 billion mark as early as 2008.
It is noteworthy that the original analog cellular telephony system based on
FM technology (AMPS and its equivalents worldwide) is slowly disappearing
and will be all but extinct by 2005. It has been superseded mostly by second
generation (2G) digital cellular and the collection of “in-between” technologies
collectively known as 2.5G. Third generation (3G) cellular, which provides
high-speed digital connectivity (promising bit rates from 144 kbps to 2 Mbps),
is slowly appearing in markets around the world. Sales were initially timid (less
than a million units in 2002 according to Micrologic Research [Micrologic,
2002]) but crossed the 100 million unit mark in 2006.

¹ Moore’s law is a technological trend, predicted in 1965 by Gordon Moore, which states that the density (the number of transistors per unit area) of integrated circuits doubles every 18 months, 18 months being roughly the time required to develop, learn and put into production the new tools required. Up to now, reality has been remarkably close to this trend, at least for digital circuits. RF circuits, however, are not subject to Moore’s prediction and have not shrunk nearly as fast.

The appearance of fourth generation (4G) cellular technology, which promises true broadband access with bit rates up to 100 Mbps, has been delayed somewhat by recent turmoil in the wireless industry. It is expected to appear on
the telecom horizon anywhere between 2006 and 2020 and it will most likely
appear in Japan first, a country which benefits from a head start in that direction.
In parallel with this continued growth in the cellular industry, wireless tech-
nology is becoming increasingly popular as a substitute for subscriber loops
to connect fixed (house-based) telephones to their local exchange and hence,
to the public switched telephone network (PSTN). The technology designated
wireless local loop (WLL) is attractive for a variety of reasons including rapid
deployment (e.g. in emerging countries where little or no wired infrastructure
exists), flexibility, low initial capital investment, fast financial return, and easier, lower-cost maintenance. It is estimated that in 2002 there were 440 million
WLL subscribers worldwide. As of 2003, more than 50% of new fixed lines
installed worldwide are wireless and by 2006, approximately 10% of all fixed
subscribers will use WLL as their primary means of access.
Likewise, fixed broadband wireless access systems, spurred on by standards
committees like 802.16² and BRAN (Broadband Radio Access Network), are
increasingly considered viable alternatives to DSL (Digital Subscriber Line)
or cable-based networking for internet access, videoconferencing, etc. In the
mobile computing arena, wireless LANs (Local Area Networks) have experi-
enced explosive growth recently, thanks in part to the hugely successful 802.11
standards (WiFi) and their European counterpart, HiperLAN.
These various trends draw a picture of the wireless world of tomorrow which
is characterized by:
(1) a diversity of coexisting standards;
(2) an increase in the number of subscribers, and hence, in the number of active
links deployed systems will have to support;
(3) a wider diversity of communication services with various bit rates and
quality-of-service requirements;
(4) an order-of-magnitude increase in the average bandwidth consumed by a single subscriber.
Items (2) and (3) in particular are problematic in light of the fact that radio
spectrum is a finite resource which is already largely congested in many areas
of the world. Simply stated, the response from an engineering perspective to
this evolutionary bottleneck is two-fold:

² The technologies falling under the umbrella of the IEEE 802.16 standards group are also collectively denoted by the trademark WiMAX, which stands for Worldwide Interoperability for Microwave Access.

(1) Next generation systems and new services are being located at increasingly
higher frequency bands which are still relatively free of traffic. Indeed,
the widespread usage of millimeter bands has until recently been delayed
partly because of the hardware difficulties and costs involved. However, the
development of RF technology has kept in step with the times and systems
operating in the 28-30 GHz band are now relatively common. Broadband
wireless systems in bands as high as 60-66 GHz are currently being discussed
and tested.
(2) Both existing and proposed systems are being designed or retrofitted to
support techniques that bring more efficient spectrum usage, i.e. increased
capacity. In the author’s opinion, this constitutes in fact the main thrust
behind the further development of space-time processing techniques as de-
scribed in this book.
While the first approach certainly has its role, hostile propagation conditions
above 10 GHz and limited coverage are essentially restricting the usage of these
bands to fixed wireless access.
In this present-day context, the future development of wireless networks
faces many difficult challenges. Two issues stand out as being of foremost
importance, and it will be seen that they are strongly related problems:
1 The hostility of the wireless channel. The opening words in W. C. Jakes’
classic reference book [Jakes, 1993] describe eloquently the magnitude of
the problem, as it was perceived by the pioneers of the field:
Nature is seldom kind. One of the most appealing uses for radio-telephone
systems—communication with people on the move—must overcome radio trans-
mission problems so difficult they challenge the imagination. A microwave radio
signal transmitted between a fixed base station and a moving vehicle in a typical
urban environment exhibits extreme variations in both amplitude and apparent
frequency. Fades of 40 dB or more below the mean level are common, with suc-
cessive minima occurring about every half wavelength (every few inches) of the
carrier transmission frequency. A vehicle driving through this fading pattern at
speeds up to 60 mi/hr can experience random signal fluctuations occurring at rates
of 100-1000 Hz, thus distorting speech when transmitted by conventional meth-
ods. These effects are due to the random distribution of the field in space, and arise
directly from the motion of the vehicle. If the vehicle is stationary, the fluctuation
rates are orders of magnitude less severe.

While the technology certainly has evolved since these words were written,
the mobile radio propagation channel is still a daunting environment since
we now demand more bandwidth efficiency and higher bit rates out of it.
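As a rough sanity check on the figures quoted by Jakes (a minimal sketch; the 900 MHz carrier is an assumed value typical of the cellular band, not taken from the quote), the maximum Doppler shift and the resulting fade rate can be computed directly:

```python
# Order-of-magnitude check on the fading figures quoted above.
# The carrier frequency is an assumption (typical cellular band).
c = 3e8              # speed of light, m/s
fc = 900e6           # carrier frequency, Hz (assumed)
v = 60 * 0.44704     # 60 mi/h expressed in m/s

wavelength = c / fc              # about 0.33 m
fd_max = v / wavelength          # maximum Doppler shift, about 80 Hz

# Fading minima occur roughly every half wavelength along the route,
# so the vehicle crosses them at about 2 * fd_max per second.
fade_rate = 2 * fd_max           # about 160 fades per second

print(f"wavelength = {wavelength:.2f} m")
print(f"max Doppler = {fd_max:.0f} Hz, fade rate = {fade_rate:.0f} per second")
```

These values sit at the lower end of the 100-1000 Hz range cited by Jakes, which covered a variety of carrier frequencies and speeds.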
Parsons and Bajwa sum up nicely the magnitude of the challenge in the
context of wideband, high frequency operation [Parsons and Bajwa, 1982]:
. . . in heavily built-up areas, particularly at higher operating frequencies, multipath
propagation is probably the single most destructive influence.

2 The increasingly crowded spectrum space. It is possible to accommodate large numbers of users through spatial reuse of frequencies. In effect, the
limited reach of radio waves (whose power decreases with the square of the
distance in free space) is already used to advantage, since it allows many
users to utilize the same channel provided they and their respective base
stations are separated from each other by a sufficient distance. In fact, this
is the basis of the cellular concept [Macdonald, 1979]. Nonetheless, as the
number of users increases, interference from users of the same channel (co-
channel interference or CCI) becomes a significant impediment. Adjacent
channels are also a source of interference due to spectral overlap. This is
termed adjacent channel interference or ACI.
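The free-space claim above corresponds to the Friis transmission equation (quoted here for reference; the terrestrial exponent values below are typical figures, not from the text):

\[ P_r \;=\; P_t\, G_t\, G_r \left( \frac{\lambda}{4\pi d} \right)^{2} , \]

so received power falls 6 dB for every doubling of the distance d. In cluttered terrestrial environments the path-loss exponent is typically closer to 3 or 4, which shortens the reach further and, from the reuse standpoint, actually helps isolate co-channel cells.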
It is a generally accepted fact that wireless cellular systems are interference-
limited. Specifically, co-channel interference from the closest cells where
channels are re-used and adjacent channel interference are the most im-
portant limiting factors. It follows that the implementation of techniques
to reduce interference or to reduce the impact of interference in existing
systems is equivalent in many cases to boosting capacity.
In fact, advanced signal processing and/or system design techniques are
required in order to augment both user capacity, i.e. the number of simul-
taneously active users supportable within a given spectrum space, and the
data rate capacity, i.e. the data rate for a given user per unit of bandwidth
(bits/second/Hertz).
Another aspect of the capacity issue is designated as the multiple-access
problem and consists of allocating in the most efficient manner the lim-
ited spectrum space amongst users in a dynamic fashion. Basic multiple
access strategies include TDMA (Time Division Multiple Access), FDMA
(Frequency Division Multiple Access) and CDMA (Code Division Multiple
Access) as well as combinations thereof.
To conclude, it is noteworthy that the pressure on spectral resources leading to congestion is two-fold:
1. ever-increasing numbers of users must be accommodated;
2. these users require increasingly bandwidth-hungry services.

2. Broadband wireless access


Over the last 20 years, much has been said about the issue of convergence
between computers and digital communications. It was postulated that the
boundary between these two areas of development would essentially vanish,
that it would become difficult, if not impossible, to tell them apart. Indeed, a
modern digital communication system must include a fair amount of computa-
tional power to perform functions such as compression, error-correction coding
and decoding, routing, etc. On the other hand, computers make ample use of
communication resources. For example, the core of a computer is really its bus,
which is a high-speed communication system (in essence a network) linking the
processor, memory and other peripherals into a coherent whole. Furthermore,
the computer itself is normally part of an external local area network (LAN)
which is typically part of the global Internet infrastructure. The said Internet
cannot exist without thousands of individual computers providing the required
services, and individual computers lose much of their usefulness if deprived of
Internet access. Hence, the convergence is a reality; computer and communica-
tions systems are thus often treated as a whole, under the heading “information
technology”.
A related convergence is occurring in the field of wireless communications.
Indeed, several current development trends in wireless, which originally corre-
sponded to very different wireless products and services, are slowly converging
towards a unified paradigm, which we will refer to as broadband wireless ac-
cess. It has the following approximate characteristics:

(1) data rates in excess of 10 Mbps and eventually in excess of 100 Mbps;

(2) universal, transparent and ubiquitous access, perhaps through the use of
multistandard reconfigurable radio terminals;

(3) available anytime, anywhere;

(4) providing varying degrees of mobility.

Three such trends can be readily identified: cellular telephony evolving towards 4G, fixed wireless systems being developed by IEEE committee 802.16 and BRAN (Broadband Radio Access Networks), and wireless LANs à la 802.11 (see Figure 1.1). Convergence between these three currents can easily
be demonstrated.

4G Cellular telephony
In 1G and 2G, cellular telephony was oriented mostly towards voice services.
With 2.5G and 3G, a shift to a generic packet-oriented data infrastructure is clearly
present, which will culminate with 4G.
In terms of high-data rate digital links, 3G technology turns out to be rather
disappointing. It was expected to provide full duplex links at 2 Mbps. In fact,
according to the U. S. Federal Communications Commission’s (FCC) definition
of 3G, it should provide:
(1) 144 kilobits/second or higher in high mobility (vehicular) traffic;

(2) 384 kilobits/second for pedestrian traffic;


(3) 2 Megabits/second or higher for indoor traffic.

[Figure 1.1 (diagram): wireless LANs (802.11, HiperLAN2), fixed wireless systems (LMCS, MMDS, 802.16) and cellular telephony (4G) converging into broadband wireless access. Caption: Three parallel development trends are converging towards broadband wireless access services with similar characteristics.]
However, this turned out to be more challenging than expected; current 3G
systems provide at best 384 kbps on the downlink and 64 kbps on the uplink.
Also, 3G is being deployed relatively slowly since service providers, faced
with difficult economic conditions, are reluctant to invest in a completely new
infrastructure. It is expected that 3G will only reach significant penetration
(approximately 10% of new handset sales worldwide) in 2005. In the meantime, service providers are leveraging their investment in 2G equipment by
providing incremental upgrades allowing, among other benefits, higher bit rate
data links than standard 2G. Such improvements are collectively designated “2.5G”.
The fourth generation of cellular services (4G) promises a true break from
the classic voice-oriented telephone paradigm. Bit rates in excess of 100 Mbps
are envisaged. A packet-oriented multiservice philosophy forms the backbone
of 4G, as opposed to the current connection-oriented infrastructure. 4G also
provides a tight integration with an IP (Internet Protocol) infrastructure, most
likely IPv6, and not with the standard Public Switched Telephone Network
(PSTN). Commercial deployment of 4G is expected to begin anywhere between
2006 and 2020.
Thus, 4G will provide true broadband wireless access to mobile subscribers.
However, it seems that it will be a latecomer to this game, partly due to industry
woes which are delaying the deployment of 3G.

Fixed-subscriber wireless
A typical fixed-subscriber wireless system which is in current usage is the
Microwave Multipoint Distribution Service (MMDS). MMDS is used mostly
to broadcast TV channels to residential subscribers, as a low infrastructure cost
alternative to cable. It uses the 2.5-2.7 GHz band which is typically divided
into 6 MHz channels, 6 MHz being the bandwidth consumed by a conventional TV channel. However, some providers are offering bidirectional data links at up to
10 Mbps, which is an order of magnitude more than what is currently offered in
the cellular arena. Additionally, MMDS may have to share its band with future
3G systems.
The Local Multipoint Distribution System (LMDS) is a similar system which
was designed from the ground up for broadband wireless access. Each service
provider can exploit over 1 GHz of bandwidth between 28 and 31 GHz. In the
initial stages of its development/deployment, it was targeted mostly at residen-
tial customers and it is designed to provide between 2 and 40 Mbps on typically
asymmetric links (i.e. downlink faster than uplink). However, coverage prob-
lems in this band have essentially restricted LMDS to a business service model,
capable of delivering links of up to 500 Mbps between large buildings (where coverage is less of an issue) to corporate clients, for Internet access or to implement metropolitan area networks (MANs).
MMDS and LMDS systems are now under the umbrella of the IEEE 802.16
working group on broadband wireless access standards. The work of the 802.16
group is specifically targeted at the development and deployment of broadband
wireless metropolitan area networks and covers bands between 2 GHz and 66
GHz. Individual data links’ rates are between 10 and 100 Mbps. In addi-
tion, while these developments stem from the fixed-subscriber wireless trend,
nomadic aspects are now being considered for these networks. In fact, the
subgroup 802.16e specifically supports roaming of users from one base station
to the next. The low bands between 2 and 6 GHz provide the most potential for
mobile applications. Under the derivative standard 802.16a, which covers the
2-11 GHz range, such systems may exploit the unlicensed bands at 2.4 GHz
and 5.8 GHz, thus having to coexist with other broadband alternatives such as
wireless LANs. It is also of interest that 802.16a directly supports advanced
antenna array systems such as space-time processors.
Therefore, these systems already provide broadband wireless access and are
poised to provide varying degrees of mobility, thus competing directly with
cellular.

Wireless LANs
It is a fact that wireless LANs (WLANs) constitute today the bulk of the
broadband wireless access market, thanks to the hugely successful IEEE 802.11b
standard. It is not only widespread but also quasi-universal, being used not only
for private LANs but also to support wireless Internet access in metropolitan
areas by service providers in Australia (Skynet), China (NetCom), Finland
(Jippi) and the United States (NetNearU). This relative homogeneity stands in
stark contrast with the diversity and disparity of cellular telephony standards
throughout the world.
Also, 802.11b (also known as “Wi-Fi”) provides an effective data rate of 6 Mbps and a nominal data rate of 11 Mbps, making it a good complement to a
standard 10 Mbps wired Ethernet network. This implies that 802.11b provides
data rates today in the 3.5G-4G ballpark. However, some caveats apply:
(1) It operates in the 2.4 GHz unlicensed band, which implies that it must
coexist with other unlicensed systems and may eventually fall victim to its
own success by crowding its spectrum space.
(2) In its current state, it does not address mobility at all. Hence, it has limited
robustness against mobility-related channel impediments such as Doppler
spread and it provides no mechanism for roaming or handover, i.e. contin-
uous operation when crossing the boundary between two hotspots³.
(3) Some countries do not allow operation of unlicensed wireless LANs in the
2.4 GHz band⁴.
In addition, products have been on the market for a few years which conform
to 802.11b’s big brother, 802.11a. This standard provides an effective data rate
of 31 Mbps and a nominal data rate of 54 Mbps in the 5 GHz unlicensed band.
Products conforming to 802.11a also typically support proprietary extensions
at up to 72 Mbps. These are in the process of being standardized.
An alternative system, HiperLAN2, was developed and is in use in Europe,
although it is not clear how successful it is commercially. HiperLAN2 uses the
same modulation scheme (OFDM), the same band and provides the same data
rate as 802.11a. However, it is more evolved in terms of higher network func-
tions, providing more sophisticated quality-of-service (QoS) as well as roaming
features designed to integrate into a Wide-Area Network (WAN). These fea-
tures are putting pressure on, and slowly creeping into, 802.11a. Both standards are thus developing to support large network infrastructures and some degree
of mobility, thus converging with cellular.
More recently, 802.11g has made its way onto the market, also provid-
ing a nominal data rate of 54 Mbps, but at 2.4 GHz, thus allowing backward
compatibility with 802.11b. While it has been extremely successful, many
believe that 54 Mbps may not be fast enough in the long run for wireless LANs,
especially with the development of various real-time video applications. To go
beyond the current data rates within the same limited unlicensed bandwidth,

³ A hotspot is the WiFi equivalent of a cell in the cellular telephony world. It is the area serviced by a single access point, characterized by a relatively small size (due to the transmit power constraint) and ad hoc deployment.
⁴ There are fewer and fewer such countries, at least among the industrialized nations. The United Kingdom was one such country; it eventually opened up its 2.4 GHz band for unlicensed networking, industrial, scientific, and medical applications. Not long thereafter, it also opened up the 5 GHz band, thus becoming the 17th of 19 countries in the European Union to allow unlicensed WLANs in both bands.

the spatial dimension will be exploited. Indeed, it seems that 802.11 wireless
LANs will set the stage for the first widespread usage of antenna arrays since
the 802.11n working group is currently developing standards for multi-antenna
WLAN access points and terminals capable of supporting bit rates in excess of
200 Mbps, perhaps even 500 Mbps.

Ultra-Wide-Band (UWB) radio


The radically different modulation technology dubbed Ultra Wide-Band
(UWB) has stirred a lot of interest in the research and standardization com-
munities since it offers potentially huge data rates at low cost and low power.
The use of this modulation type has recently been approved by the Federal
Communications Commission as an overlay system occupying the frequencies
between 3.1 and 10.6 GHz. Since such systems must coexist with various con-
ventional wireless systems in the same band, its expansive bandwidth comes at
the cost of a strict transmit power restriction. Ideally, the energy of a UWB
signal is spread over such a large spectrum that it can be transmitted with very
low power and still be detectable, while being perceived as nothing more than
background noise by competing systems.
However, the power restriction does limit the range of UWB systems and
they are being targeted mostly at home-area or personal-area networks. In fact,
it is likely that UWB will form the basis of Bluetooth II, the successor to the
popular short-range communication standard Bluetooth⁵.
Despite the range limitation, UWB is a promising mechanism to bring broad-
band wireless access to users under certain circumstances. Indeed, the range of
802.11 access points is not very large either.

The big picture


The study of the above four converging axes of development leads inexorably
toward a consensus. It seems clear that broadband access systems of the future
will have the following characteristics:

Ubiquity will be achieved by supporting several standards, each with its specific purpose, strengths and weaknesses. All these standards will be
made to fit in a cohesive framework. Terminals will be able to intelligently
and transparently switch from one standard to another according to the user’s
needs and/or geographic availability.

⁵ Bluetooth is one technology in the Personal Area Network (PAN) domain, characterized by ranges even smaller than WLANs and exploited for applications such as wireless keyboards, wireless mice, headset-to-cell-phone links, and other device-to-device links.

All broadband access systems will be integrated directly into a global networking infrastructure, the latter being based on IPv6, the latest incarnation
of the Internet Protocol.

Access systems will have to operate in highly hostile (i.e. millimeter-wavelength, serious Doppler effects and multipath fading due to mobility)
propagation environments. The problem is compounded by the high bit-
rates which, combined with mobility, result in highly dispersive channels.
Cutting-edge signal processing techniques will be required to compensate
these distortions.

Access systems will be exposed to hostile and unknown interference conditions. Indeed, the existence of parallel systems, possibly of unknown
nature, will have to be tolerated in the same band. Such parallel systems
could be based on UWB which, by definition, operates as an overlay system.
There could also be various unknown systems sharing an unlicensed band
with the system of interest. Finally, the system of interest itself may cause
unpredictable interference to some or all of its users if it is an unstructured,
ad-hoc type network.

Access systems will have to impose tight quality-of-service control, both to guarantee the availability of system resources when requested and to differentiate between different degrees of quality according to the users’ needs.

It seems clear that to deal with this type of changing, evolving and hostile
environment, terminals and base stations will have to incorporate increasing
degrees of intelligence. Space-time processing is bound to play a major role,
especially with respect to unknown/unpredictable interference as well as efficient and flexible usage of spectral resources.

3. Strategies for upgrading capacity


Sectoring and cell splitting are commonly used methods to add capacity to
existing systems. They allow channel reuse at a closer range but require a more
costly infrastructure and may create additional control (handoff) overhead. Cell
splitting is possibly the most expensive option since it requires the installation
of complete new base stations as well as the expenses (rental of space atop high-rises, etc.) and potential legal complications associated with new radio
transmission sites. Adaptive spatial diversity at the base station is a third option
that is arguably the most promising way to significantly augment capacity.
An adaptive array can simultaneously combat fading, noise and interference
(both CCI and ACI), hence allowing more users to coexist. Typical arrays
work by minimizing a single performance measure — e.g. the mean-square
error between the output and the desired signal — and will therefore implicitly
allocate more degrees of freedom to the most severe impediment, be it fading or interference.

4. Benefits of space-time processors


Winters, Salz and Gitlin [Winters et al., 1994] have shown that with 2 or 3
elements, an array can double the capacity of IS-54 mobile radio systems while
with 5 elements, it can be increased 7-fold (i.e. a frequency reuse factor of
1). Adaptive arrays can also be exploited in a more sophisticated manner by
forming highly narrow beams targeted at each individual user, thus allowing
channel reuse within cell (RWC). This effectively adds a new dimension (space) to the multiple-access scheme, and the capacity gains in space are multiplicative with respect to the capacity in time and frequency. Antenna arrays can be
conceived in a modular fashion and can be introduced in existing systems in an
incremental fashion, progressively increasing the diversity order as the traffic
grows. However, to fully realize the potential promise of spatial diversity, it is
necessary to carefully select parameters such as the array topology, the adapta-
tion techniques, etc. Also, the introduction of an array modifies considerably
the multiple-access problem and requires, for maximal gains, changes in the
existing (non-array) multiple-access protocols.

Improved coverage
Since an antenna array with appropriate processing can be used in transmis-
sion to synthesize a pattern and steer a beam in an arbitrary direction, it can help
improve coverage. Indeed, the pattern gain in the direction of the beam may be
sufficient to reach a distant or partially obstructed target otherwise unreachable.
The array can also synthesize a pattern emphasizing one or many reflected paths
to a target for which the direct path is obstructed.
Likewise, appropriate processing and combining of the array elements’ out-
puts can yield similar benefits in reception.
Coverage enhancement is generally considered a marginal benefit of antenna
arrays. However, it exists implicitly and does not need to be engineered into
the transceiver processor driving an array. For these reasons, the coverage
aspect has received relatively little attention in the literature. Nonetheless, see
[Liang and Paulraj, 1995] for a discussion of the impact of the array topology
on coverage extension.

Robustness against fading


Multipath fading is one of the main impediments affecting wireless channels.
It results in rapid fluctuations in the power and phase of the received signal,
caused by the variation in the phase relationships of the various copies of the
signals (travelling through different reflected paths) which add up at the receiver,
either constructively or destructively. Occasionally, multipath fading will cause the power of the received signal to drop significantly below the receiver
threshold for a certain period of time — this is termed a deep fade.
An antenna array can mitigate fading because of the spatial diversity it offers.
Indeed, the multipath reflections add up differently at each antenna element,
leading to signal envelopes which fade at different times. It is then highly
unlikely that all antenna elements experience a deep fade at the same time and
array processing can be designed to exploit this fact.

Interference suppression
An antenna array is capable of interference rejection, i.e. it is capable of
eliminating an unwanted signal existing on the same carrier (or leaking into
the said carrier from an adjacent band) as the desired signal by exploiting the
fact that it is arriving from a different direction, or has a slightly different
spatial signature than the desired signal. In effect, the array can be employed
to synthesize a pattern that will emphasize the desired signal and reduce or eliminate one or more interfering signals, based on the signals’ power angular distributions at the array.
This is arguably the single most important benefit of array processing.

Capacity increase
In the context of wireless multiuser communications, robustness against in-
terference is a factor of paramount importance since interference limits the
capacity of wireless systems. Indeed, as the number of users grows, so does the
interference level. Therefore, the capability of arrays to suppress interference
outlined above can be exploited to augment the user capacity of wireless sys-
tems. For example, the use of an array might make a reduction in the reuse
distance possible, thus augmenting the maximum density of users.
For further capacity increases, the array can support carrier reuse within cell
(RWC), also referred to as Space Division Multiple Access (SDMA). This can
potentially support capacity increases of an order of magnitude, even within
a single isolated cell. It is a promising avenue, but it is more complex and
involves more practical problems than simply using arrays to reduce the reuse
distance by mitigating out-of-cell interferers.

Gain in effective signal power


Adaptive arrays also implicitly offer a certain effective gain in useful signal power. Indeed, the adaptation scheme ensures that the desired signal adds up
constructively across the array, while the white noise does not, thus resulting in
a net gain in SNR at the array’s output. This is similar to a single antenna pro-
viding an antenna gain by integrating the desired signal across a large aperture.
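To make this concrete (a standard result for an M-element array with independent, equal-power noise at each element; not stated explicitly in the text): coherent combining scales the desired signal amplitude by M, hence its power by M², while the noise powers only add to M times the per-element value, for a net SNR gain of

\[ G \;=\; \frac{M^{2}}{M} \;=\; M \quad \Longleftrightarrow \quad 10\log_{10} M \ \text{dB}, \]

i.e. about 6 dB for a four-element array.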

Intelligent reduction of radiated power


Arrays used for transmission can intelligently steer a pattern towards the
desired user. On the other hand, a conventional omnidirectional or sectored
antenna must send its signal over a wide range of directions, thus wasting a lot of
energy. Adaptively steering a pattern also has the advantage of minimizing the
interference to other users. If all transmitters in a complex multi-user wireless
network intelligently direct their energies only where needed, the overall level of
multi-user interference drops, thus allowing even further decreases in transmit
signal power.

Position tracking (geolocation)


By examining the phase relations between the different elements and pro-
vided that the array topology satisfies certain criteria, it is possible to estimate
the directions of arrival of the signals impinging at the array. In fact, direction-
of-arrival (DOA) estimation is a rich and relatively mature research branch. If
two or more arrays at different geographic positions are used jointly, it is possible to pinpoint the position of signal sources.
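For a uniform linear array with element spacing d, for instance, a plane wave arriving from angle θ (measured from broadside) produces an inter-element phase difference

\[ \Delta\phi \;=\; \frac{2\pi d}{\lambda}\,\sin\theta , \]

so measuring Δφ yields θ = arcsin(λΔφ/(2πd)); keeping d ≤ λ/2 ensures the mapping is unambiguous. (This textbook relation is included here for illustration; the chapter itself defers DOA estimation to the literature.)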
This capability is becoming increasingly important now that governments
are demanding that cellular systems be capable of precisely locating a handset
placing a 911 (emergency) call.

5. MIMO and space-time coding


The electromagnetic spectrum is a finite, limited resource which is already
congested. It is widely recognized that the use of antenna arrays (also desig-
nated as spatial diversity) at one or both ends of wireless links is the single best
solution to current and future congestion problems facing wireless networks of
all kinds. Indeed, it was seen in section 4 that the use of antenna arrays at the
base station end only of cellular networks can bring an order of magnitude in-
crease in the maximum number of simultaneous wireless links supportable. An
additional evolutionary threshold is crossed when we consider the use of multi-
ple antenna elements at both the base station and the terminal (subscriber) end
of wireless links, in conjunction with advanced signal processing techniques at
both ends. Such a link is designated a MIMO (multi-input, multi-output) system
and it is known through formal mathematical arguments that it can potentially
support an order of magnitude increase in the bit rate of the overall link, without
consuming additional bandwidth. Much of this promised potential can be ac-
complished through appropriate array processing and/or so-called space-time
coding.
It follows that the combination of an array at the base station (with a large
number of elements and a deep embedding of array processing algorithms in
order to achieve carrier reuse) and smaller arrays at the terminals to support

the MIMO paradigm, can potentially yield the twin benefits of augmented user
capacity and augmented information rate capacity. This combination is desig-
nated multiuser-MIMO or joint MIMO-beamforming processing.

6. Historical evolution of antenna arrays and space-time processors
The birth of radio
The first man-made radio transmission was performed by Heinrich Hertz
in his laboratory in 1886. This experiment sparked considerable interest in
the scientific community and wireless transmission technology evolved rapidly
from that point.
The first usage of an antenna array for telecommunications dates back to
the first transatlantic transmission performed by G. Marconi [Bondyopadhyay,
2000] in December 1901. In those early days, radio reception relied solely
on the power of the received signal. It is the arrival of Lee de Forest’s triode
vacuum tube in 1906 which enabled amplification at both the transmitter and
receiver, thus making widespread radio broadcasting possible.

Radar
Hertz himself experimented with the use of radio waves to measure the
distance to an object. The sinking of the Titanic in 1912 (caused by a collision
with an iceberg) further reinforced the motivation to develop technology capable
of detecting unseen objects. This eventually led to early radar experiments in
the 1920s and 1930s. During that same time period, considerable research was
performed on the use of antenna arrays. Early radars were highly ineffective and
thus unsuccessful at attracting military research funding. The looming threat
of war would change this, however, and the first useful radar system was built in 1935 in Great Britain by Sir R. A. Watson-Watt.
At the beginning of World War II in 1939, all major powers had some form of
radar technology: France, Great Britain, Germany, the United States, Italy and
Japan. Radar would play a determining role in WWII. At the onset of the war,
Germany had superior radar technology but did not feel it necessary to pursue
further development during the conflict. However, the Allies (especially the U.S. and Great Britain) perfected their radar science considerably and surpassed
the Germans with the invention of microwave radar. This technology was an
order of magnitude more precise than previous radars and gave the Allies a clear
strategic advantage in the latter portion of the war.

Phased arrays
The strategic importance of radar (and microwave radar in particular) in win-
ning the war stimulated continued development in radar and associated antenna
technology. World War II radars involved mechanically steerable antennas.
After the war, one motivation to further develop antenna arrays was the desire
to build electrically-steerable antennas for radars with no moving parts. An-
other motivation behind antenna array research was deep space communication,
which called for antenna structures with extremely large apertures. Electrically-
steerable antennas were eventually built in the form of phased arrays. A phased
array is simply an antenna array with each element equipped with a phase shift-
ing device. By adjusting the phase of each element, it is possible to steer the
array pattern in the desired direction.
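As a minimal illustration of this principle (a sketch with arbitrary choices of array size, spacing and steering angle; none of these values come from the text), the far-field pattern of a phase-steered uniform linear array can be computed in a few lines:

```python
import numpy as np

# Uniform linear array: M elements, half-wavelength spacing (assumed values).
M = 8
d_over_lambda = 0.5
theta_steer = np.deg2rad(30)   # desired beam direction

# Phase shifts applied to each element to steer the beam.
n = np.arange(M)
weights = np.exp(-2j * np.pi * n * d_over_lambda * np.sin(theta_steer))

# Evaluate the resulting array factor over all look angles.
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
steering = np.exp(2j * np.pi * np.outer(n, d_over_lambda * np.sin(theta)))
pattern = np.abs(weights @ steering) / M   # normalized array factor

print(f"peak at {np.rad2deg(theta[np.argmax(pattern)]):.1f} degrees")  # ~30.0
```

Changing only the phase vector `weights` repoints the beam, with no moving parts, which is precisely what made phased arrays attractive for post-war radar.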
The earliest practical phased arrays were the basis of the SPS-32 and SPS-33
shipborne radars and were operated by the US Navy starting in 1962. These
could be steered both in elevation (using frequency scan) and in azimuth (using
phase scan). Arrays with large numbers of elements were also developed to
track space objects [Reed, 1969] and for airborne radar applications.
Arrays capable of signal processing operations more sophisticated than simple phase shifting were also developed. Early examples were targeted at radio
astronomy and were effectively spatial filters since they multiplied and averaged
the array outputs [Covington and Broten, 1957], [Mills and Little, 1953].

Information theory and coding


After having spent the WWII years working on military covert wireless com-
munication problems, Claude E. Shannon published in 1948 his famous paper
“A Mathematical Theory of Communication” [Shannon, 1948]. The impor-
tance of this paper, which spawned the field of information theory and modern communication science, and considerably influenced computer science as well, cannot be overemphasized. Two fundamental principles emerge from this paper:

1. To better control and quantify system performance and reception error, it is preferable to represent the information to be transmitted using a finite alphabet, i.e. digital representation, rather than continuous analog waveforms.

2. There is a fundamental limit to the quantity of information that can be
transmitted through a given channel with a given signal-to-noise ratio and
a given bandwidth.

The said fundamental limit is easily calculated based on Shannon’s work and is
designated the channel capacity or Shannon capacity. It is an attainable limit,
i.e. it is in theory possible to transmit at channel capacity (but not above) with
an arbitrary low probability of error. To approach the Shannon capacity, it is
necessary to employ error-correction coding and other signal processing means.
It is a strange and beautifully unique fact in the annals of science that Shannon’s result tells us what is ultimately possible, with little indication as to how it can be achieved. This open problem is at the basis of coding theory, an important subset of information theory.
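For the band-limited additive white Gaussian noise channel, this limit takes the well-known closed form

\[ C \;=\; B \log_2\!\left( 1 + \frac{S}{N} \right) \ \text{bits/s}, \]

where B is the bandwidth in hertz and S/N the signal-to-noise ratio. As a worked example, a 30 kHz channel at 20 dB SNR supports at most about 30000 × log₂(101) ≈ 200 kbps.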
The relatively recent advent of turbo-codes [Berrou and Glavieux, 1996]
has made practical near-Shannon-limit communications possible. Also, low-density parity-check (LDPC) codes, which have been known since 1962 [Gal-
lager, 1962], were rediscovered recently and it was shown that they could per-
form nearly as well as turbo-codes [Mackay, 1999]. Subsequently, LDPC codes
were designed which perform within 0.0045 dB of the Shannon limit, thus sur-
passing the best turbo codes [Richardson et al., 2001].

Adaptive filtering
In parallel with the evolution of antenna arrays, the science of adaptive filtering was born in the late 1950s. At that time, there was already a considerable
body of literature concerning optimal estimation and filtering which can be
traced back to the method of least squares (invented by Gauss in 1795 at age 18
[Gauss, 1809] and rediscovered shortly after by Legendre [Legendre, 1805]) and
work on minimum mean-square error estimation carried out independently by
Kolmogorov [Kolmogorov, 1939], Krein [Krein, 1945] and Wiener [Wiener,
1949].
What exactly is optimal filtering? It essentially boils down to the determi-
nation of a filter which maximizes or minimizes a given performance criterion.
With least squares filtering, the filter transfer function is derived in a determin-
istic fashion. With minimum mean-square error filtering, the transfer function
is derived based on the statistics of the signals involved. The original works
in this area assume that the filters are fixed. For example, Wiener’s solution
is based on the assumption that the statistics of the input and reference signals
are stationary. Optimal filters can be used to predict future values of a signal
(predictor filter) or to estimate a desired signal which is corrupted by additive
noise and/or interference.
Wiener studied the linear prediction problem and formulated the optimal
(in the mean-square error sense) linear continuous-time predictor filter. The
transfer function of such a filter can be obtained by solving an integral equation
known as the Wiener-Hopf equation [Wiener and Hopf, 1931]. This solution
was recast for discrete-time filters by Levinson [Levinson, 1947].
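In the discrete-time, finite-length case, this solution reduces to the familiar normal (Wiener-Hopf) equations: with R the autocorrelation matrix of the filter input and p the cross-correlation vector between the input and the desired signal, the optimal tap vector satisfies

\[ \mathbf{R}\,\mathbf{w}_{o} \;=\; \mathbf{p} \quad \Longrightarrow \quad \mathbf{w}_{o} \;=\; \mathbf{R}^{-1}\mathbf{p}, \]

and Levinson’s recursion exploits the Toeplitz structure of R to solve it in O(N²) operations rather than the O(N³) of general elimination.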
Adaptive filtering entered the scene as it became desirable to have the transfer
function of filters dynamically change over time to adapt to changing signal
statistics. One solution is to extract a static (e.g. Wiener) solution on a block-
by-block basis, exploiting short-term stationarity in the signal. This is termed
block adaptation. However, block adaptation is not always practical since it
requires multi-pass processing, e.g. storage of the block, estimation of the
statistics, computation of the filter coefficients and finally, computation of the
filter output.
On-the-fly, continuously updated solutions with little or no storage requirements were called for. Stochastic gradient algorithms constitute a large class of
such solutions, whose best-known representative is undoubtedly the LMS (least-
mean-square) algorithm developed by Widrow and Hoff in 1959 [Widrow and
Hoff, 1960]. LMS is based on the Wiener solution. However, it functions on an iterative, sample-by-sample basis, thus adjusting to changing statistics. Fur-
thermore, it does not employ the mean-square error as its performance criterion
but rather the instantaneous squared error. This is suboptimal (in the sense that
the instantaneous error is employed here as a noisy approximation of the mean-
square error to determine the direction and magnitude of filter tap changes at
each iteration) but it simplifies the algorithm considerably.
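A minimal sketch of the LMS update in a system-identification setting (illustrative only; the step size, filter length and “unknown” system below are arbitrary assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system to identify (assumed for the demo) and LMS parameters.
h_true = np.array([0.8, -0.4, 0.2])
N, mu = 3, 0.05                     # filter length and step size

w = np.zeros(N)                     # adaptive filter taps
x = rng.standard_normal(5000)       # input signal

for n in range(N, len(x)):
    u = x[n - N + 1:n + 1][::-1]    # most recent N input samples
    d = h_true @ u                  # desired (reference) signal
    e = d - w @ u                   # instantaneous error
    w += mu * e * u                 # LMS update: follow the noisy gradient

print(np.round(w, 3))               # converges towards h_true
```

Note that each update uses only the instantaneous error, exactly the noisy approximation of the mean-square error described above.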
One form of adaptive filtering which became important in the 1960s is adaptive
equalization. It had been known for a long time that telephone lines introduced
distortions in transmitted data signals that caused intersymbol interference.
That is, the transfer function of the telephone channel had (and still has!) a
memory length which is an order of magnitude larger than the duration of a data
symbol. The solution to this problem was to use a filter at the receiver which
could equalize the transfer function of the channel, thus effectively untangling
the ISI. Under ideal conditions, the combination of the telephone channel and
the equalizer formed an essentially memoryless, i.e. ISI-free, channel.
Until the advent of adaptive filtering, equalizers were fixed filters which
could not adjust to dynamic changes in the telephone channel. In 1965, Robert
Lucky devised a procedure to derive the optimal filter according to the so-
called zero-forcing criterion. Being somewhat related to Wiener filtering, zero-
forcing is based on minimizing the peak aggregate distortion on a given symbol
caused by ISI. Lucky produced a second paper in 1966 [Lucky, 1966] detailing
adaptive operation of zero-forcing equalizers. Eventually, the mean-square
error criterion was also applied to the adaptive equalization problem [Gersho,
1969], [Proakis and Miller, 1969].

Adaptive arrays and diversity combiners


In the meantime, military imperatives drove the development of arrays ca-
pable of nulling an unwanted signal, i.e. a hostile jammer. Indeed, a strong
enough hostile signal was sometimes employed to render radar inoperative. To
counter this practice, Howells developed in 1957 a sidelobe canceller which
had the ability to null out a single jammer. This sidelobe canceller was essen-
tially a two-element array capable of producing a deep null in the array pattern
in the desired direction.
It can be argued that the modern adaptive array was born in 1966 when
Applebaum derived an adaptive closed-loop control system which could max-
imize the array output signal-to-noise ratio (SNR) regardless of the nature of
the noise and unwanted interference [Applebaum, 1966]. Shortly thereafter
(1967), Widrow et al. published a description of a similar antenna array, but
based on the LMS algorithm [Widrow et al., 1967]. This constitutes the first
paper in the scientific literature on adaptive arrays.
While phased arrays were developing into adaptive arrays, another form of
antenna array, the diversity combiner, was devised to counter hostile channel
conditions. Diversity refers to the availability at the receiver of many copies
of the desired signal, each being affected by different channel characteristics.
These copies can then be somehow combined to improve overall system perfor-
mance. Indeed, many circumstances exist where the received signal power at
one antenna can momentarily drop below the receiver’s threshold. This can be
due to destructive interference (i.e. mutual cancellation) between several copies
of the signal arriving through different propagation paths (multipath fading).
In such a case, an antenna array can be designed to have sufficient spacing
between the elements to ensure that multipath fading is uncorrelated across
the array. Hence, the probability that all branches will undergo a deep fade
simultaneously is minimal. The first such experimental spatial diversity systems
were reported as far back as 1927.
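The underlying arithmetic is simple (a standard idealization assuming independent branches, not a computation from the text): if each of N branches is in a deep fade with probability p, the probability that all fade simultaneously is

\[ P_{\text{out}} \;=\; p^{N}, \]

so with p = 0.01, a four-branch combiner reduces the outage probability from 10⁻² to 10⁻⁸.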
In 1974, Reed, Mallett and Brennan published a landmark paper “Rapid
convergence rate in adaptive arrays” [Reed et al., 1974] which for the first time
proposed the use of Wiener filter principles in the spatial dimension to form
an adaptive array. Since the Wiener solution assumes stationarity, it must be
applied on a block-by-block basis where the length of a block is chosen to be
small enough to exclude any significant changes in the channels. It also involves
the existence of a training sequence, i.e. a known sequence of bits of a certain
length which is present in every block and is used to estimate the statistical
characteristics of the channels. The said characteristics are then employed
to compute the optimal weight vector in the mean-square error sense, which
constitutes a valid solution for the entire block. This is a radical departure from
the iterative solutions of Widrow, Applebaum et al.
The paper’s title mentions “rapid convergence” because it is possible to
obtain a good quality estimate of the optimal weight vector after a relatively
short training sequence. Many more symbols would be required for the basic
iterative algorithms to converge to a solution.
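A minimal sketch of this block-based (sample matrix inversion) idea, under toy assumptions (a 4-element array, one desired user plus one co-channel interferer, a 64-symbol training block; none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

M, K = 4, 64                          # array elements, training symbols (assumed)
a_des = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(10)))   # desired user
a_int = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(-40)))  # interferer

s = rng.choice([-1.0, 1.0], K)        # known training sequence
intf = rng.choice([-1.0, 1.0], K)     # interfering symbols (unknown to receiver)
noise = 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
X = np.outer(a_des, s) + np.outer(a_int, intf) + noise   # received block

# Estimate second-order statistics from the training block, then solve
# the Wiener (MMSE) weights once for the whole block.
R = X @ X.conj().T / K                # sample covariance matrix
p = X @ s / K                         # cross-correlation with training sequence
w = np.linalg.solve(R, p)             # optimal weight vector (solves R w = p)

y = w.conj() @ X                      # combined output; interferer is nulled
print(f"symbol errors: {np.sum(np.sign(y.real) != s)}")
```

A single matrix solve per block replaces thousands of iterative updates, which is the “rapid convergence” the paper’s title refers to.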
Spatial Wiener filtering was rediscovered in 1980 by Bogachev and Kiselev, who designated it “optimum combining” [Bogachev and Kiselev, 1980]. Their
paper is the first to address adaptive antenna arrays in a modern wireless com-
munication context. That is, they show that the mean-square error minimization implicitly combats both interference (i.e. unwanted man-made signals in
the band of interest) and multipath fading, the two main enemies of modern
multiuser wireless networks. Thus, it is possible to combine the functions of
traditional diversity combiners and interference-rejection adaptive arrays while
minimizing a single cost function.
Subsequently, Winters [Winters, 1984] studied optimum combining in the
context of digital mobile radio (cellular), obtaining simulation results in the
presence of multiple co-channel interferers and multipath fading. Subsequent
works by Winters and others have shown that the interference suppression capability of an array can be utilized to augment the user capacity (i.e. the maxi-
mum number of simultaneously active transmitters in a cell) of cellular wireless
networks.

MIMO systems
The advent of MIMO systems in 1996 with the pioneering work of Foschini
[Foschini, Jr., 1996] signals a profound paradigm shift. Up to that point, an-
tenna arrays were used mostly at one end of communication links to receive and
separate signals coming from a plurality of single-antenna terminals (the users).
The original MIMO (Multiple-Input, Multiple-Output) concept, however, pos-
tulates a point-to-point link between two antenna arrays. Foschini was the first
to formally show that the Shannon capacity of such a link grows linearly with
the number of antennas at both ends, without consuming additional spectrum.
This is possible because the combined arrays are implicitly capable of creating, through spatial discrimination, several independent channels through space.
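In its now-standard form (assuming the channel is unknown at the transmitter and power is split equally across the N_t transmit antennas; the notation is ours, not the chapter’s), the capacity of an N_t × N_r link with channel matrix H at signal-to-noise ratio ρ is

\[ C \;=\; \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{H} \right) \ \text{bits/s/Hz}, \]

which for a well-conditioned H grows linearly with min(N_t, N_r), in agreement with Foschini’s result.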
Foschini also proposed the layered space-time receiver structure which,
through a combination of successive interference cancellation and conventional
error-correction coding, could achieve some of the potential MIMO capacity.
Meanwhile, Alamouti [Alamouti, 1998] proposed a deceptively simple scheme
for obtaining the advantages of diversity combining with a transmitting 2-
element array. It would become apparent later that this constituted the first
space-time block coding (STBC) scheme and, in fact, the first space-time code.
Shortly after, a first paper on space-time trellis coding (STTC) appeared [Tarokh
et al., 1998]. Tarokh et al. [Tarokh et al., 1999] also generalized Alamouti’s
scheme to arbitrary numbers of antennas.
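Alamouti’s scheme can be stated in two lines: symbols s₁ and s₂ are transmitted from the two antennas over two consecutive symbol periods according to the code matrix

\[ \mathbf{S} \;=\; \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix} \]

(rows indexing time, columns indexing antennas). The orthogonality of the columns lets the receiver separate s₁ and s₂ by simple linear processing while collecting full two-branch transmit diversity.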

7. Book outline
References

[Alamouti, 1998] S. Alamouti, “A simple transmit diversity technique for wireless communi-
cations,” IEEE J. Select. Areas Comm., vol. 16, pp. 1451-1458, Oct. 1998.

[Applebaum, 1966] S. P. Applebaum, Adaptive arrays, Syracuse University Research Corporation, Rep. SPL TR 66-1, 1966.

[Berrou and Glavieux, 1996] C. Berrou and A. Glavieux, “Near optimum error correcting cod-
ing and decoding: turbo-codes,” IEEE Trans. Comm., vol. 44, no. 10, Oct. 1996.
[Bogachev and Kiselev, 1980] V. M. Bogachev and I. G. Kiselev, “Optimum combining of signals in space-diversity reception,” Telecommunications and Radio Engineering, vol. 34/35, p. 83, Oct. 1980.

[Bondyopadhyay, 2000] P. K. Bondyopadhyay, “The first application of array antenna,” Proc. IEEE Int. Conf. Phased Array Systems and Tech., pp. 29-32, May 2000.

[Covington and Broten, 1957] A. E. Covington and N. W. Broten, “An interferometer for radio
astronomy with a single lobed radiation pattern,” IRE Trans. Antennas and Prop., vol.
AP-5, pp. 247-255, Jul. 1957.

[Foschini, Jr., 1996] G. J. Foschini, Jr., “Layered space-time architecture for wireless commu-
nication in a fading environment,” AT&T Bell Labs Tech. J., vol. 1, pp. 41-59.

[Gallager, 1962] R.G. Gallager, “Low-density parity-check codes,” IRE Trans. Inform. Theory,
vol. 8, pp. 21-28.

[Gauss, 1809] C. F. Gauss, Theoria motus corporum coelestium in sectionibus conicis solem ambientium, Hamburg, 1809 (English translation: New York: Dover, 1963).

[Gersho, 1969] A. Gersho, “Adaptive equalization of highly dispersive channels for data trans-
mission,” Bell Syst. Tech. J., vol. 48, pp. 55-70.

[Jakes, 1993] W. C. Jakes, Microwave mobile communications, IEEE Press, Piscataway, NJ,
1993.

[Kolmogorov, 1939] A. N. Kolmogorov, “Sur l’interpolation et l’extrapolation des suites stationnaires,” C. R. Acad. Sci. Paris, vol. 208, pp. 2043-2045, 1939.

[Krein, 1945] M. G. Krein, “On a problem of extrapolation of A. N. Kolmogorov,” C. R. (Dokl.) Akad. Nauk SSSR, vol. 46, pp. 306-309, 1945.

[Legendre, 1805] A. M. Legendre, “Méthode des moindres carrés, pour trouver le milieu le plus
probable entre les résultats de différentes observations,” Mem. Inst. France, pp. 149-154.

[Levinson, 1947] N. Levinson, “The Wiener RMS (root-mean-square) error criterion in filter
design and prediction,” J. Math. Phys., vol. 25, pp. 261-278.

[Liang and Paulraj, 1995] J. W. Liang and A. J. Paulraj, “On optimizing base station antenna
array topology for coverage extension in cellular radio networks,” in Proc. 45th IEEE
Vehic. Tech. Conf. (VTC), v. 2, pp. 866-870, 1995.

[Lucky, 1966] R. W. Lucky, “Techniques for adaptive equalization of digital communication systems,” Bell Syst. Tech. J., vol. 45, pp. 225-286, 1966.

[Macdonald, 1979] V. H. Macdonald, “The cellular concept,” Bell Syst. Tech. J., vol. 58, no. 1,
Jan. 1979.

[Mackay, 1999] D.J.C. MacKay, “Good error-correcting codes based on very sparse matrices,”
IEEE Trans. Inform. Theory, vol. 45, pp. 399-431.

[Micrologic, 2002] Cellular2002: a study of the worldwide cellular telephone market, Micro-
logic Research, 2002. 388 pages.

[Mills and Little, 1953] B. Y. Mills and A. G. Little, “A high resolution aerial system of a new
type,” Aust. J. Phys., vol. 6, p. 272, 1953.

[Parsons and Bajwa, 1982] J. D. Parsons and A. S. Bajwa, “Wideband characterization of fading
radio channels,” IEE Proceedings Part F, vol. 129, no. 2, Apr. 1982.

[Proakis and Miller, 1969] J. G. Proakis and J. H. Miller, “An adaptive receiver for digital signaling through channels with intersymbol interference,” IEEE Trans. Info. Theory, vol. IT-15, pp. 484-497, 1969.

[Reed, 1969] J. E. Reed, “The AN/FPS-85 radar system,” Proc. IEEE, vol. 57, no. 3, pp. 324-335, Mar. 1969.

[Reed et al., 1974] I. Reed, J. D. Mallett and L. E. Brennan, “Rapid convergence rate in adaptive arrays,” IEEE Trans. Aerosp. Electron. Syst., vol. 10, no. 6, pp. 853-862, 1974.

[Richardson et al., 2001] S.-Y. Chung, G. D. Forney, Jr., T. J. Richardson and R. L. Urbanke, “On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit,” IEEE Trans. Info. Theory, vol. 47, no. 2, pp. 619-637, 2001.

[Shannon, 1948] C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J., vol. 27, pp. 379-423 and 623-656, 1948.

[Tarokh et al., 1998] V. Tarokh, N. Seshadri and A. R. Calderbank, “Space-time codes for high
data rate wireless communications: Performance criterion and code construction,” IEEE
Trans. Info. Theory, vol. 44, no. 2, pp. 744-765.

[Tarokh et al., 1999] V. Tarokh, H. Jafarkhani and A. R. Calderbank, “Space-time block codes
from orthogonal designs,” IEEE Trans. Info. Theory, vol. 45, no. 5, pp. 1456-1467.

[Widrow and Hoff, 1960] B. Widrow and M. E. Hoff, Jr., “Adaptive switching circuits,” IRE
WESCON Conv. Rec., Pt 4, pp. 96-104.

[Widrow et al., 1967] B. Widrow et al., “Adaptive antenna systems,” Proc. IEEE, vol. 55, pp.
2143-2159.

[Wiener and Hopf, 1931] N. Wiener and E. Hopf, “On a class of singular integral equations,”
Proc. Prussian Acad. Math-Phys. Ser., p. 696.

[Wiener, 1949] N. Wiener, Extrapolation, interpolation and smoothing of stationary time series.
Cambridge, MA: MIT Press (Public version of classified National Defense Research
Report originally produced in 1942).

[Winters, 1984] J. H. Winters, “Optimum combining in digital mobile radio with cochannel
interference,” IEEE J. Select. Areas Comm., vol. 2, no. 4, July 1984.

[Winters et al., 1994] J. H. Winters, J. Salz and R. D. Gitlin, “The impact of antenna diversity
on the capacity of wireless communication systems,” IEEE Trans. on Comm., vol. 42, no.
2/3/4, Feb/Mar/Apr 1994.
