
Traffic Profile

Related terms:

Cloud Infrastructure, Simple Network Management Protocol, Virtual Machine, Infrastructure Traffic, Network Traffic, Service Level Agreement, Traffic Management

Cloud Security
Thomas W. Shinder, ... Debra Littlejohn Shinder, in Windows Server 2012 Security
from End to Edge and Beyond, 2013

Network Bandwidth Control


Our team documented in Private Cloud Security Challenges, part of our Private Cloud Reference Architecture, that one of the concerns of a designer of an IaaS cloud infrastructure is: “A rogue application, client, or DoS attack
might destabilize the datacenter by requesting a large amount of resources. How do
I balance the requirement that individual consumers/tenants have the perception of
infinite capacity with the reality of limited shared resources?”

What you want to prevent is a situation where one or more tenants step
on each other in terms of network access. A compromised tenant might be able
to DoS the network by using some type of network flooding exploit. In that case,
even though only a single tenant is compromised, it could end up
taking down the entire tenant network infrastructure, because other tenants would
no longer have their workloads accessible to client systems.

One way to address this concern is by controlling the network traffic by using
bandwidth shaping or quality of service (QoS) technologies. QoS in Windows Server
2012 is designed to help manage network traffic on the physical network and on
the virtual network, as there is both a Windows QoS and a Hyper-V virtual switch
QoS. Policy-based QoS is designed to manage network bandwidth allocations on the
physical network and can be leveraged by both the virtual machine tenants and the
host systems that comprise the IaaS cloud infrastructure. In that way, you can get very
granular in how you shape traffic from both the host and guest perspectives.

The use of policy-based QoS allows you to specify network bandwidth control based
on the type of application, users, and computers. You can also use Policy-based
QoS to help control bandwidth costs and negotiate service levels with bandwidth
providers or departments (which would be represented as different tenants in the
IaaS cloud infrastructure). Hyper-V QoS enables administrators of an IaaS cloud
infrastructure to provide specific network performance values based on service-level
agreements (SLAs) you set with your tenants. Most importantly, Hyper-V QoS also
helps ensure that no tenant is impacted or compromised by other tenants on their
shared infrastructure, since the tenant virtual machines can have their bandwidth
limited by setting an absolute high limit or by allowing them a certain percentage
of the total available bandwidth on the link.
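The weight-based allocation described above can be illustrated with a little arithmetic: under relative minimum-bandwidth weights, each virtual NIC is guaranteed at least its weight's share of the link when the link is congested. The tenant names, weights, and link speed below are hypothetical.

```python
def guaranteed_shares(weights, link_gbps):
    """Minimum bandwidth each vNIC is guaranteed under relative
    minimum-bandwidth weights: weight / total-weight of the link."""
    total = sum(weights.values())
    return {nic: link_gbps * w / total for nic, w in weights.items()}

# Hypothetical tenants sharing a 10 Gbps link
shares = guaranteed_shares({"TenantA": 20, "TenantB": 30, "Infrastructure": 50}, 10)
# TenantA is guaranteed at least 2 Gbps, TenantB 3 Gbps, Infrastructure 5 Gbps
```

Absolute caps work the other way around: they bound the maximum a tenant can consume regardless of how idle the link is.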

QoS is also useful for making sure that all the infrastructure traffic profiles have the
bandwidth they need. For example, you do not want the Live Migration traffic to step
on the storage traffic and vice versa. Both these traffic profiles are high-throughput,
low-latency traffic profiles, and thus they require QoS in order to operate effectively.

When designing security for your IaaS cloud infrastructure, consider developing
a QoS plan that describes how to create a network fair share environment that
incorporates both the IaaS cloud infrastructure and the tenants.

A typical QoS plan might include the following sections:

▪ SLA: Plan QoS policies based on the tenants’ SLAs.

▪ IaaS cloud infrastructure QoS policy: Determine absolute minimum/maximum bandwidth requirements for each infrastructure traffic profile, or create percentage-of-bandwidth values that can be assigned to each infrastructure traffic profile.

▪ Network utilization: Put together a plan to measure the bandwidth utilization for each of the infrastructure and tenant traffic profiles. Then adjust QoS policies so that they are in line with your objective findings.

You have the ability to apply policies on a per-tenant basis. You do this by creating
multiple virtual NICs in Hyper-V and specifying QoS on each virtual NIC individually.
An example of how to establish QoS per virtual NIC is shown below:

New-NetQosPolicy -Name "NIC Name Description" -NICName -MinBandwidthWeightAction 20


Multiple Access Techniques
Vijay K. Garg, in Wireless Communications & Networking, 2007

6.10 Multicarrier DS-CDMA (MC-DS-CDMA)


Future wireless systems such as a fourth-generation (4G) system will need flexibility
to provide subscribers with a variety of services such as voice, data, images, and
video. Because these services have widely differing data rates and traffic profiles,
future generation systems will have to accommodate a wide variety of data rates.
DS-CDMA has proven very successful for large-scale cellular voice systems, but
there are concerns about whether DS-CDMA is well suited to non-voice traffic. The
DS-CDMA system suffers from inter-symbol interference (ISI) and multi-user interference
(MUI) caused by multipath propagation, leading to a high loss of performance.

With OFDM, the time dispersive channel is seen in the frequency domain as a set
of parallel independent flat subchannels and can be equalized at a low complexity.
There are potential benefits to combining OFDM and DS-CDMA. Basically the
frequency-selective channel is first equalized in the frequency domain using the
OFDM modulation technique. DS-CDMA is applied on top of the equalized channel,
keeping the orthogonality properties of spreading codes. The combination of OFDM
and DS-CDMA is used in MC-DS-CDMA. MC-DS-CDMA [4, 5, 12, 25] marries the
best of the OFDM and DS-CDMA worlds and, consequently, can ensure good
performance in severe multipath conditions. MC-DS-CDMA can achieve a very large
average throughput. To further enhance the spectral efficiency of the system, some
form of adaptive modulation can be used.

Basically, three main designs exist in the literature, namely, MC-CDMA,
MC-DS-CDMA, and multitone (MT)-CDMA. In MC-CDMA, the spreading code is
applied across a number of orthogonal subcarriers in the frequency domain. In
MC-DS-CDMA, the data stream is first divided into a number of substreams. Each
substream is spread in time through a spreading code and then transmitted over
one of a set of orthogonal subcarriers. In MT-CDMA the system undergoes similar
operations as MC-DS-CDMA except that the different subcarriers are not orthogonal
after spreading. This allows higher spectral efficiencies and longer spreading
codes; however, different substreams interfere with one another. The MC-DS-CDMA
transmitter spreads the original data stream over different orthogonal subcarriers
using a given spreading code in the frequency domain.
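As a concrete sketch of the transmitter just described, the fragment below splits a ±1 data stream into substreams and spreads each one in time with a spreading code. This is a simplification for illustration only: a real transmitter works on complex symbols and applies the OFDM modulation step, which is omitted here.

```python
def mc_ds_cdma_tx(bits, num_subcarriers, code):
    """Sketch of an MC-DS-CDMA transmit chain: serial-to-parallel split
    the data stream into substreams, then spread each substream in time
    with a +/-1 spreading code.  Each spread substream would then
    modulate one orthogonal subcarrier (not shown)."""
    # Serial-to-parallel: substream k takes bits k, k+M, k+2M, ...
    substreams = [bits[k::num_subcarriers] for k in range(num_subcarriers)]
    # Time-domain spreading: each +/-1 data bit becomes len(code) chips
    return [[b * c for b in s for c in code] for s in substreams]

chips = mc_ds_cdma_tx([1, -1, 1, -1], num_subcarriers=2, code=[1, -1, 1, -1])
# substream 0 carries bits [1, 1], substream 1 carries bits [-1, -1];
# each bit is expanded into the 4 chips of the spreading code
```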


Traffic Conditioning
Deep Medhi, Karthik Ramasamy, in Network Routing (Second Edition), 2018
18.6.1 Graded Profiles
So far, our discussion has assumed that a single average rate and a burst size are used to
govern the traffic in a flow. However, this might be insufficient for classifying the
traffic into different grades based on its temporal characteristics. Such grading of
traffic allows us to apply a different type of marking, or combination of marking and
policing, for each grade. For instance, a graded profile might specify that traffic
exceeding a rate of M bytes per second is simply marked, and if the excess traffic rate
becomes greater than N bytes per second, the traffic should be immediately discarded.

When the traffic is graded, colors can be used to describe the marking of packets. That
is, the color of a packet identifies whether it conforms to the traffic profile.
For example, a green packet conforms to the committed rate of the
traffic profile; a yellow packet does not conform to the committed
rate, but meets the excess rate of the traffic profile; a red packet meets neither
the committed nor the excess rate of the traffic profile. Green packets
are processed as specified in the SLA and are not candidates for discarding. Yellow
packets are typically candidates for discarding only if the network is congested. Red
packets are immediately discarded.

Next we describe two marking algorithms: single-rate tricolor marking (srTCM) and
two-rate tricolor marking (trTCM). For these, we need to use a few terms: committed
information rate (CIR), committed burst size (CBS), excess information rate (EIR),
and excess burst size (EBS). We have already discussed CIR and CBS. The excess
information rate (EIR) specifies the average rate that is greater than or equal to the
CIR; this is the maximum rate up to which packets are admitted into the network.
Excess burst size (EBS) is the maximum number of bytes allowed for incoming
packets. Packets are in-profile if they meet CIR and CBS (“CIR-conformant”), while
they are out-of-profile if they meet EIR and EBS (“EIR-conformant”).
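The two-rate marker can be sketched with a pair of token buckets. The sketch below is written in the spirit of trTCM (RFC 2698), using the CIR/CBS and EIR/EBS terms defined above; rates are in bytes per second, bucket contents in bytes, and a color-blind marker is assumed.

```python
import time

class TriColorMarker:
    """Two-rate tricolor marker sketch in the spirit of trTCM (RFC 2698):
    a committed bucket (CIR/CBS) and an excess bucket (EIR/EBS)."""

    def __init__(self, cir, cbs, eir, ebs, start=0.0):
        self.cir, self.cbs = cir, cbs   # committed rate (B/s) and burst (B)
        self.eir, self.ebs = eir, ebs   # excess rate (B/s) and burst (B)
        self.tc, self.te = cbs, ebs     # both token buckets start full
        self.last = start

    def mark(self, size, now=None):
        now = time.monotonic() if now is None else now
        dt, self.last = now - self.last, now
        # Refill the buckets at their rates, capped at the burst sizes
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.te = min(self.ebs, self.te + self.eir * dt)
        if self.te < size:              # not even EIR-conformant
            return "red"
        if self.tc < size:              # EIR- but not CIR-conformant
            self.te -= size
            return "yellow"
        self.tc -= size                 # CIR-conformant: in-profile
        self.te -= size
        return "green"
```

Fed a burst of packets at the same instant, the marker returns green while the committed bucket holds out, then yellow while the excess bucket holds out, then red, matching the color semantics described above.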


Virtualization Security
Thomas W. Shinder, ... Debra Littlejohn Shinder, in Windows Server 2012 Security
from End to Edge and Beyond, 2013

Networking Security
Similar to the compute security considerations, many of the same principles that
you use to secure the traditional datacenter network can be applied when securing
the virtualization infrastructure in your cloud. Key considerations
when designing network security in your private cloud include:

▪ Isolate tenant traffic from cloud infrastructure traffic

▪ Isolate tenant traffic from other tenant traffic

▪ Prevent one tenant from using up all bandwidth on the shared network
connection
▪ Secure the different infrastructure traffic profiles

▪ Protect against common network attacks such as ARP spoofing and rogue
DHCP servers
▪ Enable network IDS/IPS for the virtual switch

It is critical that tenant and infrastructure traffic are isolated from one another.
No tenant should ever be able to connect to a host node in the Hyper-V cluster
that forms the basis of the cloud infrastructure. When we speak of infrastructure
traffic, we are referring specifically to cluster/CSV traffic, Live Migration traffic,
management traffic, and storage traffic. There are several approaches you can take
to isolating these various traffic profiles:

▪ Use the Windows Server 2008 R2 approach, where each traffic profile has a
physical NIC dedicated to it. The problem with this approach is that it consumes
a lot of PCI slots and complicates the networking in terms of cabling,
switch port consumption, and switch port configuration. In general, we do not
recommend this approach when securing the virtualization infrastructure for
a Windows Server 2012-based cloud.
▪ Use two separate networks—one for the infrastructure traffic and one for the
tenant traffic. For example, you can create one NIC team for the infrastructure
traffic and one NIC team for the tenant traffic. You can then place each of these
teams on different VLANs. The infrastructure NIC team can then handle all the
infrastructure traffic profiles and the tenant team handles all of the traffic to
and from the tenants. This gives us the critical isolation we require between
the infrastructure and tenant networks. You can take advantage of Windows
QoS to make sure each of the infrastructure traffic profiles gets the bandwidth
it requires.
▪ Use a single network and run both tenant and infrastructure traffic through
the Hyper-V virtual switch. In this network security design pattern, you have
simplified the physical port configuration and the cabling significantly, since
you are dealing with a single NIC team for all traffic profiles. In this case
you take advantage of Port ACLs, 802.1q VLAN tagging, Private VLANs, and
Hyper-V QoS to make sure that all traffic profiles are isolated from each other
and have the bandwidth allotment they require.
Tenants need to be protected from each other. The reason is that, in the best
of all possible worlds, the cloud infrastructure administrators and the cloud service
provider (which would be corporate IT in the example of the private cloud) provide only the
infrastructure on which users can deploy their services (at least in the example of
Infrastructure as a Service). In that case, you provide the consumers of your cloud
service with the virtual machines they need to stand up their services, but what they
do with those services is up to them. If they do not want to deploy security best
practices or do not want to update their machines with monthly security updates,
then that is up to the consumer of the service. What is not up to the consumer is
making sure that rogue or compromised virtual machines cannot compromise other
tenants or the cloud infrastructure. In Windows Server 2012 Hyper-V, you can use
port ACLs and Hyper-V QoS to make sure that tenants are not able to communicate
with one another or the infrastructure and apply QoS policies to make sure that no
tenant is able to execute a network flood-based denial of service attack. In addition,
you might consider using IPsec to isolate the tenants from each other or from the
infrastructure network—in which case you can take advantage of the new Windows
Server 2012 IPsec Task Offload feature (IPsecTO). This enables the virtual machines
to offload IPsec processing from the main processor and put that processing on to
a NIC that can perform this offload function.

In addition, you might want to enable more sophisticated firewalling on the Hy-
per-V virtual switch than just port ACLs. In that case, you can introduce third-party
add-ins that can provide this functionality. We imagine in most cases virtualization
infrastructure admins for private cloud will want to introduce these virtual firewalls
and network security management devices to make sure that tenants are protected
from each other and that the infrastructure is protected from the tenants.

Securing the various forms of infrastructure traffic is important. Consider the traffic
profiles:

▪ Live Migration traffic. Live Migration traffic contains whatever is running in
host memory on a particular node in the Hyper-V cluster. When you move
that information from one node to another, that information must go over
the network. You can rest assured that there is a lot of information in that
data stream that your cloud consumers do not want available to anyone,
including the administrators of the cloud infrastructure and whatever network
IDS systems might be running on the network. Because of the potentially
sensitive nature of the information moving over the Live Migration path,
you will want to secure that with IPsec. While we do not have hard coded
information on the performance impact of using IPsec to protect the Live
Migration traffic, with the advent of modern main processors that have IPsec
processing code in them and IPsec task offload NICs, it is expected that the
performance impact should be nominal.
▪ Cluster/CSV traffic. Without going into the details of Hyper-V clustering, it is
important to note that a lot of traffic moving over the infrastructure network
will be redirected I/O traffic from one cluster node to another, depending on
which node is the coordinator or “owner” of the storage that contains the virtual
machine files. Similar to Live Migration traffic, there is a significant chance
that this data stream will contain proprietary information that the consumer of
the cloud service would prefer not be accessible to anyone. In this case, you will
also want to consider using IPsec to isolate this traffic from network analyzers
run by both legitimate and illegitimate users.
▪ Storage traffic. Approaches to securing storage traffic vary with the storage
protocols and infrastructures you plan to use. For example, if you are using
Fibre Channel, that fibre channel infrastructure is going to be isolated from
your Ethernet network, thus creating a physical segmentation similar to what
you see in the traditional datacenter. Similarly, if you choose to use an Infin-
iband infrastructure to connect to storage, you get physical isolation. But if you
choose iSCSI, that traffic is going to be running over your Ethernet network
and over IP. However, it is not a simple task to sniff iSCSI traffic and determine
the contents of the communications; therefore, encryption of this traffic over
the wire may not be a strong requirement. Windows Server 2012 introduces
the new SMB 3 protocol where you can store virtual machine files in a storage
cluster and have that storage be continuously available. This SMB traffic should
be secured. However, you will not need to use IPsec in this scenario because
SMB 3 supports, out of the box, SMB encryption. Enabling SMB encryption is
as easy as putting a checkmark in a checkbox when you enable the scale-out
file server for applications role in your file server cluster.
▪ Management traffic. In general, management traffic does not contain highly
sensitive information, and the information it does contain is accessible
only to infrastructure administrators, so it does not need to be
isolated or protected from the administrators. This is a different proposition
from the tenant-related traffic (Live Migration and CSV), where the
consumers of the service do NOT want infrastructure admins to have access to
their traffic. In this case, it is up to you to decide whether your management
traffic should be secured on the wire.

You do want to be able to secure the tenants from common network attacks such
as ARP spoofing and rogue DHCP servers. As discussed earlier in this chapter, you
can do that with the new Windows Server 2012 Hyper-V ARP spoof protection and
DHCP authorization features.

Finally, you will want to be able to make sure that you can deploy the same network
security and analysis tools on the Hyper-V virtual network that you deploy on your
physical networks. This means that you will want to be able to hook up IDS/IPS
systems, sophisticated bandwidth management and control systems, and other
network systems that need visibility into all the traffic traversing the Hyper-V virtual
switch. You can do this by enabling the port mirroring feature now available in the
Windows Server 2012 Hyper-V virtual switch.


Advances in Computers: Improving the Web
Kostas Pentikousis, ... Susana Sargento, in Advances in Computers, 2010

5.5 Quality of Service Support for VoIP over WiMAX


As discussed earlier in Section 2.2, the WiMAX MAC ensures that the QoS requirements
of packet flows belonging to different service classes, such as
VoIP, video streaming, and data transfers, are met. Currently, the WiMAX MAC
supports five different QoS classes (see Table II), which allow the system behavior to
be quite customizable with respect to different traffic profiles and the corresponding
applications. The use of QoS classes aims to serve constant and variable bitrate
real-time applications, non-real-time data transmission applications, and applica-
tions that do not require any service level. In practice, the best-effort service class is
still commonly used for all types of packet flows, irrespective of their traffic profile
characteristics.

For VoIP, IEEE 802.16-2009 [4] defines three suitable QoS classes, the use of which
depends on the codec. For CBR voice codecs, the first choice should be the
Unsolicited Grant Service (UGS). UGS is designed for services that generate CBR traffic. This
is the case with simple VoIP codecs that do not support silence suppression and do
not employ a layered structure to dynamically scale VoIP quality. UGS assures that
the traffic flow is offered fixed-size grants that meet its real-time needs.
The mandatory QoS parameters involved in UGS are minimum reserved traffic rate
and maximum latency.

The Real-Time Polling Service (rtPS) class provides QoS assurance to real-time network
services generating variable size packets on a periodic basis, while requiring strict
data rate and delay levels. In rtPS, the WiMAX BS can use unicast polling so
that mobile hosts can request bandwidth. Latency requirements are met when the
provided unicast polling opportunities are frequent enough. The rtPS service class
is more demanding in terms of request overhead when compared to UGS, but is
more efficient for variable size packet flows. The Extended Real-Time Polling Service
(ertPS) combines the advantages of UGS and rtPS. This QoS service class enables the
accommodation of packet flows whose bandwidth requirements vary with time. The
ertPS QoS class parameters include maximum latency, tolerated jitter, and minimum
and maximum reserved traffic rate.

It is important to keep in mind that these QoS classes can assure the required QoS
levels only over the WiMAX link, not the end-to-end delay. For example, maximum
latency here refers to the period between the time that a packet is received by the
convergence sublayer (WiMAX MAC) and until the packet is handed over to the PHY
layer for transmission.
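The choice among the three classes can be summarized as a simple decision rule over codec properties. The function below is an illustration distilled from the discussion above, not a mapping defined by the IEEE 802.16 standard.

```python
def choose_wimax_voip_class(constant_bitrate, silence_suppression, adaptive_rate):
    """Illustrative VoIP-codec-to-QoS-class decision rule (a
    simplification of the guidance in this section, not normative)."""
    if constant_bitrate and not silence_suppression and not adaptive_rate:
        return "UGS"    # fixed-size packets at fixed intervals
    if silence_suppression or adaptive_rate:
        return "ertPS"  # bandwidth requirement varies with time
    return "rtPS"       # variable-size packets on a periodic basis
```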


Introduction to Cisco’s Quality-of-Service Architecture for IP Networks
Vinod Joseph, Brett Chapman, in Deploying QoS for Cisco IP and Next Generation
Networks, 2009

2.20.1 Explicit NULL Advertisement by PE


Let’s take an example of an MPLS virtual private network (VPN) scenario in which
the ingress PE imposes two labels on the incoming IP packet, the innermost label
identifying the VPN, and the outermost label identifying the IGP label switched path
in the core. In the short-pipe and pipe modes, the imposed labels are marked per the SP’s
QoS policy. The penultimate hop pops off one label before forwarding the packet
with the remaining label to the egress PE. The PE receives the packet and classifies
it per service provider QoS policy using the single remaining label that still carries
the EXP per service provider classification.

However, if the ingress PE imposes a single label on an incoming packet (as can
happen in some applications), the penultimate hop router pops off the label and
forwards the IP packet without any label to the PE. In the absence of a label,
the PE router has no knowledge of how the packet was classified by the service
provider during transit (e.g., was it marked as out-of-profile traffic?). The PE can only
take decisions based on the IPP/DSCPs of the IP packet, which in reality identifies
the packet classification per the enterprise’s QoS policy. This might not always be
desirable, in which case an option exists to let the PE advertise an explicit NULL to
the penultimate-hop P (PHP) router via configuring the following command on its
interface toward the P router:

mpls ldp explicit null


Because of explicit NULL advertisement, the PHP P router adds a label to the IP
packet it sends to the PE router. Now the PE router can get the SP QoS classification
information from the label. Note that this will also work in the MPLS VPN case; the
PHP router imposes another label above the single one it sends to the PE router. So,
instead of getting a packet with one label, the PE now gets it with two labels. Both
labels can carry the SP classification information if set by the PHP router. The PE
router uses this information and pops both labels.

This process can also be used in a managed CE setup, where the CE imposes an
MPLS explicit NULL label on the packet being transmitted to the PE and marks the
EXP bits per SP classification policy. Therefore, the PE router classifies the packet
simply by using this label. Note that this is a Cisco-specific feature on the CE and
PE. It is configured on the CE router via the following command:

mpls ip impose explicit null

on the egress interface toward the PE. This model is suitable when the SP provides
a managed CE service to an enterprise. Please note that, though traffic from CE to
PE carries the explicit NULL label, there is no label on the packets from the egress
PE to CE. This is illustrated in Figure 2.35.

Figure 2.35. CE Imposed Explicit NULL
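The effect of explicit NULL on the penultimate-hop router can be modelled in a few lines. The dictionary-based label stack below is purely illustrative (label value 0 is the IPv4 explicit NULL); it shows how popping the only label discards the EXP bits carrying the SP classification, while swapping to explicit NULL preserves them.

```python
def php_pop(stack, advertise_explicit_null):
    """Penultimate-hop behaviour: normally pop the top label; when the
    PE has advertised explicit NULL, swap the top label to label 0
    instead, keeping its EXP bits (the SP's QoS classification)."""
    top, rest = stack[0], stack[1:]
    if advertise_explicit_null:
        return [{"label": 0, "exp": top["exp"]}] + rest
    return rest

# Single-label case: without explicit NULL, the egress PE receives a
# bare IP packet and the SP's in/out-of-profile marking in EXP is lost.
stack = [{"label": 17, "exp": 4}]      # EXP 4: SP classification
print(php_pop(stack, False))           # [] -> classification lost
print(php_pop(stack, True))            # [{'label': 0, 'exp': 4}]
```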


Routing and Traffic Engineering in


Software Defined Networks
Deep Medhi, Karthik Ramasamy, in Network Routing (Second Edition), 2018
11.4.4 Remark: Using Optimization Models in Practice
In the above sections, we presented a number of different models that can be used
for traffic engineering of SDN networks with aggregated flow routing. Here, we
briefly discuss when and how these models may be used.

A general approach is to run such models periodically, based on changes in the traffic
profiles or changes in the service profiles so that the network is traffic re-engineered.
A second option is to use the model as a change model. What this means is that
traffic demands that are working well are untouched, but the model is used only for
traffic demands that have significant changes. In this case, the capacity would be
replaced by residual capacity, and the model would be applied only to the traffic
groups for which the new optimal paths are to be obtained.

Another factor to consider is to conduct a careful statistical study of the current
network and determine if there are certain patterns. For example, there may be
certain traffic demands that remain fairly static, but others change, or, there might
be some bursty elephant flows that need to be treated differently. Thus, in such
situations, a global optimization model might not be able to give a quick turnaround
on the paths needed if the SDN network is very large. For such situations, other
approaches such as MicroTE [96] are worth considering.

Finally, we point out that there is a general misconception that ECMP (Equal-Cost
Multipath) or multipath routing is better to have in a network as it allows load
balancing for traffic engineering. In a recent work [501], it was shown that for large
networks, at a particular instant, it is actually sufficient if almost all demand pairs
use single-path routing, leading to optimal routing. This result is counter-intuitive,
but has been shown to hold for large networks where the number of links is on the
order of the number of nodes, which is typical in most practical networks.


QoS in Wireless Networks
XiPeng Xiao, in Technical, Commercial and Regulatory Challenges of QoS, 2008

WiMAX QoS model


The designers of WiMAX have adopted a QoS model similar to ATM's. That is, the
system uses a connection-oriented architecture. This is different from that in Wi-Fi
or Ethernet, both of which follow a connection-less model. At application setup,
the subscriber establishes a connection with the BS. The advantage of a
connection-oriented architecture is that the resource allocator can associate a particular
connection with a set of QoS parameters and traffic profiles. The classification of
IP/Ethernet/ATM packets to 802.16 packets has been defined in the 802.16 standard.
While the QoS model is similar to ATM's, because of the dynamic nature of the
wireless link, the following differences exist:

1. Dynamic resource allocation: In WiMAX networks, resource allocation can be
done on a frame-by-frame basis. In other words, a connection can get different
amounts of resources at different times.
2. Adaptive Modulation and Coding (AMC): WiMAX allows for link quality feedback
to be available to the transmitter, so that it may select appropriate burst
profiles for transmission. A burst profile is a combination of modulation, coding,
and forward-error correction schemes that is used for data transmission.
For example, if the link quality is very poor, the transmitter can fall back to
more robust modulation and coding schemes. This will cause the data to be
sent at a lower rate, but will ensure that it is received correctly. This minimizes
the impact of errors on the wireless link. The available modulation methods
in 802.16d and their peak transmission rates are described in Table 13-3.

Table 13-3. Modulation and coding schemes for IEEE 802.16d

Modulation   Overall coding rate   Peak data rate in 5 MHz (Mbps)
BPSK         1/2                   1.89
QPSK         1/2                   3.95
QPSK         3/4                   6.00
16-QAM       1/2                   8.06
16-QAM       3/4                   12.18
64-QAM       2/3                   16.30
64-QAM       3/4                   18.36
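A minimal sketch of the AMC fallback logic is shown below, using the modulation/coding profiles and peak rates from Table 13-3. The SNR thresholds are hypothetical placeholders: the real switching points depend on the receiver and the target error rate.

```python
# Burst profiles from Table 13-3, ordered most robust to fastest:
# (modulation, coding rate, peak Mbps in a 5 MHz channel)
PROFILES = [
    ("BPSK",   "1/2", 1.89),
    ("QPSK",   "1/2", 3.95),
    ("QPSK",   "3/4", 6.00),
    ("16-QAM", "1/2", 8.06),
    ("16-QAM", "3/4", 12.18),
    ("64-QAM", "2/3", 16.30),
    ("64-QAM", "3/4", 18.36),
]

# Hypothetical minimum SNR (dB) needed for each profile above
SNR_THRESHOLDS = [3, 6, 9, 12, 15, 19, 21]

def select_burst_profile(snr_db):
    """AMC as described above: pick the fastest burst profile the link
    quality supports, falling back to more robust modulation and coding
    when the SNR is poor."""
    best = PROFILES[0]  # worst case: the most robust profile
    for profile, threshold in zip(PROFILES, SNR_THRESHOLDS):
        if snr_db >= threshold:
            best = profile
    return best
```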
Apart from the above fundamental characteristics, several other key QoS supporting
features were built into the system, as explained below.

First is the capability to have flexible framing. This is needed to optimally use the
available airtime on the wireless link by allowing a subscriber or base station to adapt
to changing conditions on the wireless link. For example, the relative portion of a
frame devoted to uplink/downlink transmission can be dynamically changed from
frame to frame depending on the traffic need. The relative portion of control/data in
sub-frames can also be dynamically changed depending on the network load. This
can make the use of the wireless spectrum more efficient.

Second is the ability to provide symmetric high throughput in both uplink and
downlink directions. This is achieved by having scheduling intelligence at both the
subscriber and the base station, and by allowing the subscriber stations to
communicate their QoS requirements to the base station. The symmetric throughput
is important because WiMAX networks are likely candidates for numerous backhaul
applications that require the transport of aggregated traffic in both directions.

Third is the capability for frame packing and fragmentation. These are effectively QoS
techniques because they allow WiMAX systems to pack many small-sized packets
into a frame, or to break up a large packet into multiple frames. This helps to
prevent a high-priority packet from having to wait a long time for the completion of
transmission of a large low-priority packet. This also enables a more effective use of
the varying bandwidth of the wireless link. For example, when the link between the
user and the BS is in good condition, a large volume of data (in the form of packed
frames) can be rapidly exchanged. When the link is in poor condition, packets may
be fragmented and transmitted to meet some minimum bandwidth goals, yielding
the wireless medium to other users whose links to the BS are in good condition.
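The packing and fragmentation behaviour just described can be sketched as a simple frame builder: small payloads are packed together into one frame, and a payload larger than the remaining space is split across frame boundaries. The payload sizes and frame capacity below are arbitrary illustrative numbers.

```python
def build_frames(packets, frame_capacity):
    """Sketch of MAC packing/fragmentation: pack small packets into a
    frame and fragment a packet that does not fit in the remaining
    space (all sizes in bytes)."""
    frames, current, room = [], [], frame_capacity
    for size in packets:
        while size > 0:
            chunk = min(size, room)     # pack whole, or take a fragment
            current.append(chunk)
            size -= chunk
            room -= chunk
            if room == 0:               # frame full: start a new one
                frames.append(current)
                current, room = [], frame_capacity
    if current:
        frames.append(current)
    return frames

print(build_frames([100, 100, 300], frame_capacity=250))
# -> [[100, 100, 50], [250]]: two small packets packed together, the
#    large packet fragmented across two frames
```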

In summary, the WiMAX standard provides the following key QoS features:

1. Dynamic resource allocation to adapt to the varying condition of the wireless
link and the real-time needs of the application.
2. Adaptive modulation and coding to minimize wireless link errors and increase
throughput.
3. Flexible PHY/MAC framing to maximize the bandwidth available for actual data
transmission.
4. Support for symmetrical high throughput in both uplink and downlink direc-
tions
5. Efficient transport of MAC frames via packing and fragmentation.

QoS services
With these key QoS features, the IEEE 802.16 standard defines four services with
different performance requirements. Each service is associated with a set of perfor-
mance parameters. The four types of services are:

• Unsolicited Grant Service (UGS)

• Real-time Polling Service (rt-PS)

• Non-real-time Polling Service (nrt-PS)

• Best-Effort service (BE)

These services are mandatory, in the sense that any higher-layer application will have
to be mapped to a WiMAX connection that belongs to one of the above four services.
Thus, any standards-compliant WiMAX system must implement the above services.
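Such a mapping could be sketched as a simple lookup. The table entries below are hypothetical assumptions for illustration; actual classification rules are operator-defined.

```python
# Hypothetical application-to-service mapping; the keys and the class chosen
# for each application are assumptions for illustration only.
SERVICE_MAP = {
    "voip_no_silence_suppression": "UGS",
    "mpeg_video": "rt-PS",
    "web_browsing": "nrt-PS",
    "email": "BE",
}

def classify(application, default="BE"):
    """Map a higher-layer application to one of the four mandatory services."""
    return SERVICE_MAP.get(application, default)
```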

It is worth noting that WiMAX does not confine an implementation to having
only four classes. Nor does it impose any specific queueing implementation at the
subscriber station or the base station. An operator/vendor may have multiple classes
of traffic, and queue them in any manner, e.g., per connection, per application, or
per class.

Unsolicited Grant Service (UGS)

This service supports real-time data streams consisting of fixed-size data packets
transmitted at periodic intervals, such as voice-over-IP without silence suppression.
It is analogous to ATM's CBR service. The mandatory QoS service-flow parameters
for this service are:

• Maximum Sustained Traffic Rate,

• Maximum Latency,

• Tolerated Jitter, and

• Request/Transmission Policy.

These applications require a fixed-size grant on a real-time periodic basis. Thus, to
a subscriber station with an active UGS connection, the BS provides a fixed-size
data grant at periodic intervals based on the Maximum Sustained Traffic Rate of the
service flow.
service flow. The Request/Transmission Policy for UGS service is set such that the
subscriber station is prohibited from using any contention request opportunities
for a UGS connection. To reduce bandwidth consumption, the BS does not poll the
subscriber stations with active UGS connections. If a subscriber station with an active
UGS connection needs to request bandwidth for a non-UGS connection, it may do
so by setting a bit in the MAC header of an existing UGS connection to indicate to
the BS that it wishes to be polled. Once the BS detects this request, the process of
individual polling is used to satisfy the request.
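The size of that periodic grant follows directly from the Maximum Sustained Traffic Rate and the frame duration. A minimal sketch, deliberately ignoring MAC/PHY overheads:

```python
import math

def ugs_grant_bytes(max_sustained_rate_bps, frame_duration_ms):
    """Fixed per-frame UGS data grant, in bytes, sized so that one grant per
    frame sustains the flow's Maximum Sustained Traffic Rate.
    MAC/PHY overheads are deliberately ignored in this sketch."""
    bits_per_frame = max_sustained_rate_bps * frame_duration_ms / 1000.0
    return math.ceil(bits_per_frame / 8)
```

For example, a 64 kbps voice stream with a 5 ms frame would receive a 40-byte grant every frame.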

Real-time Polling Service (rt-PS)

This service supports real-time data streams consisting of variable-sized data packets
that are transmitted at fixed intervals, such as MPEG video. It is analogous to ATM's
rt-VBR service. The mandatory QoS service-flow parameters for this service are:

• Minimum Reserved Traffic Rate

• Maximum Sustained Traffic Rate

• Maximum Latency

• Request/Transmission Policy

For this type of service, the BS must provide periodic unicast request opportunities
that meet the real-time needs of the flow. In these request opportunities, the
subscriber station can specify the size of the desired grant. The request overhead
for this service is more than that of UGS, but it supports a variable grant size,
thus improving its transmission efficiency. The Request/Transmission Policy is
set such that the subscriber station is prohibited from using any contention request
opportunities for such connections.

Non-real-time Polling Service (nrt-PS)


This service supports delay-tolerant data streams, consisting of variable-sized data
packets for which a minimum data rate is required, such as web browsing. It is
analogous to ATM's nrt-VBR service. The mandatory service flow parameters for this
type of service are:

• Minimum Reserved Traffic Rate

• Maximum Sustained Traffic Rate

• Traffic Priority

• Request/Transmission Policy

Such applications require a minimum bandwidth allocation. The BS must provide


unicast request opportunities on a regular basis that ensure that the service receives
request opportunities even during network congestion. The Request/Transmission
Policy is set such that subscriber stations are allowed to use contention request
opportunities. Thus, the subscriber station can use contention request opportunities
as well as unicast request opportunities to request bandwidth.

Best-Effort (BE) service

This service is designed to provide efficient transport for best-effort traffic with no
explicit QoS guarantees. The mandatory service flow parameters for this service are:

• Maximum Sustained Traffic Rate

• Traffic Priority

• Request/Transmission Policy

These applications do not require any minimum service level and, therefore, can
be handled on a “space available” basis. These applications share the remaining
bandwidth after allocation to the previous three services has been completed. The
Request/Transmission Policy is set such that the subscriber station is allowed to use
contention request opportunities.

In summary, the four service classes, their target applications, and the performance
parameters are described in Table 13-4.

Table 13-4. 802.16 service classes and their characterizing parameters

Service      Applications                        Parameters
UGS          Uncompressed voice, TDM circuits    Maximum Sustained Traffic Rate;
                                                 Maximum Latency; Tolerated Jitter;
                                                 Request/Transmission Policy
rt-PS        Video, VoIP                         Minimum Reserved Traffic Rate;
                                                 Maximum Sustained Traffic Rate;
                                                 Maximum Latency;
                                                 Request/Transmission Policy
nrt-PS       Web browsing, interactive data      Minimum Reserved Traffic Rate;
             applications                        Maximum Sustained Traffic Rate;
                                                 Traffic Priority;
                                                 Request/Transmission Policy
Best Effort  Email, FTP                          Maximum Sustained Traffic Rate;
                                                 Traffic Priority;
                                                 Request/Transmission Policy

A few points regarding WiMAX QoS are worth noting:

First, the traffic scheduler at the BS decides the allocation of the physical slots to
each subscriber station on a frame-by-frame basis. While making allocations, the
scheduler must account for the following:

• Scheduling service specified for the connection

• Values assigned to the connection's QoS parameters

• Queue sizes at the subscriber stations

• Total bandwidth available for all the connections

Second, although requests to the BS are made on a per-flow basis, the grants by the
BS are issued on a per-subscriber station basis, without distinguishing individual
flows at the subscriber station. This means that there needs to be a local scheduler at
each subscriber station that allocates the bandwidth grants from the BS among the
competing flows. This model was adopted in 802.16 for two reasons. (1) Granting
on a per-subscriber station basis reduces the amount of state information that the
BS must maintain. (2) Since the local and link conditions can change dynamically,
having per-subscriber station grants gives a subscriber station scheduler the
flexibility to assign resources to more important new flows.
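Since the standard leaves the local scheduler unspecified, one possible policy for splitting a per-subscriber-station grant among flows is strict priority. The flow tuple layout and the priority ordering below are assumptions for illustration:

```python
def distribute_grant(grant_bytes, flows):
    """Split one per-subscriber-station grant among that station's flows.

    flows: list of (name, priority, queued_bytes), where a lower priority
    value means a more important flow. Strict priority is only one possible
    choice; the standard leaves the local scheduler to the implementer.
    """
    allocation, remaining = {}, grant_bytes
    for name, _priority, queued in sorted(flows, key=lambda f: f[1]):
        take = min(queued, remaining)    # serve as much as the grant allows
        allocation[name] = take
        remaining -= take
    return allocation
```

With a 500-byte grant, a UGS flow and an rt-PS flow would be served in full before any best-effort backlog gets the remainder.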

Third, the scheduler used at the BS/subscriber station, a critical component of
the WiMAX QoS architecture, is not specified in the standard. It is up to the
implementers to decide. Therefore, this is an area of considerable research [Iyengar]
[Rath1][Rath2][Sharma1][Sharma2].



On the self-similarity of traffic generated by network traffic simulators
Diogo A.B. Fernandes, ... Pedro R.M. Inácio, in Modeling and Simulation of Computer Networks and Systems, 2015

1 Introduction
Network traffic simulators aim to imitate as faithfully as possible the many different
properties of real network traffic. This enables modeling network traffic accurately
and, therefore, makes it possible to study simulated network traffic that could be
otherwise impossible to obtain in real network environments. In addition, it might
also be useful to first design and study networks on a simulator in order to check
if the setup is correct and works well under the requirements, before setting up the
actual physical infrastructure. However, simulating networks and network traffic is
a complicated task because it requires modeling network components and equip-
ment, as well as links connecting them. Additionally, modeling networked applica-
tions is a highly complex task, to say the least, since the user and protocols influence
the behavior of the applications. For example, studies [1] have focused on matching
the best distributions and inherent parameters to model the packet lengths, the bit
count per time unit, and the interarrival times of network traffic at both sources and
aggregation points for web browsing, streaming, instant messaging, Voice over IP
(VoIP), and Peer-to-Peer (P2P) traffic profiles, for instance. The generation of network
traffic is directly dependent on the user behavior and interaction with the computer
and installed applications. On the other hand, simulating network protocols may
be a less arduous task since it is only required to implement them according to the
specifications.

Network traffic simulation is useful for both researchers and industry practitioners.
Several tools provide the capability of simulating network environments together
with live interactions within the simulation between network nodes and services or
applications installed atop. Such tools allow defining the duration of simulations and
feature monitoring functionalities to watch the simulation workflow evolve through
time according to the defined parameters. This is helpful to solve optimization and
simulation problems seeking suitable inputs to some desired outputs and searching
appropriate outputs given known inputs, respectively. However, how realistic the
simulated network traffic is may be somewhat questionable, because modeling all
the network details cannot be done either completely or perfectly. In this respect,
making simulation results more reassuring requires assessing whether the proper-
ties embedded within simulated network traffic flows are comparable and compliant
with the ones observed on real computer networks.
Self-similarity is known to be a statistical property of the bit count per time unit
of network traffic in network aggregation points of local area network (LAN) and
wide area network (WAN) environments. Self-similarity implies the network traffic is
characterized by fractality and by the well-known burstiness phenomenon.
The former means that an object appears the same regardless of the scale, while
the latter means that network traffic volume activity may be composed of lengthy
periods of data transmission followed by periods of weak activity. These two network
traits cause network traffic spikes to ride on bigger waves that, in turn, ride on even
larger swells [2]. The knowledge of self-similarity is crucial in order to efficiently
design routers in terms of both hardware and software, notably with respect to the
lengths of packet queues. Many methods available in the literature allow generation
of sequences of values with the self-similar property embedded by default, the
aggregation of network traffic mentioned above being one of those methods. In
turn, the intensity of the self-similar effect is usually measured by means of the
well-known Hurst parameter, for which several estimators also exist in the literature.
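As an illustration, the Variance-Time estimator mentioned later in this chapter can be sketched as follows. For a self-similar process, the variance of the m-aggregated series decays as m^(2H-2), so H is recovered from the slope of a log-log regression. This is a minimal sketch, not the authors' implementation:

```python
import math

def hurst_variance_time(series, block_sizes):
    """Variance-Time estimate of the Hurst parameter H.

    The variance of the m-aggregated series of a self-similar process decays
    as m^(2H - 2); a least-squares fit of log(variance) against log(m)
    therefore yields a slope of 2H - 2.
    """
    def variance(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / len(xs)

    log_m, log_v = [], []
    for m in block_sizes:
        # Aggregate the series into non-overlapping blocks of size m.
        aggregated = [sum(series[i:i + m]) / m
                      for i in range(0, len(series) - m + 1, m)]
        log_m.append(math.log(m))
        log_v.append(math.log(variance(aggregated)))

    # Ordinary least-squares slope of log(variance) versus log(m).
    n = len(log_m)
    mx, my = sum(log_m) / n, sum(log_v) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log_m, log_v))
             / sum((x - mx) ** 2 for x in log_m))
    return 1 + slope / 2          # slope = 2H - 2
```

For independent noise the estimate comes out near H = 0.5; persistent self-similar traffic yields values between 0.5 and 1.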

In the light of what was discussed above, it is therefore important to determine
if the self-similar property is present in aggregate traffic produced by network
simulators. As will be detailed in a subsequent section, the tools this chapter
studies are the Network Simulator 3 (NS3) and OMNeT++. Herein, these tools are
utilized to design network topologies and simulate aggregate traffic for posterior
analyses. Such analyses focus on estimating the Hurst parameter by means of the
Rescaled Range Statistics (R/S) and Variance Time (VT) methods, and on computing
the autocorrelation of the resulting datasets. Therefore, the contributions of this
chapter are twofold. First, the self-similarity property is explained with emphasis on
its influence and impact on network traffic aggregation points and on network traffic
analysis in general. Second, the mentioned network simulators are reviewed and are
studied for their compliance with self-similarity in simulated network traffic.

The remainder of this chapter is structured as follows. Section 2 explains the the-
oretical background of self-similarity and of the Hurst parameter. Section 3 then
describes the self-similar phenomena observed in network traffic and discusses
this property thoroughly. Section 4 demonstrates by means of empirical analyses
whether the self-similar effect is noticed on traffic simulated by popular tools. Finally,
Section 5 concludes the chapter.


Technical Challenges
XiPeng Xiao, in Technical, Commercial and Regulatory Challenges of QoS, 2008
Inter-Vendor Interoperability Challenge
People that have been involved in the design of a router or switch know that the
design and implementation of the QoS part (or to be exact, the traffic management
part) is one of the most difficult parts. One major reason is that the lack of traffic
management deployment in the field deprives the design team of feedback. As a
result, if different design team members have different ideas, there is no authority to
arbitrate. Consequently, implementations of the same traffic management function
at different vendors, or at different products of the same vendor, can be slightly
different. Sometimes, even the implementations at different parts (for example, a
POS line card and an Ethernet line card) of the same system can be different. Below
we give a few examples to show the challenge created by such difference. These
examples are well known among the developer community but are not so well known
among the user community.

Example 1. Different Handling of IPG in Shaping


Ethernet frames have clearly defined beginning and ending boundaries, or delim-
iters. These are marked by special characters and a 12-byte Inter-Packet Gap (IPG)
that dictates the minimum amount of space or idle time between packets. Some
shaping implementations take the IPG into consideration while some don't. As a
result, to shape a user traffic stream to 10 Mbps, the shaper may be configured at
10 Mbps if the shaping implementation already excludes IPG, or more than 10 Mbps
otherwise. In the second case, the exact value of the shaper has to depend on the
average packet size, making it non-deterministic. This creates network management
complexity. In the second case, even if the shaper is originally configured properly to
some value over 10 Mbps, another network operator unaware of the IPG difference
may think that it is a misconfiguration (because it doesn't match the user's traffic
profile), and mistakenly change it back to 10 Mbps.

Because of such implementation differences, configuring a single device can already
be challenging. Making devices from different vendors interoperable would be even
more challenging.
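The IPG arithmetic above can be made concrete with a small sketch. The function name and parameters are illustrative; only the 12-byte IPG mentioned in the text is accounted for:

```python
def shaper_rate_bps(target_bps, avg_frame_bytes, counts_ipg, ipg_bytes=12):
    """Rate to configure on a shaper so the user receives target_bps of
    frame data. If the implementation counts the 12-byte Inter-Packet Gap,
    the configured rate must be inflated by the per-frame IPG overhead,
    which unavoidably depends on the average frame size."""
    if not counts_ipg:
        return target_bps
    return target_bps * (avg_frame_bytes + ipg_bytes) / avg_frame_bytes
```

For 1200-byte average frames, an IPG-counting shaper would have to be set to 10.1 Mbps to deliver 10 Mbps of frame data; if the traffic mix changes, that configured value is wrong again, which is exactly the management problem described above.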

Example 2. Different Handling of Bandwidth Sharing among WFQ Queues
A typically envisioned traffic contract involving multiple COS's, for example, a Pre-
mium class and a Best-Effort class, would generally go as follows. For the total
physical pipe of 10 Mbps, Premium class can go up to 2 Mbps, and if there is no
premium traffic, best effort can go up to 10 Mbps. This is normally enabled by a
WFQ mechanism, in which two queues are configured, one with 20 percent link
bandwidth and the other with 80 percent bandwidth. Traditional implementation
of WFQ would allow the premium queue to get full link bandwidth if there is no
best-effort traffic. After all, this is how WFQ is supposed to work. Some vendor's
“advanced” WFQ implementation would not allow the premium queue to get more
than 20 percent of the link bandwidth even if there is no packet in the best-effort queue.
After all, this is what the SLA specifies. While one can argue which implementation
is correct, in reality, both implementations exist. This again creates interoperability
complexity.
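The two interpretations can be contrasted in a toy allocation model. This is a per-interval fluid approximation for illustration, not an actual WFQ implementation:

```python
def allocate(link_bps, weights, demands, work_conserving=True):
    """One scheduling interval of a fluid WFQ approximation.

    weights and demands are dicts keyed by queue name, with weight fractions
    summing to 1. The work-conserving variant redistributes bandwidth left
    unused by idle queues; the capped variant never lets a queue exceed its
    weighted share, mirroring the "advanced" behavior described above.
    """
    # Each queue first gets at most its weighted share.
    share = {q: min(demands[q], weights[q] * link_bps) for q in weights}
    if work_conserving:
        # Hand leftover bandwidth to queues that still have demand.
        leftover = link_bps - sum(share.values())
        for q in share:
            if demands[q] > share[q] and leftover > 0:
                extra = min(demands[q] - share[q], leftover)
                share[q] += extra
                leftover -= extra
    return share
```

With a 10 Mbps link, a 20/80 weight split, and no best-effort traffic, the work-conserving variant lets the premium queue send 5 Mbps, while the capped variant pins it at 2 Mbps.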

Other challenges of configuring traffic management involve:

1. Lack of guidelines on how to configure WRED parameters, for example, what
discard probability should be configured at what average queue length. In
some cases, this is not configurable. That is not necessarily a bad thing because
few people know how to configure it anyway. But different vendors’ settings
can be different. Therefore, it is hard to make WRED really useful.

One would think that [RFC2309], entitled “Recommendation on Queue Management
and Congestion Avoidance in the Internet,” would give some guidelines on how to
do such a configuration. But it didn't. It only provides some high-level guidelines as
quoted below.

• RECOMMENDATION 1: Internet routers should implement some active queue
management mechanism to manage queue lengths, reduce end-to-end latency,
reduce packet dropping, and avoid lock-out phenomena within the Internet.
The default mechanism for managing queue lengths to meet these goals in
FIFO queues is Random Early Detection (RED) [RED93]. Unless a developer
has reasons to provide another equivalent mechanism, we recommend that
RED be used.
• RECOMMENDATION 2: It is urgent to begin or continue research, engineering,
and measurement efforts contributing to the design of mechanisms to deal
with flows that are unresponsive to congestion notification or are responsive
but more aggressive than TCP.

To a certain extent, this testifies to the lack of serious deployment of RED and WRED.
Otherwise, there would be follow-up RFCs giving more specific instructions.

2. Lack of guidelines on how to configure the output rate of a queue, especially
for a queue at a device in the middle of the network. See the “Technical
Solution” section of Chapter 4 for more information on this topic.
3. Lack of statistics for various queues inside a network device, for example,
average queue length and how many packets are dropped. Counters to provide
statistics mostly use Static Random Access Memory, which is expensive. Con-
sequently, there may not be sufficient counters provisioned during hardware
design. This creates difficulty for the network operators to know what's
going on, or how to fine-tune their traffic management parameters. To draw
an analogy, this is like trying to improve one's shooting skill in a completely
dark room. After you shoot, you don't even know whether it's a hit or a miss,
let alone how to improve.
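For reference, the classic RED drop curve that a WRED implementation applies per class can be sketched as follows. As noted above, no standard prescribes the thresholds, so the values used here are purely illustrative:

```python
def wred_drop_probability(avg_queue_len, min_th, max_th, max_p):
    """Classic RED drop curve: no drops below min_th, a linear ramp up to
    max_p at max_th, and forced drops beyond max_th. The threshold and
    max_p values are illustrative; no RFC prescribes them."""
    if avg_queue_len < min_th:
        return 0.0
    if avg_queue_len >= max_th:
        return 1.0
    return max_p * (avg_queue_len - min_th) / (max_th - min_th)
```

With min_th = 20, max_th = 80, and max_p = 0.1, an average queue of 50 packets would see a 5 percent drop probability; the open question the text raises is precisely how an operator should pick those three numbers.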

