Traffic Profile
Thomas W. Shinder, ... Debra Littlejohn Shinder, in Windows Server 2012 Security
from End to Edge and Beyond, 2013
What you want to prevent is a situation where one or more tenants step on each
other in terms of network access. A compromised tenant might be able to DoS the
network by using some type of network flooding exploit. In this case, even though
only a single tenant is compromised, that tenant could end up compromising the
entire tenant network infrastructure, because the other tenants' workloads would
no longer be accessible to client systems.
One way to address this concern is by controlling the network traffic by using
bandwidth shaping or quality of service (QoS) technologies. QoS in Windows Server
2012 is designed to help manage network traffic on the physical network and on
the virtual network, as there is both a Windows QoS and a Hyper-V virtual switch
QoS. Policy-based QoS is designed to manage network bandwidth allocations on the
physical network and can be leveraged by both the virtual machine tenants and the
host systems that comprise the IaaS cloud infrastructure. In that way, you can get
very granular in how you shape traffic from both the host and guest perspectives.
The use of policy-based QoS allows you to specify network bandwidth control based
on the type of application, users, and computers. You can also use policy-based
QoS to help control bandwidth costs and negotiate service levels with bandwidth
providers or departments (which would be represented as different tenants in the
IaaS cloud infrastructure). Hyper-V QoS enables administrators of an IaaS cloud
infrastructure to provide specific network performance values based on service-level
agreements (SLAs) that you set with your tenants. Most importantly, Hyper-V QoS
also helps ensure that no tenant is impacted or compromised by other tenants on
the shared infrastructure, since each tenant's virtual machines can have their
bandwidth limited, either by setting an absolute upper limit or by allotting them a
percentage of the total available bandwidth on the link.
QoS is also useful for making sure that all the infrastructure traffic profiles have the
bandwidth they need. For example, you do not want the Live Migration traffic to step
on the storage traffic, and vice versa. Both of these are high-throughput, low-latency
traffic profiles, and thus they require QoS in order to operate effectively.
When designing security for your IaaS cloud infrastructure, consider developing
a QoS plan that describes how to create a network fair share environment that
incorporates both the IaaS cloud infrastructure and the tenants.
You have the ability to apply policies on a per-tenant basis. You do this by creating
multiple virtual NICs in Hyper-V and specifying QoS on each virtual NIC individually.
An example of how to establish QoS per virtual NIC is shown below:
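A minimal sketch using the Windows Server 2012 Hyper-V PowerShell cmdlets follows;
the switch, adapter, and virtual machine names are placeholders for illustration:

    # Create a virtual switch whose minimum bandwidth is managed by relative weights
    New-VMSwitch "TenantSwitch" -NetAdapterName "NIC1" -MinimumBandwidthMode Weight

    # Guarantee one tenant's virtual NIC a relative share of the link under contention
    Set-VMNetworkAdapter -VMName "TenantA-VM" -MinimumBandwidthWeight 20

    # Cap another tenant's virtual NIC at an absolute limit, in bits per second
    Set-VMNetworkAdapter -VMName "TenantB-VM" -MaximumBandwidth 500000000

The weight guarantees each virtual NIC a proportional share of the link when it is
congested, while the absolute cap prevents any single tenant from flooding the
shared network.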
With OFDM, the time-dispersive channel is seen in the frequency domain as a set
of parallel independent flat subchannels and can be equalized at low complexity.
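As a standard textbook illustration of this point (not drawn from the text itself),
each subcarrier $k$ of an OFDM symbol sees a flat channel, so equalization reduces
to one complex division per subcarrier:

    $Y_k = H_k X_k + N_k, \qquad \hat{X}_k = Y_k / H_k$

where $H_k$ is the channel frequency response at subcarrier $k$, $X_k$ the
transmitted symbol, and $N_k$ the noise.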
There are potential benefits to combining OFDM and DS-CDMA. Basically, the
frequency-selective channel is first equalized in the frequency domain using the
OFDM modulation technique. DS-CDMA is applied on top of the equalized channel,
keeping the orthogonality properties of the spreading codes. The combination of
OFDM and DS-CDMA is used in MC-DS-CDMA. MC-DS-CDMA [4, 5, 12, 25] marries the
best of the OFDM and DS-CDMA worlds and, consequently, it can ensure good
performance in severe multipath conditions. MC-DS-CDMA can achieve very large
average throughput. To further enhance the spectral efficiency of the system, some
form of adaptive modulation can be used.
Traffic Conditioning
Deep Medhi, Karthik Ramasamy, in Network Routing (Second Edition), 2018
18.6.1 Graded Profiles
So far, our discussion assumed that a single average rate and a burst size are used to
govern the traffic in a flow. However, this might be insufficient for classifying the
traffic into different grades based on its temporal characteristics. Such grading of
traffic allows us to apply different types of marking, or combinations of marking and
policing, for each grade. For instance, a graded profile might specify that traffic
exceeding a rate of M bytes per sec is simply marked, and that if the excess traffic
rate becomes greater than N bytes per sec, it should be immediately discarded.
When the traffic is graded, colors can be used to describe the marking of packets.
That is, the color of a packet identifies whether it is conforming to the traffic
profile. For example, a green packet means that it is conforming to the committed
rate of the traffic profile; a yellow packet means that it is not conforming to the
committed rate, but meets the excess rate of the traffic profile; a red packet meets
neither the committed nor the excess rate of the traffic profile. Green packets are
then processed as specified in the SLA and are not candidates for discarding. Yellow
packets are typically candidates for discarding only if the network is congested. Red
packets are immediately discarded.
Next we describe two marking algorithms: single-rate tricolor marking (srTCM) and
two-rate tricolor marking (trTCM). For these, we need to use a few terms: committed
information rate (CIR), committed burst size (CBS), excess information rate (EIR),
and excess burst size (EBS). We have already discussed CIR and CBS. The excess
information rate (EIR) specifies the average rate that is greater than or equal to the
CIR; this is the maximum rate up to which packets are admitted into the network.
Excess burst size (EBS) is the maximum number of bytes allowed in a burst beyond
the committed burst size. Packets are in-profile if they meet CIR and CBS
(“CIR-conformant”), while they are out-of-profile if they meet only EIR and EBS
(“EIR-conformant”).
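To make the marking concrete, below is a minimal Python sketch of a color-blind
single-rate tricolor marker in the spirit of RFC 2697, where a single rate (CIR)
feeds a committed bucket of depth CBS that spills over into an excess bucket of
depth EBS; the class and variable names are our own illustration, not taken from
the text:

    import time

    class SingleRateTricolorMarker:
        """Color-blind srTCM sketch: a committed bucket (depth CBS) and an
        excess bucket (depth EBS), both fed by tokens arriving at CIR."""

        def __init__(self, cir_bytes_per_sec, cbs_bytes, ebs_bytes):
            self.cir = cir_bytes_per_sec
            self.cbs = cbs_bytes
            self.ebs = ebs_bytes
            self.tc = cbs_bytes            # tokens in the committed bucket
            self.te = ebs_bytes            # tokens in the excess bucket
            self.last = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            new_tokens = (now - self.last) * self.cir
            self.last = now
            filled = self.tc + new_tokens
            self.tc = min(self.cbs, filled)
            spill = filled - self.tc       # tokens beyond CBS spill into E
            self.te = min(self.ebs, self.te + spill)

        def mark(self, packet_bytes):
            """Return 'green', 'yellow', or 'red' for a packet of this size."""
            self._refill()
            if self.tc >= packet_bytes:
                self.tc -= packet_bytes
                return "green"             # CIR-conformant: process per SLA
            if self.te >= packet_bytes:
                self.te -= packet_bytes
                return "yellow"            # discard candidate under congestion
            return "red"                   # discard immediately

The two-rate variant (trTCM) differs mainly in that the second bucket is refilled
independently at the higher rate rather than by spillover, so a packet is tested
against the two rates separately.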
Virtualization Security
Thomas W. Shinder, ... Debra Littlejohn Shinder, in Windows Server 2012 Security
from End to Edge and Beyond, 2013
Networking Security
Similar to the compute security considerations, many of the same principles that
you use to secure the traditional datacenter network can be used when securing
the virtualization infrastructure in your cloud. Key considerations when designing
network security in your private cloud include:
▪ Preventing one tenant from using up all bandwidth on the shared network
connection
▪ Securing the different infrastructure traffic profiles
▪ Protecting against common network attacks such as ARP spoofing and rogue
DHCP servers
▪ Enabling network IDS/IPS for the virtual switch
It is critical that tenant and infrastructure traffic are isolated from one another.
No tenant should ever be able to connect to a host node in the Hyper-V cluster
that forms the basis of the cloud infrastructure. When we speak of infrastructure
traffic, we are referring specifically to cluster/CSV traffic, Live Migration traffic,
management traffic, and storage traffic. There are several approaches you can take
to isolating these various traffic profiles:
▪ Use the Windows Server 2008 R2 approach, where each traffic profile has a
physical NIC dedicated to it. The problem with this approach is that it consumes
a lot of PCI slots and complicates the networking in terms of cabling, switch port
consumption, and switch port configuration. In general, we do not recommend
this approach when securing the virtualization infrastructure for a Windows
Server 2012-based cloud.
▪ Use two separate networks—one for the infrastructure traffic and one for the
tenant traffic. For example, you can create one NIC team for the infrastructure
traffic and one NIC team for the tenant traffic. You can then place each of these
teams on different VLANs. The infrastructure NIC team can then handle all the
infrastructure traffic profiles and the tenant team handles all of the traffic to
and from the tenants. This gives us the critical isolation we require between
the infrastructure and tenant networks. You can take advantage of Windows
QoS to make sure each of the infrastructure traffic profiles gets the bandwidth
it requires.
▪ Use a single network and run both tenant and infrastructure traffic through
the Hyper-V virtual switch. In this network security design pattern, you have
simplified the physical port configuration and the cabling significantly, since
you are dealing with a single NIC team for all traffic profiles. In this case
you take advantage of Port ACLs, 802.1q VLAN tagging, Private VLANs, and
Hyper-V QoS to make sure that all traffic profiles are isolated from each other
and have the bandwidth allotment they require.
Tenants need to be protected from each other. The reason for this is that, in the
best of all possible worlds, the cloud infrastructure administrators and the cloud
service provider (which would be corporate IT in the example of the private cloud)
supply only the infrastructure on which users can deploy their services (at least in
the example of Infrastructure as a Service). In that case, you provide the consumers
of your cloud service with the virtual machines they need to stand up their services,
but what they do with those services is up to them. If they do not want to deploy
security best practices or do not want to update their machines with monthly
security updates, then that is up to the consumer of the service. What is not up to
the consumer is
making sure that rogue or compromised virtual machines cannot compromise other
tenants or the cloud infrastructure. In Windows Server 2012 Hyper-V, you can use
port ACLs and Hyper-V QoS to make sure that tenants are not able to communicate
with one another or the infrastructure and apply QoS policies to make sure that no
tenant is able to execute a network flood-based denial of service attack. In addition,
you might consider using IPsec to isolate the tenants from each other or from the
infrastructure network—in which case you can take advantage of the new Windows
Server 2012 IPsec Task Offload feature (IPsecTO). This enables the virtual machines
to offload IPsec processing from the main processor and put that processing on to
a NIC that can perform this offload function.
In addition, you might want to enable more sophisticated firewalling on the Hyper-V
virtual switch than just port ACLs. In that case, you can introduce third-party
add-ins that provide this functionality. We imagine that in most cases virtualization
infrastructure admins for private clouds will want to introduce these virtual firewalls
and network security management devices to make sure that tenants are protected
from each other and that the infrastructure is protected from the tenants.
Securing the various forms of infrastructure traffic is important. Consider the traffic
profiles:
You do want to be able to secure the tenants from common network attacks such
as ARP spoofing and rogue DHCP servers. As discussed earlier in this chapter, you
can do that with the new Windows Server 2012 Hyper-V ARP spoof protection and
DHCP authorization features.
Finally, you will want to be able to make sure that you can deploy the same network
security and analysis tools on the Hyper-V virtual network that you deploy on your
physical networks. This means that you will want to be able to hook up IDS/IPS
systems, sophisticated bandwidth management and control systems, and other
network systems that need visibility into all the traffic traversing the Hyper-V virtual
switch. You can do this by enabling the port mirroring feature now available in the
Windows Server 2012 Hyper-V virtual switch.
For VoIP, IEEE 802.16-2009 [4] defines three attractive QoS classes, the use of which
depends on the codec. For CBR voice codecs, the first choice should be the
Unsolicited Grant Service (UGS). UGS is designed for services that generate CBR
traffic. This is the case with simple VoIP codecs that do not support silence
suppression and do not employ a layered structure to dynamically scale VoIP quality.
UGS assures that fixed-size grants are offered to the traffic flow so that its real-time
needs are met. The mandatory QoS parameters involved in UGS are minimum
reserved traffic rate and maximum latency.
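As a rough illustration of how such a fixed grant would be sized (our own arithmetic
sketch; the codec rate, packetization interval, and header overhead are assumed
values, not figures from the standard):

    def ugs_grant_bytes(codec_rate_bps, packet_interval_s, overhead_bytes):
        """Fixed grant size needed per interval for a CBR voice flow."""
        payload = codec_rate_bps * packet_interval_s / 8  # media bytes per packet
        return payload + overhead_bytes

    # A 64 kbit/s codec with 20 ms packetization and 40 bytes of
    # RTP/UDP/IP headers needs 160 + 40 = 200 bytes per grant.
    print(ugs_grant_bytes(64_000, 0.020, 40))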
The Real-Time Polling Service (rtPS) class provides QoS assurance to real-time network
services generating variable size packets on a periodic basis, while requiring strict
data rate and delay levels. In rtPS, the WiMAX BS can use unicast polling so
that mobile hosts can request bandwidth. Latency requirements are met when the
provided unicast polling opportunities are frequent enough. The rtPS service class
is more demanding in terms of request overhead when compared to UGS, but is
more efficient for variable size packet flows. The Extended Real-Time Polling Service
(ertPS) combines the advantages of UGS and rtPS. This QoS service class enables the
accommodation of packet flows whose bandwidth requirements vary with time. The
ertPS QoS class parameters include maximum latency, tolerated jitter, and minimum
and maximum reserved traffic rate.
It is important to keep in mind that these QoS classes can assure the required QoS
levels only over the WiMAX link, not end-to-end. For example, maximum latency
here refers to the period between the time that a packet is received by the
convergence sublayer (WiMAX MAC) and the time the packet is handed over to the
PHY layer for transmission.
However, if the ingress PE imposes a single label on an incoming packet (as can
happen in some applications), the penultimate hop router pops off the label and
forwards the IP packet without any label to the PE. In the absence of a label,
the PE router has no knowledge of how the packet was classified by the service
provider during transit (e.g., was it marked as out-of-profile traffic?). The PE can
only make decisions based on the IPP/DSCP of the IP packet, which in reality identifies
the packet classification per the enterprise’s QoS policy. This might not always be
desirable, in which case an option exists to let the PE advertise an explicit NULL to
the penultimate-hop P (PHP) router by configuring the following command on its
interface toward the P router:
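    mpls ldp explicit-null

(shown here in a representative Cisco IOS form; the exact syntax and scope may vary
by platform and software release)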
This process can also be used in a managed CE setup, where the CE imposes an
MPLS explicit NULL label on the packet being transmitted to the PE and marks the
EXP bits per SP classification policy. Therefore, the PE router classifies the packet
simply by using this label. Note that this is a Cisco-specific feature on the CE and
PE. It is configured on the CE router via the same command:
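    mpls ldp explicit-null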
on the egress interface toward the PE. This model is suitable when the SP provides
a managed CE service to an enterprise. Please note that, though traffic from CE to
PE carries the explicit NULL label, there is no label on the packets from the egress
PE to CE. This is illustrated in Figure 2.35.
A general approach is to run such models periodically, based on changes in the traffic
profiles or changes in the service profiles, so that the network is re-engineered for
the new traffic. A second option is to use the model as a change model. What this
means is that traffic demands that are working well are untouched, and the model is
used only for traffic demands that have changed significantly. In this case, capacity
would be replaced by residual capacity, and the optimization would apply only to the
traffic groups for which new optimal paths are to be obtained.
Finally, we point out that there is a general misconception that ECMP (Equal-Cost
Multipath) or multipath routing is better to have in a network because it allows load
balancing for traffic engineering. In a recent work [501], it was shown that for large
networks, at a particular instant, it is actually sufficient for almost all demand pairs
to use single-path routing to achieve optimal routing. This result is counterintuitive,
but it has been shown to hold for large networks where the number of links is on the
order of the number of nodes, which is typical of most practical networks.
First is the capability to have flexible framing. This is needed to optimally use the
available airtime on the wireless link by allowing a subscriber or base station to adapt
to changing conditions on the wireless link. For example, the relative portion of a
frame devoted to uplink/downlink transmission can be dynamically changed from
frame to frame depending on the traffic need. The relative portion of control/data in
sub-frames can also be dynamically changed depending on the network load. This
can make the use of the wireless spectrum more efficient.
Third is the capability for frame packing and fragmentation. These are effectively QoS
techniques because they allow WiMAX systems to pack many small-sized packets
into a frame, or to break up a large packet into multiple frames. This helps to
prevent a high-priority packet from having to wait a long time for the completion of
transmission of a large low-priority packet. This also enables a more effective use of
the varying bandwidth of the wireless link. For example, when the link between the
user and the BS is in good condition, a large volume of data (in the form of packed
frames) can be rapidly exchanged. When the link is in poor condition, packets may
be fragmented and transmitted to meet some minimum bandwidth goals, yielding
the wireless medium to other users whose links to the BS are in good condition.
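A small Python sketch of this packing-and-fragmentation idea (our own illustration
of the concept, not the 802.16 MAC encoding): whole queued packets are
concatenated into a frame until its capacity is reached, and a packet that does not
fit is split so the remainder waits for the next frame.

    from collections import deque

    def fill_frame(queue, frame_capacity):
        """Pack whole packets into one frame; fragment the packet that
        does not fit. `queue` holds packet sizes in bytes; returns the
        list of (size, is_fragment) pieces placed into this frame."""
        frame, free = [], frame_capacity
        while queue and free > 0:
            size = queue[0]
            if size <= free:                 # whole packet fits: pack it
                frame.append((size, False))
                free -= size
                queue.popleft()
            else:                            # fragment: send what fits now
                frame.append((free, True))
                queue[0] = size - free       # remainder waits for next frame
                free = 0
        return frame

    q = deque([300, 120, 1400])
    print(fill_frame(q, 1000))   # [(300, False), (120, False), (580, True)]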
In summary, the WiMAX standard provides the following key QoS features:
QoS services
With these key QoS features, the IEEE 802.16 standard defines four services with
different performance requirements. Each service is associated with a set of
performance parameters. The four types of services are:
These services are mandatory, in the sense that any higher-layer application will have
to be mapped to a WiMAX connection that belongs to one of the above four services.
Thus, any standards-compliant WiMAX system must implement the above services.
Unsolicited Grant Service (UGS)
This service supports real-time data streams consisting of fixed-size data packets
transmitted at periodic intervals, such as voice-over-IP without silence suppression.
It is analogous to ATM's CBR service. The mandatory QoS service-flow parameters
for this service are:
• Maximum Latency
• Request/Transmission Policy
Real-Time Polling Service (rtPS)
This service supports real-time data streams consisting of variable-sized data packets
that are transmitted at fixed intervals, such as MPEG video. It is analogous to ATM's
rt-VBR service. The mandatory QoS service-flow parameters for this service are:
• Maximum Latency
• Request/Transmission Policy
For this type of service, the BS must provide periodic unicast request opportunities
that meet the real-time needs of the flow. In these request opportunities, the
subscriber station can specify the size of the desired grant. The request overhead
for this service is more than that of UGS, but it supports a variable grant size,
thus improving its transmission efficiency. The Request/Transmission Policy is
set such that the subscriber station is prohibited from using any contention request
opportunities for such connections.
Non-Real-Time Polling Service (nrtPS)
This service supports delay-tolerant data streams consisting of variable-sized data
packets for which a minimum data rate is required, such as FTP. The mandatory QoS
service-flow parameters for this service are:
• Traffic Priority
• Request/Transmission Policy
Best Effort (BE) Service
This service is designed to provide efficient transport for best-effort traffic with no
explicit QoS guarantees. The mandatory QoS service-flow parameters for this
service are:
• Traffic Priority
• Request/Transmission Policy
These applications do not require any minimum service level and, therefore, can
be handled on a “space available” basis. These applications share the remaining
bandwidth after allocation to the previous three services has been completed. The
Request/Transmission Policy is set such that the subscriber station is allowed to use
contention request opportunities.
In summary, the four service classes, their target applications, and the performance
parameters are described in Table 13-4.
First, the traffic scheduler at the BS decides the allocation of the physical slots to
each subscriber station on a frame-by-frame basis. While making allocations, the
scheduler must account for the following:
Second, although requests to the BS are made on a per-flow basis, the grants by the
BS are issued on a per-subscriber station basis, without distinguishing individual
flows at the subscriber station. This means that there needs to be a local scheduler at
each subscriber station that allocates the bandwidth grants from the BS among the
competing flows. This model was adopted in 802.16 for two reasons. (1) Granting
on a per-subscriber station basis reduces the amount of state information that the
BS must maintain. (2) Since the local and link conditions can change dynamically,
having per-subscriber station grants gives the subscriber station scheduler the
flexibility to assign resources to more important new flows.
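A minimal sketch of such a local scheduler (our own illustration; the flow identifiers
and the strict-priority discipline are assumptions, since the standard leaves the local
scheduling policy to the implementation):

    def allocate_grant(grant_bytes, flows):
        """Distribute one per-subscriber-station grant among competing flows.
        `flows` is a list of (flow_id, priority, backlog_bytes) tuples.
        Returns {flow_id: bytes_allocated}, serving higher priority first."""
        remaining = grant_bytes
        allocation = {}
        for flow_id, priority, backlog in sorted(flows, key=lambda f: -f[1]):
            granted = min(backlog, remaining)
            allocation[flow_id] = granted
            remaining -= granted
        return allocation

    # A 1500-byte grant split between a voice flow and a bulk-data flow:
    print(allocate_grant(1500, [("voice", 7, 400), ("data", 1, 3000)]))
    # {'voice': 400, 'data': 1100}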
1 Introduction
Network traffic simulators aim to imitate as faithfully as possible the many different
properties of real network traffic. This enables modeling network traffic accurately
and, therefore, makes it possible to study traffic that would otherwise be impossible
to obtain in real network environments. In addition, it might
also be useful to first design and study networks on a simulator in order to check
if the setup is correct and works well under the requirements, before setting up the
actual physical infrastructure. However, simulating networks and network traffic is
a complicated task because it requires modeling network components and equipment,
as well as the links connecting them. Additionally, modeling networked applications
is a highly complex task, to say the least, since the user and protocols influence
the behavior of the applications. For example, studies [1] have focused on matching
the best distributions and inherent parameters to model the packet lengths, the bit
count per time unit, and the interarrival times of network traffic at both sources and
aggregation points for web browsing, streaming, instant messaging, Voice over IP
(VoIP), and Peer-to-Peer (P2P) traffic profiles, for instance. The generation of network
traffic is directly dependent on the user behavior and interaction with the computer
and installed applications. On the other hand, simulating network protocols may
be a less arduous task since it is only required to implement them according to the
specifications.
Network traffic simulation is useful for both researchers and industry practitioners.
Several tools provide the capability of simulating network environments together
with live interactions within the simulation between network nodes and services or
applications installed atop. Such tools allow defining the duration of simulations and
feature monitoring functionalities to watch the simulation workflow evolve through
time according to the defined parameters. This is helpful for solving optimization
problems, which seek suitable inputs for some desired outputs, as well as simulation
problems, which seek the outputs produced by known inputs. However, how realistic
the simulated network traffic is may be somewhat questionable, because modeling all
the network details cannot be done either completely or perfectly. In this respect,
making simulation results more reassuring requires assessing whether the properties
embedded within simulated network traffic flows are comparable to and compliant
with the ones observed on real computer networks.
Self-similarity is known to be a statistical property of the bit count per time unit
of network traffic in network aggregation points of local area network (LAN) and
wide area network (WAN) environments. Self-similarity implies that the network
traffic is characterized by fractality and by the well-known burstiness phenomenon.
The former means that an object appears the same regardless of the scale, while
the latter means that network traffic volume activity may be composed of lengthy
periods of data transmission followed by periods of weak activity. These two network
traits imply that network traffic spikes ride on bigger waves that, in turn, ride on
even larger swells [2]. The knowledge of self-similarity is crucial in order to efficiently
design routers in terms of both hardware and software, notably with respect to the
lengths of packet queues. Many methods available in the literature allow generation
of sequences of values with the self-similar property embedded by default, the
aggregation of network traffic mentioned above being one of those methods. In
turn, the intensity of the self-similar effect is usually measured by means of the
well-known Hurst parameter, for which several estimators also exist in the literature.
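As an illustration of one such estimator, here is a minimal Python sketch of the
aggregated-variance method (our own implementation of the standard technique):
for a self-similar series, the variance of the m-aggregated series scales as m^(2H-2),
so H can be read off a log-log regression.

    import numpy as np

    def hurst_aggregated_variance(x, block_sizes=(2, 4, 8, 16, 32, 64, 128)):
        """Estimate the Hurst parameter of a series (e.g., bit counts per
        time unit): the slope of log Var(X^(m)) vs. log m equals 2H - 2."""
        x = np.asarray(x, dtype=float)
        log_m, log_v = [], []
        for m in block_sizes:
            n_blocks = len(x) // m
            if n_blocks < 2:
                continue
            # Average the series over non-overlapping blocks of size m.
            agg = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            log_m.append(np.log(m))
            log_v.append(np.log(agg.var()))
        slope = np.polyfit(log_m, log_v, 1)[0]
        return 1.0 + slope / 2.0

    # White noise has H ~ 0.5; self-similar traffic typically shows H > 0.5.
    print(hurst_aggregated_variance(np.random.randn(10_000)))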
The remainder of this chapter is structured as follows. Section 2 explains the
theoretical background of self-similarity and the Hurst parameter. Section 3 then
describes the self-similar phenomena observed in network traffic and discusses
this property thoroughly. Section 4 demonstrates by means of empirical analyses
whether the self-similar effect is noticed on traffic simulated by popular tools. Finally,
Section 5 concludes the chapter.
Technical Challenges
XiPeng Xiao, in Technical, Commercial and Regulatory Challenges of QoS, 2008
Inter-Vendor Interoperability Challenge
People who have been involved in the design of a router or switch know that the
design and implementation of the QoS part (or, to be exact, the traffic management
part) is one of the most difficult parts. One major reason is that the lack of traffic
management deployment in the field causes a lack of feedback to the design team.
As a result, if different design team members have different ideas, there is no
authority to arbitrate. Consequently, implementations of the same traffic
management function at different vendors, or in different products of the same
vendor, can be slightly different. Sometimes, even the implementations in different
parts (for example, a POS line card and an Ethernet line card) of the same system
can differ. Below we give a few examples to show the challenge created by such
differences. These examples are well known among the developer community but
are not so well known among the user community.
To a certain extent, this testifies to the lack of serious deployment of RED and WRED.
Otherwise, there would be follow-up RFCs giving more specific instructions.