
CEC354 – SOFTWARE DEFINED NETWORKS

UNIT – 1 : SDN: BACKGROUND AND DATA PLANE


1.1 Evolving Network Requirements
A number of trends are driving network providers and users to reevaluate traditional approaches
to network architecture. These trends can be grouped under the categories of demand, supply,
and traffic patterns.
1.1.1 Demand Is Increasing
A number of trends are increasing the load on enterprise networks, the Internet, and other
internets. Of particular note are the following:
(1) Cloud computing: There has been a dramatic shift by enterprises to both public and
private cloud services.
(2) Big data: The processing of huge data sets requires massive parallel processing on
thousands of servers, all of which require a degree of interconnection to each other.
Therefore, there is a large and constantly growing demand for network capacity within the
data center.
(3) Mobile traffic: Employees are increasingly accessing enterprise network resources via
mobile personal devices, such as smartphones, tablets, and notebooks. These devices
support sophisticated apps that can consume and generate image and video traffic, placing
new burdens on the enterprise network.
(4) The Internet of Things (IoT): Most “things” in the IoT generate modest traffic, although
there are exceptions, such as surveillance video cameras. But the sheer number of such
devices for some enterprises results in a significant load on the enterprise network.
1.1.2 Supply Is Increasing
As the demand on networks is rising, so is the capacity of network technologies to absorb rising
loads. The increase in the capacity of the network transmission technologies has been matched
by an increase in the performance of network devices, such as LAN switches, routers, firewalls,
intrusion detection system/intrusion prevention systems (IDS/IPS), and network monitoring
and management systems. Year by year, these devices have larger, faster memories, enabling
greater buffer capacity and faster buffer access, as well as faster processor speeds.
1.1.3 Traditional Network Architectures are Inadequate
❖ The traditional internetworking approach is based on the TCP/IP protocol architecture.
Three noteworthy characteristics of this approach are as follows:
(1) Two-level end system addressing: The protocol architecture is built around the TCP and
IP protocols and consists of five layers: physical, data link, network/Internet (usually
IP), transport (usually TCP or UDP), and application.
(2) Routing based on destination
(3) Distributed, autonomous control
❖ The traditional architecture relies heavily on the network interface identity. At the physical
layer of the TCP/IP model, devices attached to networks are identified by hardware-based
identifiers, such as Ethernet MAC addresses.
❖ At the internetworking level, including both the Internet and private internets, the
architecture is a network of networks. Each attached device has a physical layer identifier
recognized within its immediate network and a logical network identifier, its IP address,
which provides global visibility.
❖ The design of TCP/IP uses this addressing scheme to support the networking of
autonomous networks, with distributed control. This architecture provides a high level of
resilience and scales well in terms of adding new networks. Using IP and distributed
routing protocols, routes can be discovered and used throughout an internet.
❖ Using transport-level protocols such as TCP, distributed and decentralized algorithms can
be implemented to respond to congestion.
❖ Traditionally, routing was based on each packet’s destination address. In this datagram
approach, successive packets between a source and destination may follow different routes
through the internet, as routers constantly seek to find the minimum-delay path for each
individual packet.
❖ More recently, to satisfy QoS requirements, packets are often treated in terms of flows of
packets. Packets associated with a given flow have defined QoS characteristics, which
affect the routing for the entire flow.
❖ Datagram: A packet that is treated independently of other packets in packet switching. A
datagram carries information sufficient for routing from the source to the destination
without the necessity of establishing a logical connection between the endpoints.
❖ Packet: A unit of data sent across a network. A packet is a group of bits that includes data
plus protocol control information. The term generally applies to protocol data units at the
network layer.
❖ Flow: A sequence of packets between a source and destination that are recognized by the
network as related and are treated in a uniform fashion.
❖ Packet switching: A method of transmitting messages through a communications network,
in which long messages are subdivided into short packets. Each packet is passed from
source to destination through intermediate nodes. At each node, the entire packet is
received, stored briefly, and then forwarded to the next node.
❖ Based on these characteristics, the Open Networking Foundation (ONF) cites four general
limitations of traditional network architectures:
(1) Static, complex architecture: To respond to demands such as differing levels of
QoS, high and fluctuating traffic volumes, and security requirements, networking
technology has grown more complex and difficult to manage.
(2) Inconsistent policies: To implement a network-wide security policy, staff may have
to make configuration changes to thousands of devices and mechanisms.
(3) Inability to scale: Demands on networks are growing rapidly, both in volume and
variety. Adding more switches and transmission capacity, involving multiple vendor
equipment, is difficult because of the complex, static nature of the network.
(4) Vendor dependence: Given the nature of today’s traffic demands on networks,
enterprises and carriers need to deploy new capabilities and services rapidly in
response to changing business needs and user demands.
1.2 The SDN Approach
1.2.1 Requirements
The Open Data Center Alliance (ODCA) provides a useful, concise list of requirements, which
include the following:
(1) Adaptability: Networks must adjust and respond dynamically, based on application needs,
business policy, and network conditions.
(2) Automation: Policy changes must be automatically propagated so that manual work and
errors can be reduced.
(3) Maintainability: Introduction of new features and capabilities (software upgrades,
patches) must be seamless with minimal disruption of operations.
(4) Model management: Network management software must allow management of the
network at a model level, rather than implementing conceptual changes by reconfiguring
individual network elements.
(5) Mobility: Control functionality must accommodate mobility, including mobile user
devices and virtual servers.
(6) Integrated security: Network applications must integrate seamless security as a core
service instead of as an add-on solution.
(7) On-demand scaling: Implementations must have the ability to scale up or scale down the
network and its services to support on-demand requests.
1.2.2 SDN Architecture
❖ The central concept behind SDN is to enable developers and network managers to have
the same type of control over network equipment that they have over servers and other
computing resources.
❖ The SDN approach splits the switching function between a data plane and a control plane
that are on separate devices as shown in Figure 1.1.

Figure 1.1 Control and Data Planes


❖ The data plane is simply responsible for forwarding packets, whereas the control plane
provides the “intelligence” in designing routes, setting priority and routing policy
parameters to meet QoS and QoE requirements and to cope with the shifting traffic
patterns.
❖ Open interfaces are defined so that the switching hardware presents a uniform interface
regardless of the details of internal implementation.
❖ Similarly, open interfaces are defined to enable networking applications to communicate
with the SDN controllers.
❖ Figure 1.2 shows the SDN Architecture.

Figure 1.2 SDN Architecture


SDN Components:
(1) Data Plane:
➢ The data plane consists of physical switches and virtual switches. In both cases, the
switches are responsible for forwarding packets.
➢ The internal implementation of buffers, priority parameters, and other data structures
related to forwarding can be vendor dependent.
➢ However, each switch must implement a model, or abstraction, of packet forwarding
that is uniform and open to the SDN controllers.
➢ This model is defined in terms of an open Application Programming Interface (API)
between the control plane and the data plane.
(2) Southbound API:
➢ A Southbound API in SDN enables communication between the SDN controller and
network devices (switches, routers) for configuration and management.
➢ The most prominent example of such an open API is OpenFlow. The OpenFlow
specification defines both a protocol between the control and data planes and an API
by which the control plane can invoke the OpenFlow protocol.
➢ An API is a language and message format used by an application program to communicate
with the operating system or some other control program, such as a database management
system (DBMS) or communications protocol.
➢ APIs are implemented by writing function calls in the program, which provide the
linkage to the required subroutine for execution. An open or standardized API can
ensure the portability of the application code and the vendor independence of the
called service.
(3) Control Plane:
➢ The Control Plane in SDN is responsible for making network decisions and managing
traffic flow. It is centralized in the SDN controller, which communicates with network
devices using Southbound APIs.
➢ SDN controllers can be implemented directly on a server or on a virtual server.
OpenFlow or some other open API is used to control the switches in the data plane.
➢ In addition, controllers use information about capacity and demand obtained from the
networking equipment through which the traffic flows.
(4) Northbound API:
➢ A Northbound Interface in SDN allows communication between the SDN controller
and applications or services, enabling network automation and programmability. It
helps applications request network resources and enforce policies.
➢ It allows developers and network managers to deploy a wide range of off-the-shelf
and custom-built network applications, many of which were not feasible before the
advent of SDN.
➢ As yet there is no standardized northbound API, nor is there consensus on an open
northbound API. A number of vendors offer a REpresentational State Transfer (REST)
based API to provide a programmable interface to their SDN controller; a sketch of
such a call follows this list.
➢ Also envisioned but not yet defined are horizontal APIs (east/westbound), which
would enable communication and cooperation among groups or federations of
controllers to synchronize state for high availability.
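Because, as noted above, no northbound API is standardized, the short Python sketch below is only illustrative: the controller address, the /flows path, and the JSON body are assumed placeholders rather than any vendor's actual interface, though the pattern resembles the REST APIs exposed by controllers such as OpenDaylight, ONOS, and Ryu.

import requests  # third-party HTTP client

CONTROLLER = "http://127.0.0.1:8080"  # assumed controller address and port

def push_flow_policy(switch_id: str, dst_ip: str, out_port: int) -> None:
    """Ask the controller (via a hypothetical REST endpoint) to install a rule."""
    policy = {
        "switch": switch_id,
        "match": {"ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": 100,
    }
    # POST the policy; the /flows resource is an assumption for illustration
    resp = requests.post(f"{CONTROLLER}/flows", json=policy, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    # Request that switch "openflow:1" send traffic for 10.0.0.2 out port 2
    push_flow_policy("openflow:1", "10.0.0.2", 2)

The key point is that the application expresses intent (a match and an action) as data, and the controller translates that intent into southbound messages such as OpenFlow flow modifications.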
(5) Application Plane
➢ The Application Plane in SDN consists of network applications and services that
define policies and manage traffic. It communicates with the SDN controller via
Northbound APIs for network automation and optimization.
➢ At the application plane are a variety of applications that interact with SDN
controllers. SDN applications are programs that may use an abstract view of the
network for their decision-making goals.
➢ Examples of applications are energy-efficient networking, security monitoring, access
control, and network management.
1.2.3 Characteristics of SDN
❖ The control plane is separated from the data plane. Data plane devices become simple
packet-forwarding devices.
❖ The control plane is implemented in a centralized controller or set of coordinated
centralized controllers. The SDN controller has a centralized view of the network or
networks under its control. The controller is portable software that can run on commodity
servers and is capable of programming the forwarding devices based on a centralized view
of the network.
❖ Open interfaces are defined between the devices in the control plane (controllers) and those
in the data plane.
❖ The network is programmable by applications running on top of the SDN controllers. The
SDN controllers present an abstract view of network resources to the applications.
1.2.4 Difference Between Traditional Networking and SDN
(1) Traditional networking is usually hardware based, whereas SDN is software based.
(2) Traditional networking has a distributed control plane, whereas SDN has a logically
centralized control plane.
(3) Traditional networking works using protocols, whereas SDN uses APIs to configure the
network as needed.
(4) Traditional networks are static and inflexible, whereas SDN networks are programmable
at deployment time as well as at a later stage, based on changes in requirements.
(5) Traditional networks are of little use for new business ventures, as they possess little
agility and flexibility, whereas SDN helps new business ventures through flexibility,
agility, and virtualization.
(6) With a traditional network, the physical location of the control plane hinders an IT
administrator’s ability to control the traffic flow, whereas when SDN virtualizes the
entire network, it generates an abstract copy of the physical network and lets
administrators provision resources from a centralized location.
(7) In traditional network architecture, the control plane and data plane are integrated,
whereas SDN decouples the control plane from the data plane.

1.3 SDN and NFV related Standards


1.4 SDN Data Plane
❖ The SDN data plane (also referred to as the infrastructure layer) is where network forwarding
devices perform the transport and processing of data according to decisions made by the
SDN control plane.
❖ The important characteristic of the network devices in an SDN network is that these
devices perform a simple forwarding function, without embedded software to make
autonomous decisions.
1.4.1 Data Plane Functions
❖ Figure 1.3 illustrates the functions performed by the data plane network devices (also
called data plane network elements or switches).
Figure 1.3 Data Plane Network Device
❖ The network device in Figure 1.3 is shown with three I/O ports: one providing control
communication with an SDN controller, and two for the input and output of data packets.
❖ The network device may have multiple ports to communicate with multiple SDN
controllers, and may have more than two I/O ports for packet flows into and out of the
device.
❖ The principal functions of the network device are the following:
(1) Control support function: Interacts with the SDN control layer to support
programmability via resource-control interfaces. The switch communicates with the
controller and the controller manages the switch via the OpenFlow switch protocol.
(2) Data forwarding function: Accepts incoming data flows from other network devices
and end systems and forwards them along the data forwarding paths that have been
computed and established according to the rules defined by the SDN applications.
❖ These forwarding rules used by the network device are embodied in forwarding tables
that indicate, for given categories of packets, what the next hop in the route should be
(a toy lookup model is sketched at the end of this subsection).
❖ In addition to simple forwarding of a packet, the network device can alter the packet
header before forwarding, or discard the packet.
❖ As shown, arriving packets may be placed in an input queue, awaiting processing by
the network device, and forwarded packets are generally placed in an output queue,
awaiting transmission.
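As a rough illustration of the forwarding table idea, the following toy Python model maps destination prefixes to next-hop ports using longest-prefix match. Real devices implement this lookup in hardware (for example, in TCAMs), so this is a conceptual sketch only, with made-up prefixes and port numbers.

import ipaddress

# Toy forwarding table: destination prefix -> output port (illustrative values)
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): 1,
    ipaddress.ip_network("10.1.0.0/16"): 2,
    ipaddress.ip_network("0.0.0.0/0"): 3,  # default route
}

def next_hop_port(dst: str) -> int:
    """Return the output port of the longest matching prefix for dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return forwarding_table[best]

print(next_hop_port("10.1.2.3"))  # -> 2: the /16 entry beats the /8 and the default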
1.4.2 Data plane Protocols
❖ Data packet flows consist of streams of IP packets.
❖ It may be necessary for the forwarding table to define entries based on fields in upper-level
protocol headers, such as TCP, UDP, or some other transport or application protocol.
❖ The network device examines the IP header and possibly other headers in each packet and
makes a forwarding decision.
❖ The other important flow of traffic is via the southbound application programming
interface (API), consisting of OpenFlow protocol data units (PDUs) or some similar
southbound API protocol traffic.
1.4.3 OpenFlow Logical Network Device
❖ OpenFlow is a Layer 2 communications protocol that gives access to the forwarding
plane of a network switch or router over the network.
❖ It is the first standard communications interface defined between the control and
forwarding layers of an SDN architecture.
❖ It allows direct access to and manipulation of the forwarding plane of network devices
such as switches and routers, both physical and virtual (hypervisor-based).
❖ It is an open interface for remotely controlling the forwarding tables in network switches,
routers, and access points.
❖ OpenFlow is defined in the OpenFlow Switch Specification, published by the Open
Networking Foundation (ONF) and it is the most widely used implementation of the
SDN data plane.
❖ It is both a specification of the logical structure of data plane functionality and a protocol
between SDN controllers and network devices.
❖ Figure 1.4 indicates the main elements of an OpenFlow environment.

Figure 1.4 OpenFlow Switch Context


❖ An OpenFlow environment consists of the following:
(1) SDN Controller that includes OpenFlow software
(2) OpenFlow Switch
(3) End Systems
1.4.4 OpenFlow Switch
❖ An OpenFlow switch consists of one or more flow tables, a group table, and a meter table.
It includes a data path and a control channel. A single switch can be managed by one or
more controllers.
❖ Figure 1.5 shows the main components of OpenFlow Switch that includes:
(1) OpenFlow Channel
(2) Flow table
(3) Group table
(4) Meter table
(5) Port

Figure 1.5 OpenFlow Switch


1.4.4.1 OpenFlow Channel
❖ OpenFlow channel is an interface between an OpenFlow switch and an OpenFlow
controller, used by the controller to manage the switch.
❖ An SDN controller communicates with OpenFlow compatible switches using the
OpenFlow protocol.
1.4.4.2 OpenFlow Port
❖ Ports connect the switch to other OpenFlow switches and to end-user devices that are the
sources and destinations of packet flows.
❖ OpenFlow port: Where packets enter and exit the OpenFlow pipeline.
❖ A packet can be forwarded from one OpenFlow switch to another OpenFlow switch only
via an output OpenFlow port on the first switch and an ingress OpenFlow port on the
second switch.
❖ OpenFlow defines three types of ports:
(1) Physical port: Corresponds to a hardware interface of the switch.
(2) Logical port: Does not correspond directly to a hardware interface of the switch.
Logical ports are higher-level abstractions that may be defined in the switch using
non-OpenFlow methods. Logical ports may include packet encapsulation and may
map to various physical ports. The processing done by the logical port is
implementation dependent and must be transparent to OpenFlow processing, and
those ports must interact with OpenFlow processing like OpenFlow physical ports.
(3) Reserved port: Defined by the OpenFlow specification. It specifies generic
forwarding actions such as sending to and receiving from the controller, flooding, or
forwarding using non-OpenFlow methods, such as “normal” switch processing.
1.4.4.3 Group Table
❖ A flow table may direct a flow to a group table, which may trigger a variety of actions that
affect one or more flows.
❖ Group table is used for special actions like multicast, broadcast, load balancing and
others.
❖ The group table and group actions enable OpenFlow to represent a set of ports as a single
entity for forwarding packets.
❖ Each group table consists of a number of rows, called group entries, consisting of four
components:
(1) Group identifier: A 32-bit unsigned integer uniquely identifying the group. A group
is defined as an entry in the group table.
(2) Group type: To determine group semantics, as explained subsequently.
(3) Counters: Updated when packets are processed by a group.
(4) Action buckets: An ordered list of action buckets, where each action bucket contains
a set of actions to execute and associated parameters.
❖ A group is designated as one of the four types depicted in Figure 1.6: all, select, fast
failover, and indirect. A sketch of their semantics follows the figure.
(1) The all type executes all the buckets in the group. Thus, each arriving packet is
effectively cloned. Typically, each bucket will designate a different output port, so that
the incoming packet is then transmitted on multiple output ports.
(2) The select type executes one bucket in the group, based on a switch computed
selection algorithm. The selection algorithm should implement equal load sharing or,
optionally, load sharing based on bucket weights assigned by the SDN controller.
(3) The fast failover type executes the first live bucket. Port liveness is managed by code
outside of the scope of OpenFlow and may have to do with routing algorithms or
congestion control mechanisms. The buckets are evaluated in order, and the first live
bucket is selected.
(4) The indirect type allows multiple packet flows to point to a common group identifier.
This type provides for more efficient management by the controller in certain
situations.

Figure 1.6 Group Types
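To make the four group types concrete, here is a small Python sketch of their bucket-selection semantics. The bucket representation (a callable per bucket) and the liveness set are simplifying assumptions for illustration, not the OpenFlow data model.

import random

def apply_group(group_type, buckets, packet, live_indices=frozenset()):
    """Return the list of packets produced by one group entry (simplified)."""
    if group_type == "all":
        # Clone the packet through every bucket (multicast/broadcast)
        return [bucket(packet) for bucket in buckets]
    if group_type == "select":
        # Switch-computed choice of a single bucket (load sharing)
        return [random.choice(buckets)(packet)]
    if group_type == "fast failover":
        # Execute the first live bucket; liveness is managed outside OpenFlow
        for i, bucket in enumerate(buckets):
            if i in live_indices:
                return [bucket(packet)]
        return []  # no live bucket: the packet is dropped
    if group_type == "indirect":
        # Single bucket shared by many flows pointing at this group
        return [buckets[0](packet)]
    raise ValueError(f"unknown group type: {group_type}")

# Example: an "all" group that floods a packet out ports 1 and 2
out = lambda port: (lambda pkt: {**pkt, "out_port": port})
print(apply_group("all", [out(1), out(2)], {"dst": "h2"}))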


1.4.4.4 Meter Table
❖ A meter table can trigger a variety of performance related (QoS) actions on a flow.
❖ It contains actions related to QoS management.
❖ The meter table is used to perform QoS operations ranging from simple rate limiting to
complex operations such as DiffServ.
❖ Within the meter table, each entry consists of three main fields, modeled in the sketch after this list:
(1) Meter identifier: A 32-bit unsigned integer uniquely identifying the meter.
(2) Meter bands: An unordered list of one or more meter bands, where each meter band
specifies the rate of the band and the way to process the packet.
(3) Counters: Updated when packets are processed by a meter. These are aggregate
counters. That is, the counters count the total traffic of all flows, and do not break the
traffic down by flow.
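The three fields of a meter entry can be modeled as below. The rate units, the band actions, and the process() logic are simplifying assumptions, meant only to show how the highest exceeded band determines what happens to a flow's packets.

from dataclasses import dataclass, field

@dataclass
class MeterBand:
    rate_kbps: int  # band rate threshold (assumed unit: kb/s)
    action: str     # assumed actions: "drop" or "dscp_remark"

@dataclass
class MeterEntry:
    meter_id: int                              # 32-bit meter identifier
    bands: list = field(default_factory=list)  # unordered list of meter bands
    packet_count: int = 0                      # aggregate counter over all flows

    def process(self, measured_kbps: int) -> str:
        """Apply the highest band whose rate the measured rate exceeds."""
        self.packet_count += 1
        hit = [b for b in self.bands if measured_kbps > b.rate_kbps]
        return max(hit, key=lambda b: b.rate_kbps).action if hit else "forward"

# Remark traffic above 1 Mb/s, drop it above 5 Mb/s
meter = MeterEntry(1, [MeterBand(1000, "dscp_remark"), MeterBand(5000, "drop")])
print(meter.process(1500))  # -> "dscp_remark"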
1.4.4.5 Flow Table
❖ The flow table is the standard table that allows the switch to forward a packet to a single port.
❖ A flow table matches incoming packets to a particular flow and specifies what functions
are to be performed on the packets.
❖ A flow is a sequence of packets traveling between a source and destination pair that share
a set of header field values.
Flow Table Structure
❖ The basic building block of the logical switch architecture is the flow table. Each packet
that enters a switch passes through one or more flow tables. Each flow table consists of a
number of rows, called entries, consisting of seven components as shown in Figure 1.7 (a)
and modeled in a sketch at the end of this list.

Figure 1.7 Open Flow Table Entry Formats


(1) Match fields: Used to select packets that match the values in the fields.
The match fields component of a table entry consists of the following required fields as
shown in Figure 1.7 (b):
(1) Ingress port: The identifier of the port on this switch on which the packet arrived.
This may be a physical port or a switch-defined virtual port. Required in ingress tables.
(2) Egress port: The identifier of the egress port, taken from the action set. Required in egress tables.
(3) Ethernet source and destination addresses: Each entry can be an exact address, a
bit masked value for which only some of the address bits are checked, or a wildcard
value (match any value).
(4) Ethernet type field: Indicates type of the Ethernet packet payload.
(5) IP: Version 4 or 6.
(6) IPv4 or IPv6 source address, and destination address: Each entry can be an exact
address, a bitmasked value, a subnet mask value, or a wildcard value.
(7) TCP source and destination ports: Exact match or wildcard value.
(8) UDP source and destination ports: Exact match or wildcard value.
(2) Priority: Relative priority of table entries. This is a 16-bit field with 0 corresponding to
the lowest priority. In principle, there could be 2^16 = 64k priority levels.
(3) Counters: Updated for matching packets. The OpenFlow specification defines a variety
of counters. Table 1.1 lists the counters that must be supported by an OpenFlow switch.

Table 1.1. Required OpenFlow Counters


(4) Instructions: The instructions component of a table entry consists of a set of instructions
that are executed if the packet matches the entry. Before describing the types of
instructions, we need to define the terms action and action set.
(a) Actions describe packet forwarding, packet modification, and group table processing
operations. The OpenFlow specification includes the following actions:
(1) Output: Forward packet to specified port. The port could be an output port to
another switch or the port to the controller. In the latter case, the packet is
encapsulated in a message to the controller.
(2) Set-Queue: Sets the queue ID for a packet. When the packet is forwarded to a
port using the output action, the queue ID determines which queue attached to
this port is used for scheduling and forwarding the packet. Forwarding behavior
is dictated by the configuration of the queue and is used to provide basic QoS
support.
(3) Group: Process packet through specified group.
(4) Push-Tag/Pop-Tag: Push or pop a tag field for a VLAN or Multiprotocol Label
Switching (MPLS) packet.
(5) Set-Field: The various Set-Field actions are identified by their field type and
modify the values of respective header fields in the packet.
(6) Change-TTL: The various Change-TTL actions modify the values of the IPv4
TTL (time to live), IPv6 hop limit, or MPLS TTL in the packet.
(7) Drop: There is no explicit action to represent drops. Instead, packets whose action
sets have no output action should be dropped.
(b) An Action set is a list of actions associated with a packet that are accumulated while
the packet is processed by each table and that are executed when the packet exits the
processing pipeline. The types of instructions can be grouped into four categories:
(1) Direct packet through pipeline: The Goto-Table instruction directs the packet
to a table farther along in the pipeline. The Meter instruction directs the packet to
a specified meter.
(2) Perform action on packet: Actions may be performed on the packet when it is
matched to a table entry. The Apply-Actions instruction applies the specified
actions immediately, without any change to the action set associated with this
packet. This instruction may be used to modify the packet between two tables in
the pipeline.
(3) Update action set: The Write-Actions instruction merges specified actions into
the current action set for this packet. The Clear-Actions instruction clears all the
actions in the action set.
(4) Update metadata: A metadata value can be associated with a packet. It is used
to carry information from one table to the next. The Write-Metadata instruction
updates an existing metadata value or creates a new value.
(5) Timeouts: Maximum amount of idle time before a flow is expired by the switch. Each
flow entry has an idle_timeout and a hard_timeout associated with it. A nonzero
hard_timeout field causes the flow entry to be removed after the given number of seconds,
regardless of how many packets it has matched. A nonzero idle_timeout field causes the
flow entry to be removed when it has matched no packets in the given number of seconds.
(6) Cookie: 64-bit opaque data value chosen by the controller. It may be used by the controller
to filter flow statistics, flow modification, and flow deletion requests; it is not used when
processing packets.
(7) Flags: Flags alter the way flow entries are managed. For example, the flag
OFPFF_SEND_FLOW_REM triggers flow removed messages for that flow entry.
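The seven components above map naturally onto a small data structure. The sketch below is an assumed Python model (field names follow this text, not any particular OpenFlow library) with a wildcard-aware match check.

from dataclasses import dataclass, field

WILDCARD = "*"  # match any value in a field

@dataclass
class FlowEntry:
    match_fields: dict = field(default_factory=dict)  # e.g. {"ipv4_dst": "10.0.0.2"}
    priority: int = 0                                 # 16-bit; 0 is lowest
    counters: dict = field(default_factory=lambda: {"packets": 0, "bytes": 0})
    instructions: list = field(default_factory=list)  # e.g. ["Goto-Table 1"]
    idle_timeout: int = 0   # remove after N seconds with no matches (0 = never)
    hard_timeout: int = 0   # remove after N seconds regardless of matches
    cookie: int = 0         # 64-bit opaque value chosen by the controller
    flags: set = field(default_factory=set)           # e.g. {"OFPFF_SEND_FLOW_REM"}

    def matches(self, packet: dict) -> bool:
        """True if every match field is a wildcard or equals the packet's value."""
        return all(v == WILDCARD or packet.get(k) == v
                   for k, v in self.match_fields.items())

Note that an entry with an empty match_fields dict matches every packet, which is exactly how a table-miss entry (priority 0) behaves.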
1.4.5 Flow Table Pipeline
❖ A switch includes one or more flow tables. If there is more than one flow table, they are
organized as a pipeline, with the tables labelled with increasing numbers starting with zero.
❖ The use of multiple tables in a pipeline, rather than a single flow table, provides the SDN
controller with considerable flexibility.
❖ The OpenFlow specification defines two stages of processing:
(1) Ingress processing: Ingress processing always happens, beginning with Table 0, and
uses the identity of the input port. Table 0 may be the only table, in which case the
ingress processing is simplified to the processing performed on that single table, and
there is no egress processing.
(2) Egress processing: Egress processing is the processing that happens after the
determination of the output port. It happens in the context of the output port. This
stage is optional. If it occurs, it may involve one or more tables. The separation of the
two stages is indicated by the numerical identifier of the first egress table. All tables
with a number lower than the first egress table must be used as ingress tables, and no
table with a number higher than or equal to the first egress table can be used as an
ingress table.
1.4.5.1 Process of Pipelining
❖ Pipeline processing always starts with ingress processing at the first flow table.
❖ The packet must be first matched against flow entries of flow Table 0.
❖ Other ingress flow tables may be used depending on the outcome of the match in the first
table. If the outcome of ingress processing is to forward the packet to an output port, the
OpenFlow switch may perform egress processing in the context of that output port.
❖ When a packet is presented to a table for matching, the input consists of the packet, the
identity of the ingress port, the associated metadata value, and the associated action set.
❖ For Table 0, the metadata value is blank and the action set is null.
❖ At each table, processing proceeds as follows (a sketch of this matching logic appears after Figure 1.8):
(1) If there is a match on one or more entries, other than the table-miss entry, the match
is defined to be with the highest priority matching entry. The priority is a component
of a table entry and is set via OpenFlow. The priority is determined by the user or
application invoking OpenFlow. The following steps may then be performed:
(a) Update any counters associated with this entry.
(b) Execute any instructions associated with this entry. This may include updating the
action set, updating the metadata value, and performing actions.
(c) The packet is then forwarded to a flow table further down the pipeline, to the
group table, to the meter table, or directed to an output port.
(2) If there is a match only on a table-miss entry, the table entry may contain instructions,
as with any other entry. In practice, the table-miss entry specifies one of three actions:
(a) Send packet to controller. This will enable the controller to define a new flow for
this and similar packets, or decide to drop the packet.
(b) Direct packet to another flow table farther down the pipeline.
(c) Drop the packet.
(3) If there is no match on any entry and there is no table-miss entry, the packet is dropped.
❖ If and when a packet is finally directed to an output port, the accumulated action set is
executed and then the packet is queued for output. Figure 1.8 illustrates the overall ingress
pipeline process.

Figure 1.8 Packet Flow Through an OpenFlow Switch: Ingress Processing
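The per-table steps above can be condensed into a short sketch. It reuses the assumed FlowEntry model sketched earlier and simplifies step (c) to a "continue" signal rather than modeling the full goto-table, group table, and meter table plumbing.

def process_at_table(table, packet, action_set):
    """One ingress table: highest-priority match wins; table-miss sits at priority 0."""
    hits = [entry for entry in table if entry.matches(packet)]
    if not hits:                                  # no match and no table-miss entry
        return "drop", action_set
    entry = max(hits, key=lambda e: e.priority)   # highest-priority matching entry
    entry.counters["packets"] += 1                # (a) update the entry's counters
    action_set = action_set + entry.instructions  # (b) execute instructions (merge)
    return "continue", action_set                 # (c) on to the next table or port

# Example: one specific entry plus a table-miss entry in Table 0
table0 = [
    FlowEntry({"ipv4_dst": "10.0.0.2"}, priority=10, instructions=["Output port 2"]),
    FlowEntry({}, priority=0, instructions=["Send to controller"]),  # table-miss
]
print(process_at_table(table0, {"ipv4_dst": "10.0.0.2"}, []))
# -> ('continue', ['Output port 2'])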


❖ If egress processing is associated with a particular output port, then after a packet is
directed to an output port at the completion of the ingress processing, the packet is directed
to the first flow table of the egress pipeline.
❖ Egress pipeline processing proceeds in the same fashion as for ingress processing, except
that there is no group table processing at the end of the egress pipeline.
❖ Egress processing is as shown in Figure 1.9.
Figure 1.9 Packet Flow Through an OpenFlow Switch: Egress Processing
1.4.6 OpenFlow Protocol
❖ The OpenFlow protocol describes message exchanges that take place between an
OpenFlow controller and an OpenFlow switch.
❖ Typically, the protocol is implemented on top of TLS, providing a secure OpenFlow
channel.
❖ The OpenFlow protocol enables the controller to perform add, update, and delete actions
to the flow entries in the flow tables.
❖ It supports three types of messages:
❖ Controller-to-switch messages are initiated by the controller and used to directly manage
or inspect the state of the switch. This class of messages enables the controller to manage
the logical state of the switch, including its configuration and details of flow and group
table entries.
❖ Asynchronous messages are initiated by the switch and used to update the controller of
network events and changes to the switch state.

❖ Symmetric messages are initiated by either the switch or the controller and sent without
solicitation. Hello messages are exchanged between the switch and controller upon
connection startup, during the handshake that sets up the secure channel. Echo request
and reply messages can be used by either the switch or controller to measure the latency
or bandwidth of a controller-switch connection, or simply to verify that the device is up
and running.
❖ In general terms, the OpenFlow protocol provides the SDN controller with three types of
information to be used in managing the network (a minimal controller sketch follows this list):
(1) Event-based messages: Sent by the switch to the controller when a link or port change
occurs.
(2) Flow statistics: Generated by the switch based on traffic flow. This information
enables the controller to monitor traffic, reconfigure the network as needed, and adjust
flow parameters to meet QoS requirements.
(3) Encapsulated packets: Sent by the switch to the controller either because there is an
explicit action to send this packet in a flow table entry or because the switch needs
information for establishing a new flow.
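To ground these message types, here is a minimal app for the open-source Ryu controller framework, chosen here as one common OpenFlow controller rather than anything prescribed by the text. It installs a table-miss entry via a controller-to-switch flow-mod message and logs asynchronous packet-in messages.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        # Controller-to-switch: install a table-miss entry (priority 0) that
        # sends unmatched packets to the controller as encapsulated packets.
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()  # wildcard match
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        # Asynchronous message: the switch reports a packet with no matching flow
        self.logger.info("packet-in on port %s", ev.msg.match['in_port'])

Run with ryu-manager against an OpenFlow 1.3 switch (for example, Open vSwitch); the Hello exchange and the features request/reply happen automatically when the switch connects.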
