Control Plane

§ The network layer is divided into a data plane and a control plane. The data plane handles forwarding of data packets through routers based on information in the packet headers, while the control plane determines the path packets take between source and destination hosts.
§ Two common approaches to the control plane are traditional routing algorithms implemented in routers, and software-defined networking (SDN), where routing is implemented on remote servers. Dijkstra's algorithm is commonly used to compute shortest paths between nodes of a graph for routing. It works by maintaining tentative distances from the source node and updating neighboring nodes' distances whenever a shorter path is found.


Network layer: data plane, control plane

Data plane
§ local, per-router function
§ determines how a datagram arriving on a router input port is forwarded to a router output port
§ forwarding function, driven by values in the arriving packet header

Control plane
§ network-wide logic
§ determines how a datagram is routed among routers along the end-end path from source host to destination host
§ two control-plane approaches:
• traditional routing algorithms: implemented in routers
• software-defined networking (SDN): implemented in (remote) servers

[Figure: values in the arriving packet header (e.g., 0111) are looked up in the local forwarding table to select an output port.]

Theoretical Background
§ Control plane calculates the entries of forwarding
tables.
• How?
• Use a routing algorithm from graph theory

Theoretical Background
§ What do you need to know to compute a path
between two nodes on a graph?
• Topology (who’s connected to whom)
• Link capacities
• Propagation delays.
§ How do we obtain these?
• Link-state routing (e.g., OLSR)
• Distance-vector routing (e.g., AODV)
§ Assume network parameters are static for now.


Basics of Routing
Single-Source Shortest Path Problem - The
problem of finding shortest paths from a source
vertex v to all other vertices in the graph.

Dijkstra's algorithm
§ Dijkstra's algorithm is a solution to the single-source shortest path problem in graph theory.
§ Works on both directed and undirected graphs. However, all edges must have nonnegative weights.
§ Input: weighted graph G = (V, E) and source vertex v ∈ V, such that all edge weights are nonnegative
§ Output: lengths of shortest paths (or the shortest paths themselves) from the given source vertex v ∈ V to all other vertices

Approach
§ The algorithm computes for each vertex u the distance to u from the start vertex v, that is, the weight of a shortest path between v and u.
§ The algorithm keeps track of the set of vertices for which the distance has already been computed, called the cloud C.
§ Every vertex has a label D associated with it. For any vertex u, D[u] stores an approximation of the distance between v and u. The algorithm updates D[u] when it finds a shorter path from v to u.
§ When a vertex u is added to the cloud, its label D[u] is equal to the actual (final) distance between the starting vertex v and vertex u.
Dijkstra pseudocode
Dijkstra(v1, v2):
    for each vertex v:                          // initialization
        v's distance := infinity
        v's previous := none
    v1's distance := 0
    List := {all vertices}

    while List is not empty:
        v := remove List vertex with minimum distance
        mark v as known
        for each unknown neighbor n of v:
            dist := v's distance + edge (v, n)'s weight
            if dist is smaller than n's distance:
                n's distance := dist
                n's previous := v

    reconstruct path from v2 back to v1, following previous pointers
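As a concrete rendering of the pseudocode above, here is a minimal Python sketch using a binary heap. The function names are illustrative, and the graph encoding reconstructs the example graph of the following slides from its distance updates (treat the exact edge list as an assumption).

import heapq

def dijkstra(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, nonnegative weight)
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    heap = [(0, source)]                     # (distance, vertex) min-heap
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                         # stale entry: v is already known
        for n, w in graph[v]:
            alt = d + w                      # v's distance + edge (v, n)'s weight
            if alt < dist[n]:                # shorter path to n found
                dist[n] = alt
                prev[n] = v
                heapq.heappush(heap, (alt, n))
    return dist, prev

def path_to(prev, target):
    # Reconstruct a path by following previous pointers back to the source.
    hops = []
    while target is not None:
        hops.append(target)
        target = prev[target]
    return list(reversed(hops))

# Graph consistent with the example on the next slides (weights assumed):
G = {"A": [("B", 2), ("C", 4), ("D", 1)],
     "B": [("A", 2), ("D", 3), ("E", 10)],
     "C": [("A", 4), ("D", 2), ("F", 5)],
     "D": [("A", 1), ("B", 3), ("C", 2), ("E", 2), ("F", 8), ("G", 4)],
     "E": [("B", 10), ("D", 2), ("G", 6)],
     "F": [("C", 5), ("D", 8), ("G", 1)],
     "G": [("D", 4), ("E", 6), ("F", 1)]}
dist, prev = dijkstra(G, "A")
# dist == {'A': 0, 'B': 2, 'C': 3, 'D': 1, 'E': 3, 'F': 6, 'G': 5}
# path_to(prev, "F") == ['A', 'D', 'G', 'F']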

Example: Initialization

[Graph figure: seven vertices A–G joined by nonnegative weighted edges; A is the source.]
§ Distance(source) = 0; Distance(all vertices but source) = ∞
§ Pick the vertex in List with minimum distance.
Example: Update neighbors' distance
[Graph figure: A is now known; its neighbors' labels are updated.]
§ Distance(B) = 2
§ Distance(D) = 1

Example: Remove vertex with minimum distance
[Graph figure: labels unchanged.]
§ Pick the vertex in List with minimum distance, i.e., D.
Example: Update neighbors
[Graph figure: D is now known; its neighbors' labels are updated.]
§ Distance(C) = 1 + 2 = 3
§ Distance(E) = 1 + 2 = 3
§ Distance(F) = 1 + 8 = 9
§ Distance(G) = 1 + 4 = 5

Example: Continued...
§ Pick the vertex in List with minimum distance (B) and update neighbors.
[Graph figure: labels unchanged.]
§ Note: Distance(D) is not updated since D is already known, and Distance(E) is not updated since the new candidate distance is larger than the previously computed one.
Example: Continued...
§ Pick the vertex in List with minimum distance (E) and update neighbors.
[Graph figure: labels unchanged.]
§ No updating.

Example: Continued...
§ Pick the vertex in List with minimum distance (C) and update neighbors.
[Graph figure: F's label is updated.]
§ Distance(F) = 3 + 5 = 8
Example: Continued...
§ Pick the vertex in List with minimum distance (G) and update neighbors.
[Graph figure: F's label is updated again.]
§ Distance(F) = min(8, 5 + 1) = 6; the previous distance 8 is replaced by the shorter path through G.

Example (end)
[Graph figure: final labels are A 0, B 2, C 3, D 1, E 3, F 6, G 5.]
§ Pick the vertex in List with lowest cost (F) and update neighbors; List is now empty and the algorithm terminates.
Back to reality
§ No one initially has the complete network info.
• Nodes only know their neighbors.
• The centralized algorithm must be replaced by a decentralized algorithm.
§ Link-state routing
• Each router collects information about its local neighborhood by sending probe packets out of all its output ports, to learn who it is connected to and what the properties of the connecting link (propagation delay and capacity) are.
• Routers exchange this info with each other via Link State Advertisements (LSAs).
• LSAs are broadcast to everyone in the network.

Distance-vector routing
§ LSR separates routing into two phases:
• Information accumulation, to reconstruct the network graph.
• Route computation.
§ DVR is more incremental
• The two phases are interleaved.
• Idea: a router's shortest path to a destination can be decomposed into a link to one of the router's neighbors, concatenated with a shortest path from that neighbor to the destination.
• At each step, expand the frontier of the routers one "hop" at a time!
Distributed Bellman-Ford
§ Each node keeps its best estimate to each destination:
• d(v), the router's current estimate of the shortest distance to destination v
• the "distance vector"
• Whenever a router's distance vector changes, the router exchanges the distance vector with its neighbors alone
• No broadcast to the entire network.
• When a router R receives a distance vector d_N from one of its neighbors N, it incrementally updates its own distance vector d_R for every destination v:

  d_R(v) = min( d_R(v), d_N(v) + metric(R, N) )

  where metric(R, N) is the metric of the link between R and N.
• Update the forwarding table:
• if the router chooses to update its shortest path to go through a neighbor, it sets its next hop to that neighbor
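A minimal sketch of this update rule in Python (names and data layout are illustrative assumptions, not from the slides):

INF = float("inf")

def dv_update(d_R, next_hop, d_N, neighbor, link_metric):
    # d_R, d_N: dicts mapping destination -> current distance estimate.
    # link_metric: cost of the link between this router R and neighbor N.
    changed = False
    for v, d in d_N.items():
        candidate = d + link_metric        # path to v through the neighbor
        if candidate < d_R.get(v, INF):
            d_R[v] = candidate             # shorter path found
            next_hop[v] = neighbor         # update forwarding table entry
            changed = True
    return changed                         # if True, send d_R to neighbors

# Usage: when router R receives d_N from neighbor "N" over a link of cost 1:
#   if dv_update(d_R, next_hop, d_N, "N", 1):
#       send d_R to R's neighbors alone (no network-wide broadcast)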


Comparison of LSR and DVR
§ Size of the network is important.
• In a small-scale network, broadcasting is easy.
§ Dynamicity of the network is important.
• The routing algorithms assume static link parameters.
• In LSR, a single change in the network requires a broadcast to all nodes.
§ DVR does not have the correct shortest paths until the algorithm converges.
§ Other algorithms?
• Many more…
• Dynamic routing (using delay as the link metric), source routing, flooding, multipath routing, anycasting, etc.

What Can a Basic Router do to Packets?


§ Send it…
§ Delay it…
§ Drop it…
§ How these are done impacts Quality of Service
• Best effort? Guaranteed delay? Guaranteed throughput?
§ Many variations in policies, with different behavior
§ Rich body of research work to understand them
§ Limited Internet deployment
• Many practical deployment barriers: since the Internet was best-effort to begin with, adding new mechanisms is hard
• Some people just don't believe in the need for QoS! Not enough universal support
Packet Scheduling
§ Decide when and what packet to send on the output link
• Usually implemented at the output interface

[Diagram: arriving packets pass through a Classifier that separates them into flows 1…n; a Scheduler picks the next packet for the output link; buffer management governs the per-flow queues.]

Internet Classifier
§ A “flow” is a sequence of packets that are related (e.g., from the same application)
§ A flow in the Internet can be identified by a subset of the following packet header fields:
• source/destination IP address (32 bits)
• source/destination port number (16 bits)
• protocol type (8 bits)
• type of service (4 bits)
§ Examples:
• All TCP packets from Ozgur’s web browser on machine A to the web server on machine B
• All packets from Sabanci
• All packets between Sabanci and Koc
• All UDP packets from Sabanci FENS
§ The classifier takes a packet and decides which flow it belongs to
§ Note: in ATM or MPLS, the classifier can become just a label demultiplexer
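As a toy illustration of a classifier (the field names, rule format, and addresses are assumptions, not from the slides), a function that maps a packet's 5-tuple to a flow might look like:

from typing import NamedTuple, Optional

class Packet(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: str          # e.g., "TCP" or "UDP"

def classify(pkt: Packet, rules) -> Optional[str]:
    # rules: list of (predicate, flow_id); first match wins.
    for predicate, flow_id in rules:
        if predicate(pkt):
            return flow_id
    return None         # unclassified -> best-effort default queue

rules = [
    (lambda p: p.proto == "TCP" and p.src_ip == "10.0.0.1"
               and p.dst_ip == "10.0.0.2" and p.dst_port == 80,
     "browser-on-A-to-web-server-on-B"),    # one application's TCP packets
    (lambda p: p.proto == "UDP" and p.src_ip.startswith("10.1."),
     "all-udp-from-one-subnet"),            # coarser aggregate flow
]
print(classify(Packet("10.0.0.1", "10.0.0.2", 5555, 80, "TCP"), rules))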
Buffer/Queue
§ Buffer: memory where packets can be stored temporarily
§ Queue: using buffers to store packets in an ordered sequence
• e.g., a First-In-First-Out (FIFO) queue

[Diagram: packets stored in a buffer in arrival order; the packet at the head of the queue is transmitted next.]

Buffer/Queue
§ When packets arrive at an output port faster than the output link speed (perhaps only momentarily):
§ Can drop all excess packets
• Resulting in terrible performance
§ Or can hold excess packets in a buffer/queue
• Resulting in some delay, but better performance
§ Still have to drop packets when the buffer is full
• For a FIFO queue, “drop tail” or “drop head” are common policies
• i.e., drop the last packet to arrive vs. drop the first packet in the queue to make room
§ A chance to be smart: transmission of packets held in the buffer/queue can be *scheduled* (see the sketch after this list)
• Which stored packet goes out next? Which is more “important”?
• Impacts quality of service
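A minimal sketch of a FIFO buffer with the two drop policies named above (capacity, method names, and packet values are illustrative assumptions):

from collections import deque

class FifoQueue:
    def __init__(self, capacity, policy="drop-tail"):
        self.q = deque()
        self.capacity = capacity
        self.policy = policy

    def enqueue(self, pkt):
        if len(self.q) < self.capacity:
            self.q.append(pkt)
            return True
        if self.policy == "drop-head":       # drop first packet to make room
            self.q.popleft()
            self.q.append(pkt)
            return True
        return False                         # drop-tail: newest packet lost

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = FifoQueue(capacity=3)
for i in range(5):
    q.enqueue(i)          # packets 3 and 4 are dropped (tail drop)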
Scheduler
§ Decides how the output link capacity is shared by flows
• Which packet from which flow gets to go out next?
§ E.g., FIFO schedule
• Simple schedule: whichever packet arrives first leaves first
• Agnostic of the concept of flows; no need for a classifier, no need for a real “scheduler”, a FIFO queue is all you need
§ E.g., TDMA schedule
• Queue packets according to flows
• Needs a classifier and multiple FIFO queues
• Divide transmission time into slots, one slot per flow
• Transmit a packet from a flow during its time slot (a sketch follows the TDMA diagram below)

TDMA Example
[Diagram: a Classifier separates arriving packets into flows 1…n, each with its own queue; a TDMA Scheduler serves the queues in fixed time slots; buffer management governs the queues.]
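A minimal sketch of the TDMA schedule above (flow IDs and packet values are illustrative assumptions); note how a slot is consumed even when its flow has nothing to send:

from collections import deque

class TdmaScheduler:
    def __init__(self, flow_ids):
        self.queues = {f: deque() for f in flow_ids}
        self.order = list(flow_ids)             # one slot per flow, fixed order
        self.slot = 0

    def enqueue(self, flow_id, pkt):
        self.queues[flow_id].append(pkt)

    def next_packet(self):
        # Advance the slot even if the flow has nothing queued:
        # unused slots are "wasted" (cf. the QoS discussion below).
        flow = self.order[self.slot % len(self.order)]
        self.slot += 1
        q = self.queues[flow]
        return q.popleft() if q else None

s = TdmaScheduler(["f1", "f2"])
s.enqueue("f1", "p1"); s.enqueue("f1", "p2")
print([s.next_packet() for _ in range(4)])      # ['p1', None, 'p2', None]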
Internet Today
§ FIFO queues are used at most routers
§ No classifier, no scheduler, best-effort
§ Sophisticated mechanisms tend to be more common near the “edge” of the network
• e.g., at campus routers
• Use a classifier to pick out Netflix packets
• Use a scheduler to limit the bandwidth consumed by Netflix traffic

Achieving QoS in a Statistical Multiplexing Network
§ We want guaranteed QoS
§ But we don’t want the inefficiency of TDMA
• Unused time slots are “wasted”
§ Want to statistically share un-reserved capacity, or reserved but unused capacity
§ One solution: Weighted Fair Queuing (WFQ)
• Guarantees that a flow receives at least its allocated bit rate
Fair Queueing
§ In a fluid flow system, it reduces to bit-by-bit round robin among flows
• Each flow receives min(r_i, f), where
• r_i – flow arrival rate
• f – link fair rate (see next slide)
§ Weighted Fair Queueing (WFQ) – associate a weight with each flow [Demers, Keshav & Shenker ’89]
• In a fluid flow system, it reduces to bit-by-bit round robin
§ WFQ in a fluid flow system → Generalized Processor Sharing (GPS) [Parekh & Gallager ’92]

Fair Rate Computation
§ If the link is congested, compute f such that

  Σ_i min(r_i, f) = C

§ Example: C = 10, arrival rates r = (8, 6, 2) ⇒ f = 4:
• min(8, 4) = 4
• min(6, 4) = 4
• min(2, 4) = 2
• total = 4 + 4 + 2 = 10 = C
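A small sketch that computes f numerically by bisection (the function name and iteration count are assumptions); since Σ_i min(r_i, f) is nondecreasing in f, bisection converges:

def fair_rate(rates, C, iters=100):
    # Assumes the link is congested: sum(rates) >= C.
    lo, hi = 0.0, max(rates)
    for _ in range(iters):
        f = (lo + hi) / 2
        if sum(min(r, f) for r in rates) < C:
            lo = f                  # f too small: some capacity unused
        else:
            hi = f                  # f large enough (or exact)
    return (lo + hi) / 2

print(round(fair_rate([8, 6, 2], C=10), 3))    # -> 4.0, as on the slide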
WFQ Architecture
[Diagram: a Classifier separates arriving packets into flows 1…n, each with its own queue; a WFQ Scheduler serves the queues; buffer management governs the queues.]

What is Weighted Fair Queueing?
[Diagram: n packet queues with weights w1…wn feed an output link of rate R.]
§ Each flow i is given a weight (importance) w_i
§ WFQ guarantees a minimum service rate to flow i:
• r_i = R · w_i / (w_1 + w_2 + … + w_n)
• Implies isolation among flows (one cannot mess up another)
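For example (numbers assumed for illustration): with R = 1 Mbps and weights w = (1, 2, 1), flow 2 is guaranteed at least r_2 = 1 Mbps · 2/(1 + 2 + 1) = 0.5 Mbps, no matter how aggressively the other flows send.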
What is the Intuition? Fluid Flow
[Diagram: water pipes feeding water buckets for flows with weights w1, w2, w3; bucket widths are proportional to the weights, shown at two times t1 and t2.]

Fluid Flow System
§ If flows could be served one bit at a time:
§ WFQ could be implemented using bit-by-bit weighted round robin
• During each round, from each flow that has data to send, send a number of bits equal to the flow’s weight
Fluid Flow System: Example 1
§ Link rate 100 Kbps; two flows with equal weights (w1 = w2 = 1):

Flow     Packet size (bits)   Packet inter-arrival time (ms)   Arrival rate (Kbps)
Flow 1   1000                 10                               100
Flow 2   500                  10                               50

[Timeline figure: arrivals of flow-1 packets 1–5 and flow-2 packets 1–6 every 10 ms, and the interleaved service in the fluid flow system from 0 to 80 ms.]

Fluid Flow System: Example 2
§ The red flow has packets backlogged between time 0 and 10
• Backlogged flow → the flow’s queue is not empty
§ The other flows have packets continuously backlogged
§ Flow weights are 5, 1, 1, 1, 1, 1
§ All packets have the same size
[Timeline figure: per-flow service shares on the link from time 0 to 15.]
Implementation in Packet System
§ Packet (Real) system: packet transmission cannot
be preempted. Why?
§ Solution: serve packets in the order in which they
would have finished being transmitted in the fluid
flow system


Packet System: Example 1
[Figure: fluid-flow service from time 0 to 10, with the resulting packet-system schedule below it.]
§ Select the first packet that would finish in the fluid flow system.
Packet System: Example 2
[Figure: fluid-flow service of flow-1 packets 1–5 and flow-2 packets 1–6; the packet-system order is 1, 2, 1, 3, 2, 3, 4, 4, 5, 5, 6.]
§ Select the first packet that would finish in the fluid flow system.

Implementation Challenge
§ Need to compute the finish time of a packet in the fluid flow system…
§ … but the finish time may change as new packets arrive!
§ Need to update the finish times of all packets that are in service in the fluid flow system when a new packet arrives
• But this is very expensive; a high-speed router may need to handle hundreds of thousands of flows!
Example
§ Four flows, each with weight 1; a packet from flow 4 arrives at time ε.
[Timeline figure: finish times computed at time 0 give finishes at 1, 2, 3; after the arrival at ε, the finish times must be re-computed, shifting toward 1, 2, 3, 4.]

Solution: Virtual Time


§ Key Observation: while the finish times of packets
may change when a new packet arrives, the order
in which packets finish doesn’t!
• Only the order is important for scheduling
§ Solution: instead of the packet finish time maintain
the round # when a packet finishes (virtual finishing
time)
• Virtual finishing time doesn’t change when a packet
arrives

§ System virtual time V(t) – index of the round in the


bit-by-bit round robin scheme
44
Example
[Timeline figure: flows 1–4; a flow-4 packet arrives at time ε.]
§ Suppose each packet is 1000 bits, so it takes 1000 rounds to finish
§ So the packets of F1, F2, F3 finish at virtual time 1000
§ When the packet of F4 arrives at virtual time 1 (after one round), its virtual finish time is 1001
§ But the virtual finish times of the F1, F2, F3 packets remain 1000
§ The finishing order is preserved

System Virtual Time (Round #): V(t)
§ V(t) increases inversely proportionally to the sum of the weights of the backlogged flows
§ The round # increases more slowly when there are more flows to visit in each round.
[Timeline figure: two flows with w1 = w2 = 1; V(t) grows with slope C while only one flow is backlogged and slope C/2 while both are.]
Fair Queueing Implementation
§ Define
• F_i^k – virtual finishing time of packet k of flow i
• a_i^k – arrival time of packet k of flow i
• L_i^k – length of packet k of flow i
• w_i – weight of flow i
§ The virtual finishing time of packet k+1 of flow i is

  F_i^{k+1} = max( V(a_i^{k+1}), F_i^k ) + L_i^{k+1} / w_i

§ Scheduling policy: smallest virtual finishing time first
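A compact sketch of this bookkeeping (all names are assumptions; for brevity the system virtual time V(t) is supplied by the caller rather than tracked by the class):

import heapq

class WfqScheduler:
    def __init__(self):
        self.last_finish = {}   # flow -> F_i^k of its most recent packet
        self.queue = []         # min-heap of (finish, seq, flow, length)
        self.seq = 0            # tie-breaker for equal finish times

    def on_arrival(self, flow, weight, length, v_now):
        # F_i^{k+1} = max(V(a_i^{k+1}), F_i^k) + L_i^{k+1} / w_i
        finish = max(v_now, self.last_finish.get(flow, 0.0)) + length / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, length))
        self.seq += 1

    def next_packet(self):
        # Serve the queued packet with the smallest virtual finishing time.
        if not self.queue:
            return None
        finish, _, flow, length = heapq.heappop(self.queue)
        return flow, length, finish

w = WfqScheduler()
w.on_arrival("f1", weight=2, length=1000, v_now=0.0)   # F = 0 + 1000/2 = 500
w.on_arrival("f2", weight=1, length=1000, v_now=0.0)   # F = 0 + 1000/1 = 1000
print(w.next_packet())   # -> ('f1', 1000, 500.0): f1's packet goes first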

What is a Service Model?
[Diagram: offered traffic enters a network element (connection oriented) and delivered traffic exits; an “external process” also acts on the element.]
§ The QoS measures (delay, throughput, loss, cost) depend on the offered traffic, and possibly on other external processes.
§ A service model attempts to characterize the relationship between offered traffic, delivered traffic, and possibly other external processes.
Arrival and Departure Process
[Diagram: R_in enters a network element and R_out exits; plotting both in bits vs. time t, the vertical gap between the curves is the buffer occupancy and the horizontal gap is the delay.]
§ R_in(t) = arrival process = amount of data arriving up to time t
§ R_out(t) = departure process = amount of data departing up to time t

Traffic Envelope (Arrival Curve)
§ Maximum amount of traffic that a flow can send during an interval of time of length t
[Figure: envelope b(t); initial slope = peak rate, long-term slope = maximum average rate; the knee between them reflects the “burstiness constraint”.]
Service Curve
§ Assume a flow that is idle at time s and is backlogged during the interval (s, t)
§ Service curve: the minimum service received by the flow during the interval (s, t)

Big Picture
[Figure: three plots in bits vs. time: the arrival process R_in(t), a service curve of slope C, and the resulting departure process R_out(t).]
Delay and Buffer Bounds
[Figure: the arrival envelope E(t) and the service curve S(t) plotted together in bits vs. time; the maximum horizontal distance between the curves is the maximum delay, and the maximum vertical distance is the maximum buffer.]
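The two bounds can be read off numerically. Below is a small Python sketch that evaluates both curves on a time grid; the token-bucket envelope, linear service curve, and grid resolution are all illustrative assumptions.

def max_buffer(E, S, times):
    # Maximum buffer = largest vertical gap E(t) - S(t).
    return max(E(t) - S(t) for t in times)

def max_delay(E, S, times, step=0.001):
    # Maximum delay = largest horizontal gap: for each t, how long until
    # the service curve catches up with the traffic arrived by time t.
    worst = 0.0
    for t in times:
        d = 0.0
        while S(t + d) < E(t):
            d += step
        worst = max(worst, d)
    return worst

E = lambda t: 500 + 100 * t      # token-bucket envelope: 500-bit burst, 100 b/s
S = lambda t: 200 * t            # linear service curve with rate C = 200 b/s
grid = [i * 0.01 for i in range(100)]
print(max_buffer(E, S, grid))            # -> 500.0 bits (worst case at t = 0)
print(round(max_delay(E, S, grid), 2))   # -> 2.5 seconds (worst case at t = 0)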

Linear Service Curves: Example
[Figure: for an FTP flow and a Video flow, linear service curves, arrival processes, arrival curves, and the resulting deadline computation; video packets have to wait behind FTP packets.]
Non-Linear Service Curves: Example
[Figure: the same setup with non-linear service curves; video packets are transmitted as soon as they arrive.]

Network-layer functions
Recall: two network-layer functions:
§ forwarding: move packets from a router’s input to the appropriate router output (data plane)
§ routing: determine the route taken by packets from source to destination (control plane)

Two approaches to structuring the network control plane:
§ per-router control (traditional)
§ logically centralized control (software-defined networking)
Per-router control plane
Individual routing algorithm components in each and every router interact in the control plane.

[Figure 4.2 from Kurose & Ross: routing algorithms determine the values in forwarding tables. A routing algorithm in each router’s control plane computes the local forwarding table; in the data plane, values in the arriving packet’s header (e.g., 0111) are looked up in the table to select an output port.]

Textbook note (Kurose & Ross, Section 4.1): in this example, a routing algorithm runs in each and every router, and both the forwarding and routing functions are contained within a router. The routing algorithm function in one router communicates with the routing algorithm function in other routers to compute the values for its forwarding table, by exchanging routing messages containing routing information according to a routing protocol. The distinct purposes of the two functions can be further illustrated by the hypothetical (and unrealistic, but technically feasible) case in which all forwarding tables are configured directly by human network operators physically present at the routers: no routing protocols would be required, but the operators would need to interact to ensure packets reached their intended destinations, and such configuration would likely be more error-prone and much slower to respond to topology changes than a routing protocol.

Logically centralized control plane
A distinct (typically remote) controller interacts with local control agents (CAs).

[Figure: a Remote Controller in the control plane exchanges messages with CAs in each router’s data plane; values in the arriving packet header (e.g., 0111) select an output port.]
Generalized Forwarding and SDN
Each router contains a flow table that is computed and distributed by a logically centralized routing controller.

[Figure: the logically centralized routing controller (control plane) installs the local flow table (headers, counters, actions) in each switch (data plane); values in the arriving packet’s header are matched against the table.]

OpenFlow abstraction
§ match+action: unifies different kinds of devices
§ Router
• match: longest destination IP prefix
• action: forward out a link
§ Switch
• match: destination MAC address
• action: forward or flood
§ Firewall
• match: IP addresses and TCP/UDP port numbers
• action: permit or deny
§ NAT
• match: IP address and port
• action: rewrite address and port


History of OpenFlow
§ 2006: Martin Casado, a PhD student at Stanford, and team propose a clean-slate security architecture (SANE), which defines centralized control of security (instead of at the edge, as normally done). Ethane generalizes it to all access policies.
§ April 2008: OpenFlow paper in ACM SIGCOMM CCR
§ 2009: Stanford publishes the OpenFlow v1.0.0 specs
§ June 2009: Martin Casado co-founds Nicira
§ March 2010: Guido Appenzeller, head of the clean slate lab at Stanford, co-founds Big Switch Networks
§ March 2011: Open Networking Foundation is formed
§ Oct 2011: first Open Networking Summit; Juniper and Cisco announce plans to incorporate OpenFlow
§ July 2012: VMware buys Nicira for $1.26B
§ Nov 6, 2013: Cisco buys Insieme for $838M

Software defined networking (SDN)
§ Internet network layer: historically implemented via a distributed, per-router approach
• a monolithic router contains switching hardware and runs a proprietary implementation of Internet standard protocols (IP, RIP, IS-IS, OSPF, BGP) in a proprietary router OS (e.g., Cisco IOS)
• different “middleboxes” for different network-layer functions: firewalls, load balancers, NAT boxes, …
§ ~2005: renewed interest in rethinking the network control plane


Recall: logically centralized control plane
A distinct (typically remote) controller interacts with local control agents (CAs) in routers to compute forwarding tables.

[Figure: the Remote Controller (control plane) above, CAs inside the routers (data plane) below.]

Software defined networking (SDN)
Why a logically centralized control plane?
§ easier network management: avoid router misconfigurations, greater flexibility of traffic flows
§ table-based forwarding (recall the OpenFlow API) allows “programming” routers
• centralized “programming” is easier: compute tables centrally and distribute them
• distributed “programming” is more difficult: compute tables as the result of a distributed algorithm (protocol) implemented in each and every router
§ open (non-proprietary) implementation of the control plane


SDN perspective: data plane switches
Data plane switches:
§ fast, simple, commodity switches implementing generalized data-plane forwarding (Section 4.4) in hardware
§ switch flow table computed and installed by the controller
§ API for table-based switch control (e.g., OpenFlow)
• defines what is controllable and what is not
§ protocol for communicating with the controller (e.g., OpenFlow)

[Figure: network-control applications (routing, access control, load balance) sit above the SDN Controller (network operating system) via the northbound API; SDN-controlled switches sit below via the southbound API.]

SDN perspective: SDN controller
SDN controller (network OS):
§ maintains network state information
§ interacts with network-control applications “above” via the northbound API
§ interacts with network switches “below” via the southbound API
§ implemented as a distributed system for performance, scalability, fault-tolerance, and robustness

[Figure: the same layered picture as the previous slide.]
SDN perspective: control applications
Network-control apps:
§ the “brains” of control: implement control functions using the lower-level services and API provided by the SDN controller
§ unbundled: can be provided by a 3rd party, distinct from the routing vendor or the SDN controller

[Figure: the same layered picture as the previous slides.]

Components of SDN controller
§ Interface layer to network-control apps: interface and abstractions for network-control apps (network graph API, RESTful API, intent, …)
§ Network-wide state management layer: the state of network links, switches, and services, kept in a distributed, robust database (statistics, flow tables, link-state info, host info, switch info, …)
§ Communication layer: communication between the SDN controller and controlled devices (OpenFlow, SNMP, …)


OpenFlow protocol
§ operates between controller and switch
§ TCP is used to exchange messages
• optional encryption
§ three classes of OpenFlow messages:
• controller-to-switch
• asynchronous (switch to controller)
• symmetric (misc.)

OpenFlow data plane abstraction
§ flow: defined by header fields
§ generalized forwarding: simple packet-handling rules
• Pattern: match values in packet header fields
• Actions: for a matched packet: drop, forward, or modify the packet, or send it to the controller
• Priority: disambiguate overlapping patterns
• Counters: #bytes and #packets

The flow table in a router (computed and distributed by the controller) defines the router’s match+action rules.
OpenFlow data plane abstraction: example rules
* : wildcard
1. src = 1.2.*.*,  dest = 3.4.5.*  → drop
2. src = *.*.*.*,  dest = 3.4.*.*  → forward(2)
3. src = 10.1.2.3, dest = *.*.*.*  → send to controller
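The three rules above can be exercised with a toy matcher; the rule encoding, helper names, and default action below are illustrative assumptions, not the OpenFlow API.

def matches(pattern, addr):
    # pattern and addr are dotted quads; '*' in the pattern matches anything.
    for p, a in zip(pattern.split("."), addr.split(".")):
        if p != "*" and p != a:
            return False
    return True

# Rules listed in decreasing priority, as on the slide.
rules = [
    (("1.2.*.*",  "3.4.5.*"),  "drop"),
    (("*.*.*.*",  "3.4.*.*"),  "forward(2)"),
    (("10.1.2.3", "*.*.*.*"),  "send to controller"),
]

def action_for(src, dst):
    for (src_pat, dst_pat), action in rules:
        if matches(src_pat, src) and matches(dst_pat, dst):
            return action            # first (highest-priority) match wins
    return "default"                 # no rule matched

print(action_for("1.2.3.4",  "3.4.5.6"))   # -> drop
print(action_for("5.6.7.8",  "3.4.9.9"))   # -> forward(2)
print(action_for("10.1.2.3", "8.8.8.8"))   # -> send to controller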

OpenFlow: Flow Table Entries
Each entry: Rule | Action | Stats (packet + byte counters)

Actions:
1. Forward packet to port(s)
2. Encapsulate and forward to controller
3. Drop packet
4. Send to normal processing pipeline
5. Modify fields

Rule (match) fields:
Switch Port | VLAN ID | MAC src | MAC dst | Eth type | IP Src | IP Dst | IP Prot | TCP sport | TCP dport
(link layer | network layer | transport layer)

Examples
Destination-based forwarding:
  match: IP Dst = 51.6.0.8 (all other fields wildcarded)  →  action: forward(port6)
  IP datagrams destined to IP address 51.6.0.8 should be forwarded to router output port 6.

Firewall:
  match: TCP dport = 22 (all other fields wildcarded)  →  action: drop
  Do not forward (block) all datagrams destined to TCP port 22.

  match: IP Src = 128.119.1.1 (all other fields wildcarded)  →  action: drop
  Do not forward (block) all datagrams sent by host 128.119.1.1.

Destination-based layer 2 (switch) forwarding:
  match: MAC src = 22:A7:23:11:E1:02 (all other fields wildcarded)  →  action: forward(port3)
  Layer-2 frames from MAC address 22:A7:23:11:E1:02 should be forwarded to output port 3.


OpenFlow: controller-to-switch messages

Key controller-to-switch messages:
§ features: controller queries switch features, switch replies
§ configure: controller queries/sets switch configuration parameters
§ modify-state: add, delete, or modify flow entries in the OpenFlow tables
§ packet-out: controller can send a packet out of a specific switch port

OpenFlow: switch-to-controller messages


Key switch-to-controller messages:
§ packet-in: transfer a packet (and its control) to the controller; see the packet-out message from the controller
§ flow-removed: flow table entry deleted at the switch
§ port status: inform the controller of a change on a port

Fortunately, network operators don’t “program” switches by creating/sending OpenFlow messages directly. Instead, they use higher-level abstractions at the controller.
SDN: control/data plane interaction example (Dijkstra’s link-state routing)
1. S1, experiencing a link failure, uses the OpenFlow port-status message to notify the controller.
2. The SDN controller receives the OpenFlow message and updates its link status info.
3. Dijkstra’s routing algorithm application has previously registered to be called whenever link status changes. It is called.
4. Dijkstra’s routing algorithm accesses the network graph info and link state info in the controller, and computes new routes.

[Figure: controller stack (network graph, RESTful API, intent; statistics, flow tables, link-state/host/switch info; OpenFlow, SNMP) above switches s1–s4.]

SDN: control/data plane interaction example (continued)
5. The link-state routing app interacts with the flow-table-computation component in the SDN controller, which computes the new flow tables needed.
6. The controller uses OpenFlow to install the new tables in the switches that need updating.

[Figure: the same controller stack and switches s1–s4.]
OpenFlow example
Example: datagrams from hosts h5 and h6 should be sent to h3 or h4, via s1 and from there to s2.

[Figure: controller above switches s1, s2, s3; hosts h1 (10.1.0.1) and h2 (10.1.0.2) on s1; h3 (10.2.0.3) and h4 (10.2.0.4) on s2; h5 (10.3.0.5) and h6 (10.3.0.6) on s3.]

Flow tables as laid out on the slide:

s3:  match: IP Src = 10.3.*.*, IP Dst = 10.2.*.*                    →  action: forward(3)
s1:  match: ingress port = 1, IP Src = 10.3.*.*, IP Dst = 10.2.*.*  →  action: forward(4)
s2:  match: ingress port = 2, IP Dst = 10.2.0.3                     →  action: forward(3)
     match: ingress port = 2, IP Dst = 10.2.0.4                     →  action: forward(4)

What is network management?


§ autonomous systems (aka “networks”): 1000s of interacting hardware/software components
§ other complex systems requiring monitoring and control:
• jet airplane
• nuclear power plant
• others?

"Network management includes the deployment, integration and coordination of the hardware, software, and human elements to monitor, test, poll, configure, analyze, evaluate, and control the network and element resources to meet the real-time, operational performance, and Quality of Service requirements at a reasonable cost."


Infrastructure for network management
Definitions:
§ managing entity
§ managed devices contain managed objects, whose data is gathered into a Management Information Base (MIB)
§ each managed device runs an agent with its data
§ the managing entity and the agents communicate via a network management protocol

[Figure: a managing entity exchanges network-management-protocol messages with the agents (and their data) on multiple managed devices.]

SNMP protocol
Two ways to convey MIB info and commands:
§ request/response mode: the managing entity sends a request to the agent on the managed device, which replies with a response
§ trap mode: the agent on the managed device sends an unsolicited trap message to the managing entity

[Figure: left, the request/response exchange; right, a trap message from the managed device.]
Intent-Based Networking

Intent-Based Networking Landscape
[Figure: IBN landscape chart, positioning scripting tools, NMS tools, multi-vendor and vendor-specific management, white-box hardware, and intent-based networking along axes of vendor lock-in and capability.]

Intent-Based Networking maturity levels:
• Level 0: Basic Automation
• Level 1: Single Source of Truth
• Level 2: Real-time Change Validation
• Level 3: Self-Operation
