
Towards a Distributed Routing Protocol for In-band Control Channel with Elastic Controller Clusters

Polina Holzmann*, Artur Hecker† and Martina Zitterbart*
*Karlsruhe Institute of Technology, Germany
†Huawei, Munich Research Center, Germany
Email: {holzmann,zitterbart}@kit.edu, artur.hecker@huawei.com

Abstract—In software-defined networks (SDN), network control logic is implemented in a (logically) centralized SDN controller that communicates with SDN switches via an SDN control channel. As the ability of the controller to communicate with switches is critical for the operability of the network, reliable connectivity of the control channel must be provided. This question is especially important for in-band control channels, where no separate management network is available and most switches are not directly connected to the controller(s).

In our previous work [1], we argued for providing reliable connectivity for the in-band control channel using a distributed routing protocol. We presented the protocol Izzy, which is optimized for robust connectivity between a single SDN controller and the switches under its control. Realistic deployments, however, use logically-centralized controller clusters for reliability and scalability reasons. In this paper, we describe how to extend Izzy to support such logically-centralized clusters. We propose to divide controllers into groups based on the distance between them. Each such group runs a separate instance of Izzy. This paper presents the distributed algorithm Seedling that accomplishes the former. We describe how Seedling works in static scenarios, followed by an outline of the steps necessary to extend it to dynamic settings, including changes to both the number and location of controllers as well as to the network topology, especially link failures.

I. INTRODUCTION

In software-defined networks (SDN), network control logic is implemented in a (logically) centralized SDN controller which communicates with SDN switches over an SDN control channel. We are considering SDN deployments where the control channel is in-band. An in-band channel must provide connectivity over possibly multi-hop paths between controller and switches, which requires routing functionality. Greenberg et al. [2] presented two main requirements for routing on the control channel: (i) the paths of an in-band channel are maintained separately from normal network paths; (ii) routing is optimized solely for robustness of connectivity under failures. We argue that both requirements can be best satisfied by a dedicated distributed routing protocol. In our previous work we presented the distributed routing protocol Izzy, which maintains reliable connectivity between a single SDN controller and the SDN switches under its control [1].

Having a single controller may be realistic only in simple deployments. For reasons such as reliability and scalability, most production networks employ logically-centralized controller clusters, e.g., Onos [3]. Such a cluster consists of several individual controllers, which together appear as a single controller to the outside.

For such a cluster, the goal of the control channel becomes twofold. Besides providing connectivity between individual controllers and the switches under their control, the control channel must provide connectivity among controllers within the cluster. The latter is necessary for the controllers to synchronize state. Commonly used state synchronization primitives, however, rely on available network connectivity between the members of the cluster [4]. A distributed routing protocol can provide this connectivity without relying on the controllers, i.e., without being dependent on the controllers to operate.

In our previous work we argued that since the control channel could be used to carry latency-sensitive communications, a routing protocol for the control channel should calculate shortest paths. Izzy is based on building a shortest-path spanning tree with the controller as the root node (we discuss robustness later). With several controllers, Izzy could be applied by building a separate tree for each controller, effectively running several instances of Izzy in parallel. Each Izzy instance introduces overhead in protocol maintenance and, more importantly, consumes space in forwarding tables. In this paper we propose to reduce the number of maintained instances by relaxing the shortest-path requirement and allowing some path stretch.

In this paper we present the algorithm Seedling, which divides controllers into groups based on the proximity between them. Each group then “shares” an instance of Izzy. Since Izzy builds a spanning tree with a pre-defined root node, the algorithm also determines the location of such a root node by placing it in the “middle” of the group (quasi planting a tree, hence its name). The paper describes the functionality of Seedling in static scenarios and outlines how Seedling should be extended to support network dynamics. In addition to changes of the network topology, including link and node failures, Seedling should support elastic controller clusters, where the number [5] and placement of individual controllers [6] change over time.

The rest of the paper is organized as follows. Section II gives an overview of Izzy. Section III outlines how to extend Izzy to support controller clusters. Section IV describes the Seedling algorithm for dividing controllers into groups and determining the locations of tree roots for Izzy. Finally, Section V summarizes related work and Section VI concludes the paper.
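Before proceeding, the path-stretch argument underlying this relaxation can be made concrete with a small sketch (the ring topology and node numbers below are our own illustration, not from the paper): routing a packet up to a shared tree root and then down to a controller adds at most twice the root-to-controller distance to the shortest path.

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop-count distances from src in an undirected graph (adjacency dict)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Illustrative topology: a 6-node ring.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
root, controller = 0, 4            # controller sits 2 hops from the root

d_root = bfs_distances(ring, root)
d_ctrl = bfs_distances(ring, controller)

# Tree routing sends a packet up to the root and then down to the
# destination; the additive stretch over the shortest path is bounded
# by twice the controller's distance to the root.
for src in ring:
    tree_path = d_root[src] + d_root[controller]
    assert tree_path - d_ctrl[src] <= 2 * d_root[controller]
```

On this ring the bound is tight: the node opposite the controller reaches it with an additive stretch of exactly 2 · dist(root, controller).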

978-1-5386-4920-6/18/$31.00 ©2018 IEEE


II. OVERVIEW OF IZZY

Izzy [1] is a distributed routing protocol that provides robust connectivity between a single SDN controller and the SDN switches under its control. Izzy assumes a flat addressing scheme. Izzy constructs a shortest-path spanning tree rooted at the controller node. Then it assigns each switch a temporary forwardable address, or locator, that encodes this switch’s position in the tree. We use a simple prefix-based scheme: the root has locator 0, its children are 01, 02, . . ., the children of 01 are 011, 012, . . . and so on. The numbers are binary encoded. The locators are used to aggregate addresses in the forwarding tables: each switch stores an entry for each of its tree links, and packets are forwarded using longest prefix match.

Robustness is achieved by using pre-computed failover paths for tree links. Each switch calculates backup paths for each of its tree links and installs them into its forwarding tables. Izzy also includes a mechanism for consistent updates. We omit further details as well as the description of Izzy’s convergence scheme due to a lack of space.

Izzy’s locators are only valid within the tree, i.e., they can be used to forward packets only along the corresponding tree. If several instances of Izzy are running in parallel, each switch is assigned a different locator for each tree. Forwarding tables will contain entries for each instance separately. Addresses in packets thus effectively select the tree used for forwarding.

III. IZZY WITH MULTIPLE CONTROLLERS

As already described in Section I, a straightforward way to use Izzy with multiple controllers is to run a separate instance of Izzy for each controller. This approach results in shortest paths, but introduces overhead in maintaining a separate spanning tree with separate locators for each controller and, most importantly, requires separate forwarding rules for each instance. Although some overhead could be reduced by aggregating forwarding rules, shortest paths can only be achieved by maintaining a spanning tree per controller (see, e.g., [7]).

We motivated the shortest-path requirement by the fact that the SDN control channel is likely to carry latency-sensitive communication. However, adding a small additive stretch should not pose a significant problem. In tree-based routing, forwarding between two non-root nodes can be achieved by forwarding a packet from the source up to the root node and then down to the destination. Thus, the resulting worst-case additive stretch on a path to a controller is proportional to the controller’s distance to the tree root, and is relatively small if the controller is located close to the root node. We therefore limit the maximal allowed distance of controllers to tree roots and determine the number of Izzy instances so that this constraint is satisfied. This results in a maximal additive stretch of twice the limit. Since the tolerable stretch depends on the applications running on the controllers, the limit is configurable.

With minor modifications, Izzy can be used for tree-based routing between a group of controllers close to the tree root and all switches. What is necessary is to determine the membership of the group as well as the location of the tree root, such that the maximal distance of each controller to the root does not exceed the configured limit. In the next section we describe a distributed algorithm, Seedling, that accomplishes this. Seedling starts after the location of the controllers is known. It calculates the initial groups and the locations of the tree roots. Once the tree roots are determined, each selected root switch initiates an instance of Izzy. Izzy calculates forwarding paths and installs corresponding rules in the data plane. Once this is done, connectivity of the control channel is established. Further, Seedling monitors changes to the network topology and adapts the controller groups and their respective tree roots accordingly. If the location of a tree root is changed, Seedling notifies the corresponding instance of Izzy, which, in turn, is responsible for adapting the structure of its tree.

IV. SEEDLING

In this section we describe Seedling. The section begins by describing Seedling for static scenarios, i.e., static network topology and controller placement. Then we outline the necessary changes to support dynamic scenarios.

In the following, we do not make a distinction between switches and general-purpose nodes where controllers can be hosted, and use the term node for either of them. Each node runs Seedling. Some of the nodes host controllers.

For a given location of controller nodes, Seedling divides controllers into non-overlapping groups based on the proximity between them and then determines the position of the root node for Izzy in the “middle” of each group. In this paper, we use hop count as the proximity metric, but any symmetrical link costs can be applied. In Seedling, the parameter N defines the maximal allowed distance from one controller to another. Controllers can belong to the same group if they are within a distance of N hops from each other. A tree root will then be placed at most N hops away from each controller, resulting in a maximal additive stretch of 2 · N for Izzy’s paths. Currently, the value of N is global, i.e., it is identical for all controllers.

As an example, consider the network in Fig. 1. If the value of N is five hops, all controllers can be in one group. For a smaller value of N, say, two, the controllers should be divided into two groups: c1 and c2, c3, c4. For each group the root node is the node in the middle of the group, i.e., it is the node with the minimal average distance in hops to all controllers within the group. If all controllers in Fig. 1 belong to the same group, the tree root should be either node a or b. For the group c2, c3, c4, the node c3 is selected as the tree root.

A. Seedling for Static Scenarios

In static scenarios Seedling runs in three main phases:
1) collection phase – in this phase the controller nodes explore their local neighborhood. Each controller learns about other controllers within its N-hop neighborhood as well as the distances between nodes and controllers in the neighborhood.
2) decision phase – each controller node chooses which group it belongs to and calculates the location of Izzy’s root within this group. The decision algorithm tries to ensure that controllers make consistent decisions. This is, however, not always possible.

Figure 1. An example of controller placement on the Abilene topology. White nodes are controller nodes.

Figure 2. Examples of difficult situations for the decision phase of Seedling with N = 2 hops. Panels (a) and (b) show chains of controllers c1, c2, c3; panel (c) shows c1, c2 together with c3, c4, c5.

3) notification phase – the selected root nodes are informed by the controllers. This phase also includes a consistency check between the controllers in the group.

Once the notification phase terminates, each selected root node starts executing Izzy.

Note that Seedling is executed by all nodes. In particular, each node maintains state. Further, Seedling is executed before any end-to-end connectivity is available: calculating paths for control messages of Seedling is an integral part of Seedling.

1) Collection Phase: In this phase each controller explores its N-hop neighborhood and learns the distances to nodes within this neighborhood. The phase proceeds as follows. First, each controller builds a so-called collection tree. The collection tree of controller ci is a spanning tree of maximal depth N rooted at ci. Such a tree can be constructed using distributed Bellman-Ford. As a result, each node within the N-hop neighborhood of ci learns its distance to ci as well as its parent in the tree, and thus a path to ci.

Once the collection trees for all controllers have been constructed¹, the controllers “collect” the learned distances. Each node sends a message to each controller, containing the distances to all controllers it knows. These messages are forwarded along the collection trees of the corresponding controllers. As a result, each controller receives the following information from each node in its N-hop neighborhood:

node a : {c1 : d_{a,c1}, c2 : d_{a,c2}, . . ., cn : d_{a,cn}}
node b : {c1 : d_{b,c1}, c2 : d_{b,c2}, . . ., cm : d_{b,cm}}    (1)
etc.

where d_{n,ci} is the distance of node n from the controller ci, that is, the distance of n in ci’s collection tree. This information is used as the input for the decision algorithm.

¹We defer the description of how to determine that a collection tree has been constructed to Section IV-A4.

2) Decision Phase: In this phase each controller individually determines which group of controllers it belongs to as well as which node should be Izzy’s tree root for the group. The following algorithm is executed by each controller and ensures that controllers make consistent decisions about group membership. The following description takes the point of view of a single controller.

From the data in Eq. (1) the controller knows:
1) all other controllers within its N-hop neighborhood, as well as its distance to these controllers;
2) for each of these controllers, the controllers within its N-hop neighborhood (with the corresponding distances).

The algorithm begins by only considering controller pairs with bi-directional visibility, i.e., controllers that are in each other’s collection trees. In a static scenario with a global value of N in hops this visibility is always symmetrical.

Then the controller decides which controllers should belong to its group. Before explaining this step, we illustrate it with examples. Recall that the groups should not overlap.

First, consider the example in Fig. 1 with N = 2. In this case c1 will see no other controllers in its 2-hop neighborhood. Since c2, c3 and c4 all see each other, each controller decides that it belongs to the group with the two other controllers.

Now consider the first two examples in Fig. 2 and N = 2. In both topologies, since all controllers cannot belong to the same group, the algorithm must decide whether to build the groups c1, c2 or c2, c3. More precisely, the algorithm must choose which group c2 belongs to. In Fig. 2a, c2 belongs to the group c2, c3 since it is closer to c3 than to c1. In Fig. 2b the distances are equal and ties are broken by node ids: c2 belongs to the group c1, c2 if the node id of c1 is smaller than that of c3. Each of the three controllers makes the same decision. Finally, consider Fig. 2c, again with N = 2, where the group of c3 should be determined. In this case both c1 and c2 should decide that they are in the same group, but c3 should belong to the group with c4 and c5, since it is closer to this group. This is more complicated than the previous cases because here groups of controllers are considered.

The general idea of the decision algorithm is as follows. The controller that executes the algorithm considers each other controller c_other in its neighborhood one by one and decides whether the two controllers should belong to the same group. In order to do this, the controller considers two possible divisions of controllers into groups: (1) itself and c_other belong to the same group, and (2) c_other belongs to the group with all controllers not in this controller’s group. Note that all other controllers are also assigned to the groups in both cases.
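The comparison carried out in this step, and the root selection of Eq. (2) from the Eq. (1) reports, can be sketched as follows. This is a minimal sketch: the helper names and the distance tables are our own, with hop counts mirroring the Fig. 2c discussion.

```python
def joins_my_group(c_other, my_group, alt_group, dist):
    """Decide whether c_other joins my_group: compare c_other's minimal
    hop distance to each candidate group; ties are broken by node ids."""
    d_mine = min(dist[(c_other, c)] for c in my_group)
    d_alt = min(dist[(c_other, c)] for c in alt_group)
    if d_mine != d_alt:
        return d_mine < d_alt
    return min(my_group) < min(alt_group)  # tie-break by smallest node id

def pick_root(reports, group):
    """Eq. (2): pick the node with minimal average distance to the group's
    controllers; reports maps node -> {controller: distance} as in Eq. (1)."""
    return min(reports, key=lambda n: sum(reports[n][c] for c in group) / len(group))

# Hop counts for the Fig. 2c situation: c3 is two hops from c2 and one
# hop from c4, so c3 should not join the group of c1 and c2.
dist = {("c3", "c1"): 3, ("c3", "c2"): 2, ("c3", "c4"): 1, ("c3", "c5"): 2}
assert not joins_my_group("c3", {"c1", "c2"}, {"c4", "c5"}, dist)

# Illustrative (invented) Eq. (1) reports for the group c2, c3, c4,
# where c3 itself has the smallest average distance and becomes the root.
reports = {"c3": {"c2": 2, "c3": 0, "c4": 1},
           "x":  {"c2": 3, "c3": 1, "c4": 2}}
assert pick_root(reports, ["c2", "c3", "c4"]) == "c3"
```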

Then the controller considers the minimal distance of c_other to the other members of its group and assigns c_other to the same group as itself if the minimal distance in this case is smaller. In Fig. 2c, c1 will consider the cases: (1) {c1, c2, c3} and (2) {c1, c2}, {c3, c4, c5}. Since the minimal distance for the first case is two hops and for the second case is one hop (c3 to c4), c3 does not belong to the same group as c1.

Once the group membership is determined, the controller calculates the position of the root node. The root node is the node with the minimal average distance to all controllers in the group. This can be trivially calculated from Eq. (1) by first calculating, for each node (each line in (1)), the average distance of this node to the controllers which belong to the group, and then selecting the node with the minimal value.

As the outcome of this phase, each controller decides on the group it belongs to and on the location of the root node, i.e., the following data:

group: {cl, cm, cn, . . .}, root node: node ni    (2)

3) Notification Phase: Once the controllers have decided on their membership and the location of the corresponding root nodes, each root node should be made aware of this and start running an instance of Izzy for the group with itself as the root. This phase also includes a synchronization mechanism which verifies that the decisions are indeed consistent.

Once each controller has decided on its group membership and the location of the corresponding root node as in Eq. (2), it sends a message with this data to both the root node and all controllers in the group. Collection trees are used to route this message. As a result, all controllers in the group exchange the data with each other, and the root node receives a notification from each controller in the group.

Since the controllers exchange information about their group memberships, each controller can check whether the group memberships are consistent and correct inconsistencies if necessary. We defined a simple set of priority rules which are omitted due to a lack of space. If a controller changes its decision, it repeats the previous step of this phase.

The selected root node waits until it receives the same data from each controller, i.e., each controller reports the same group membership and root node location. Thus, it waits until all controllers have made consistent decisions and starts executing Izzy afterwards.

4) Timings for the Collection Phase: In the previous subsection we assumed that it is known when the construction of each collection tree is completed. In a distributed asynchronous setting the completion cannot be trivially determined.

In Seedling, we are actually interested in when each controller has collected all relevant distances and can transition to the decision phase. We use the following simple scheme for this. Intermediate nodes in the collection tree report their distances multiple times – each time the distance to a new controller is learned or a known distance changes, the node notifies all controllers it knows. In static scenarios the construction of the collection trees will eventually stabilize and the controller will eventually stop receiving messages. Thus each controller tracks the average inter-arrival interval between these messages. After each message it sets a timeout to k · (average inter-arrival time). When the timer fires, the controller decides that the algorithm has stabilized and proceeds to the decision phase.

Note that although algorithms exist that detect when distributed Bellman-Ford terminates (see, e.g., [8]), we are unaware of an algorithm that works in the presence of link failures during algorithm execution. Since the latter is our intended goal, we opted for the timeout-based version.

B. Towards Support for Dynamic Scenarios

In dynamic settings both the network topology and the number and location of controllers change over time. In this case, instead of running in a single pass as described in the previous subsection, Seedling runs continuously and adapts both group memberships and the locations of tree roots. The dynamic version of Seedling uses the same three phases (slightly modified) as follows (see Fig. 3). The collection phase is always running. Seedling adapts the collection trees and monitors changes to distances, i.e., each controller continuously gathers distances (i.e., the data in Eq. (1)) from nodes. Depending on the type of change, as discussed below, controller nodes may decide to proceed to the decision phase. Once the new group memberships and locations of root nodes are calculated, Seedling proceeds to the notification phase. The only modification here is that, in addition to the new root, the old root should also be notified. Both old and new root nodes wait for notifications from all affected controllers, after which Izzy adapts its routing structure. Each controller node independently decides when to proceed to the decision phase. Synchronization occurs in the notification phase, where controllers exchange their new group memberships (see Eq. (2)).

Figure 3. An overview of how phases of Seedling interact in dynamic scenarios.

Changes to the location of a tree root include reconfiguring the corresponding instance of Izzy, which, in turn, requires an update of the forwarding tables of all nodes in the network. For this reason we want to avoid frequent changes of tree roots. When deciding whether to change the locations of tree roots, we distinguish between planned changes and failures. Planned changes, such as adding or moving a controller, are expected to persist and are applied immediately. Failures of links or nodes, including failures of controllers, may be transient. Here, we adapt tree root locations only on significant changes that persist for some time.
The threshold for the former should be derived from the value of N. For the latter we could use an exponential backoff for the value of the timeout discussed in Section IV-A4, e.g., by increasing the value of k. This will suppress frequent reconfigurations when the network is unstable.

Below we describe the specifics of handling different types of network dynamics.

1) Addition or Removal of Controllers: New controllers start by building their collection trees. When a controller is removed, its node should explicitly withdraw its collection tree. Note that since the removal is planned, the controller node can execute some kind of “shut-down” actions before terminating. These will cause all affected controllers to receive new distances, and the rest of Seedling can be executed as described above.

2) Controller Failures: On a failure of a controller, its collection tree should also be withdrawn. However, in this case, instead of an explicit withdrawal by the controller node, intermediate nodes should detect the controller node’s failure and remove its collection tree. This is not straightforward for two reasons. First, an individual node cannot distinguish between a failure of the controller node and a failure of a link to that node. Thus nodes have to cooperatively detect the failure. Second, failure of the root node of a tree is exactly the case where count-to-infinity² occurs in Bellman-Ford. Thus a solution similar to that of DUAL [9] or a path-vector algorithm is necessary for collection tree construction.

²To clear possible ambiguity, count-to-infinity here refers to the fact that on network partitions, Bellman-Ford does not terminate [9]. Failure of the root node of a tree is equivalent to a network partition between the root node and all other nodes.

3) Link Failures: Link failures affect Seedling in two ways. First, on link failures, distances in collection trees change. Since collection trees should employ a solution to count-to-infinity, collection trees do not contain loops [9]. Second, since the collection trees are used to route messages of Seedling between nodes, on link failures these routes may not be available and messages may not be delivered. Thus, Seedling should use an end-to-end retransmission mechanism for reporting distances in the collection phase and for all messages in the notification phase.

4) Node Failures: Generally, failures of nodes are equivalent to failures of all attached links, where each link failure is handled as described above. There are two special cases. The first special case is a failure of a controller node. We have already described this case.

The second special case is a failure of a selected root node in the notification phase. In this phase each selected root node waits for messages from all controllers in the corresponding group. Each such message is acknowledged. It can happen that the root node fails after receiving and acknowledging messages from some controllers in the group. However, once the tree root is selected, the instance of Izzy will run and a routing tree starts being constructed. Thus, in this case controllers can detect root node failures by the absence of Izzy’s messages. Controllers should use a timeout for this. Other controllers in the group will detect the root node failure when end-to-end retransmissions fail. If the root node has already begun executing Izzy, failures of the root node are handled by Izzy and not by Seedling.

5) Limitations: The outlined dynamic version of Seedling can be used in networks where, on average, the dynamics are relatively low. The dynamics of Seedling itself are limited primarily by the synchronization mechanism of the notification phase. The mechanism will only terminate if the network is stable for the period of time necessary for the controllers to synchronize their decisions with each other. Further, as already mentioned, the frequency of changes to root node locations is limited by the convergence time of Izzy.

C. Towards Estimating Complexity of Seedling

As Seedling is distributed, its time, message, and space complexities are of interest. The time complexity of Seedling is O(N). Collection trees are constructed using a Bellman-Ford approach. The time complexity for constructing a spanning tree of depth N is O(N) [8]. Once the spanning tree is constructed, it takes an additional O(N) for all messages with distances to reach their controllers. The decision phase is a local computation and requires no communication. The notification phase consists of exchanging messages over at most N hops and thus also has a complexity of O(N). A timeout for the collection tree construction (see Section IV-A4) could be modeled as an additional O(average link delay). The time complexity of Seedling after link failures should correspond to the time complexity of DUAL [9].

The space complexity for non-controller nodes is the space required to store state for each collection tree. The number of collection trees on each node is at most the number of controllers. This results in O(maximal node degree · number of controllers) space, where the first factor is the space complexity of Bellman-Ford. Each controller node stores the distances reported by all nodes in its collection tree. The number of these messages is at most (maximal node degree)^N.

Since in the collection phase we chose to exchange additional messages instead of using a message-optimal algorithm, we reserve estimating the message complexity as future work.

V. RELATED WORK

For routing on the control channel we consider the problem of calculating bi-directional paths for a communication pattern in which communication only occurs between a small subset of nodes and all other nodes. That is, for each pair of communication partners, either the source or the destination belongs to the aforementioned subset. We further consider a trade-off between the size of the forwarding tables and the (additive) path stretch with respect to this communication pattern in particular. To the best of our knowledge this exact problem statement has not been addressed.

The above-mentioned trade-off is studied by compact routing. Our problem is conceptually similar to landmark routing schemes (see [10] and references therein), where Seedling plays the role of a landmark selection algorithm.
existing landmark selection algorithms pursue different goals and cannot be applied to our communication pattern.

A similar problem is the selection of rendezvous points for multicast routing [11]. This problem can also be viewed as the construction of a minimum Steiner tree. A Steiner tree is a tree in a graph which spans a given subset of vertices; this is equivalent to building communication paths for a selected group of nodes. In our problem, only one communication end-point is selected.

Placing root nodes for a given location of controllers is similar to the problem of placing controllers for a given network topology. The variants of the facility location problem discussed in [12] can be applied to our problem as well. The facility location problem is, however, NP-hard, and existing distributed algorithms provide approximate solutions (see, e.g., [13] and references therein). We leave the comparison of root placements computed by Seedling with the placements achievable by these algorithms for future work.

VI. CONCLUSION AND OUTLOOK

In this paper we presented the distributed algorithm Seedling, which, together with the routing protocol Izzy [1], establishes communication paths on an SDN control channel for logically-centralized controller clusters. Seedling determines the location of tree roots for Izzy, while Izzy then establishes communication paths by constructing spanning trees from these roots. We described Seedling in static scenarios and outlined the changes necessary to support network dynamics under the assumption that, on average, the network dynamics are relatively low. The resulting solution can be employed in networks where one logically-centralized controller is feasible, such as datacenter networks or small ISPs. For example, Internet2 is powered by a single ONOS cluster [14].

In our future work we will address distributed or hierarchical controllers, e.g., [15]. In such deployments the network is divided into areas, and a single area is managed by a single controller or cluster. Seedling can be applied to calculate tree roots for a given location of controllers within each area. However, since each controller is limited to its area, Izzy should be modified to construct a tree that only spans the selected area. Additionally, communication paths between the controllers in different areas are necessary.

As for Seedling itself, we are also considering an alternative where the groups can overlap. For example, in Fig. 2b, c2 could belong to both groups, while two roots could be placed on both non-controller nodes. This would simplify the decision phase. We are currently evaluating whether this approach is more beneficial in terms of the resulting stretch for inter-controller communication.

Finally, we are working on extending the decision phase to allow each controller to have its own local value for N. In this case the control channel can be adapted to the demands of the controllers. For example, considering the trade-off between overhead and path stretch, we can optimize either for low overhead, if controllers configure the network proactively, or for low latency, if reactive applications are deployed.

REFERENCES

[1] P. Goltsman et al., “Towards a resilient in-band SDN control channel”, in 1. KuVS Fachgespräch ”Network Softwarization” – From Research to Application, 2017.
[2] A. Greenberg et al., “A clean slate 4D approach to network control and management”, ACM SIGCOMM Computer Communication Review, vol. 35, no. 5, 2005.
[3] P. Berde et al., “ONOS: Towards an open, distributed SDN OS”, in Proceedings of the Third Workshop on Hot Topics in Software Defined Networking, ACM, 2014.
[4] Y. Zhang et al., “When Raft meets SDN: How to elect a leader and reach consensus in an unruly network”, in Proceedings of the First Asia-Pacific Workshop on Networking, ACM, 2017.
[5] A. A. Dixit et al., “ElastiCon: An elastic distributed SDN controller”, in Proceedings of the Tenth ACM/IEEE Symposium on Architectures for Networking and Communications Systems, Los Angeles, California, USA: ACM, 2014.
[6] M. F. Bari et al., “Dynamic controller provisioning in software defined networks”, in Proceedings of the 9th International Conference on Network and Service Management (CNSM 2013), 2013.
[7] D. Krioukov, K. Fall, A. Brady, et al., “On compact routing for the Internet”, ACM SIGCOMM Computer Communication Review, vol. 37, no. 3, 2007.
[8] C. Boulinier et al., “Space efficient and time optimal distributed BFS tree construction”, Information Processing Letters, vol. 108, no. 5, 2008.
[9] J. J. Garcia-Luna-Aceves, “Loop-free routing using diffusing computations”, IEEE/ACM Transactions on Networking (TON), vol. 1, no. 1, 1993.
[10] P. Jakma, “A distributed, compact routing protocol for the Internet”, PhD thesis, University of Glasgow, 2016.
[11] B. Fenner et al., Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised), RFC 7761 (Internet Standard), RFC Editor, Mar. 2016.
[12] B. Heller, R. Sherwood, and N. McKeown, “The controller placement problem”, in Proceedings of the First Workshop on Hot Topics in Software Defined Networks, Helsinki, Finland: ACM, 2012.
[13] C. Frank and K. Römer, “Distributed facility location algorithms for flexible configuration of wireless sensor networks”, in International Conference on Distributed Computing in Sensor Systems, Springer, 2007.
[14] G. Lipscomb. (2015). Internet2 implements first large-scale deployment of ONOS in live network.
[15] M. Gerola et al., “ICONA: Inter cluster ONOS network application”, in 1st IEEE Conference on Network Softwarization (NetSoft), IEEE, 2015.
