Data Center Interconnect
-Rusdi Nasution
Data centers can be deployed in various environments, such as within a corporation, the
cloud, or across multiple global locations. Organizations can either build and manage their
own data centers or opt for services from specialized data center providers.
When an organization operates multiple data centers, it's essential to establish connections
between them. Such a connection is known as a Data Center Interconnect (DCI). A
DCI can operate at either Layer 2 (L2) or Layer 3 (L3). L2 DCI bridges Layer 2 traffic across the
transport network, while L3 DCI connects data centers through Layer 3, using IP routing.
There are numerous transport options available to enable these interconnections.
Before exploring the protocols and bridging methods used to relay traffic between data
centers, it's important to understand the transport networks over which this traffic passes.
Several types of networks can provide Data Center Interconnect (DCI) functionality:
- Point-to-point links: These are private lines or dark fiber connections exclusively reserved
for the organization, interconnecting multiple sites without sharing the network with other
entities or customers.
- IP transport: These are routed IP networks between sites, either customer-owned or
provided by a service provider, over which the DCI traffic is carried.
- MPLS interconnects: These networks use Multiprotocol Label Switching (MPLS) to bridge
two or more service domains. Like IP transport, MPLS interconnects can be either customer-
owned or managed by a service provider.
There are several methods to deploy EVPN/VXLAN across a DCI:
- Layer 3 VPN (L3VPN) over MPLS: This method leverages MPLS networks to interconnect
data centers by extending Layer 3 VPNs, allowing for routing across the WAN and enabling
seamless data center communication.
- EVPN stitching: This technique connects two or more EVPN domains, allowing
communication between them by stitching together separate EVPN instances across
different data centers.
- EVPN-VXLAN over an existing WAN: In this method, EVPN-VXLAN traffic is tunneled over a
wide-area network (WAN), utilizing the existing infrastructure to connect multiple data
centers while keeping Layer 2 and Layer 3 services consistent.
- Direct connect: This approach uses dedicated point-to-point links, such as dark fiber or
other high-speed connections, to directly connect data centers, enabling high-performance
and low-latency EVPN/VXLAN traffic forwarding.
Layer 3 DCI
In this scenario, since each data center has its own unique IP address range, there's no need
to exchange MAC addresses between them. Instead, border leaf nodes advertise IP prefixes
using EVPN Type-5 routes, which omit host MAC addresses. For an L3 DCI, hosts in one data
center must be on different subnets than those in another data center, necessitating IP
routing.
For example, if host1 is on the 10.1.1.0/24 network, to communicate with hosts on the
10.1.2.0/24 network in another data center, it sends packets to its default gateway, which is
the IRB interface on its local leaf node (L1). Unlike a traditional VXLAN L3 gateway design,
EVPN Type-5 routes allow IP routing between L3 domains without configuring VNIs for
routing on each L3 gateway.
In this process:
- L2 learns the MAC address of host2 and advertises it as an EVPN Type-2 route to the border
leaf (BL2).
- BL2 creates a VXLAN tunnel to L2 and advertises the IP prefix 10.1.2.0/24 as a Type-5 route
to BL1.
- BL1 installs the route, using BL2's advertised MAC address for the IRB interface as the
destination MAC when forwarding traffic via the VXLAN tunnel to BL2.
- BL1 then advertises the Type-5 route to its EBGP neighbor, L1, which also installs the route
in its VRF table.
When host1 sends traffic to host2, L1 performs IP routing in its VRF, encapsulates the packet
in VXLAN, and forwards it toward BL2 over the VXLAN tunnel. BL2 de-encapsulates the packet,
checks its L2 switching table, and sends it to host2 via the VXLAN tunnel between BL2 and L2.
L2 then delivers the packet to host2.
This approach allows efficient L3 communication between data centers using EVPN Type-5
routes.
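As a reference point, the device-level configuration behind EVPN Type-5 routing is a VRF with ip-prefix-routes enabled under the EVPN protocol. The following Junos sketch is illustrative only; the VRF name, VNI, route target, route distinguisher, and IRB unit are assumptions borrowed from the lab later in this document, not taken from a specific device here:
set routing-instances Finance instance-type vrf
set routing-instances Finance interface irb.100
set routing-instances Finance route-distinguisher 192.168.1.1:5300
set routing-instances Finance vrf-target target:5300:1
set routing-instances Finance protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances Finance protocols evpn ip-prefix-routes encapsulation vxlan
set routing-instances Finance protocols evpn ip-prefix-routes vni 5300
With ip-prefix-routes configured, the border leaf advertises the VRF's IP prefixes as Type-5 routes and resolves received Type-5 routes directly over VXLAN, without a per-VNI bridging configuration.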
Layer 2 DCI
When a Layer 2 stretch is required between data centers, border leaf nodes advertise EVPN
Type-2 routes. Each EVPN Type-2 route corresponds to a single MAC address within a data
center, which means a potentially large number of routes—sometimes in the thousands—are
advertised across the DCI link.
Because the Layer 2 stretch extends the same subnet across multiple data centers, VXLAN
tunnels must be established end-to-end between data centers and across the DCI transport
network. These VXLAN tunnels carry the MAC address information between the sites,
enabling seamless Layer 2 connectivity, as if all the devices were in the same local network.
This approach allows devices in different data centers to communicate over the same Layer 2
domain, which is crucial for applications that require Layer 2 adjacency, like certain clustering
technologies or virtual machines that need to be migrated between sites.
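For comparison, a stretched Layer 2 segment is typically built from a VLAN-based MAC-VRF whose VNI and route target match on both sides of the DCI, so that Type-2 routes import correctly at each site. A hedged Junos sketch follows; the instance name, VLAN, VNI, route target, and route distinguisher are illustrative assumptions drawn from the lab later in this document:
set routing-instances finance-www instance-type mac-vrf
set routing-instances finance-www service-type vlan-based
set routing-instances finance-www vtep-source-interface lo0.0
set routing-instances finance-www route-distinguisher 192.168.1.1:5301
set routing-instances finance-www vrf-target target:5301:1
set routing-instances finance-www protocols evpn encapsulation vxlan
set routing-instances finance-www vlans vlan100 vlan-id 100
set routing-instances finance-www vlans vlan100 vxlan vni 5301
Because both data centers import the same route target, MAC addresses learned locally and advertised as Type-2 routes are installed at the remote site, which is what makes the stretched subnet behave as one Layer 2 domain.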
Layer 3 DCI — Example (Signaling Plane)
In this example, we will enable a Layer 3 DCI between two data centers, my-pod13 and my-
pod14, focusing on the configuration for my-pod13. Keep in mind that the same configuration
steps must be applied to my-pod14 as well.
Unique Subnets: Each data center has its own distinct IP subnets, and traffic between them is
routed in the VXLAN overlay.
EVPN Gateways: The devices borderleaf1 and borderleaf2 will serve as the EVPN
gateways for this scenario.
Underlay Configuration:
To ensure proper routing of traffic between the data centers, the following key underlay
network components and configuration are necessary:
1. Loopback Reachability: The loopback address of the remote EVPN gateway (borderleaf2)
must be reachable from all devices within the local data center (e.g., leaf1, spine1, and
borderleaf1).
2. EBGP Peering Sessions:
Each leaf and border leaf node within a data center will have an EBGP peering session
with the spine node.
The border leaf node in each data center will also have an EBGP session with the
External Router, which handles routing with external networks (using a connectivity
template).
3. Redistributing Loopback Addresses:
To ensure proper forwarding across data centers, each node will redistribute its
loopback address as a BGP route (a configuration sketch follows this list).
This ensures that all devices in both data centers can forward traffic to any other
device's loopback address.
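A minimal Junos sketch of what one leaf's underlay side might look like, assuming an EBGP session to the spine and an export policy that advertises the loopback; the policy and group names, peer address, and AS number are illustrative assumptions:
set policy-options policy-statement EXPORT-LO0 term loopback from protocol direct
set policy-options policy-statement EXPORT-LO0 term loopback from interface lo0.0
set policy-options policy-statement EXPORT-LO0 term loopback then accept
set protocols bgp group underlay type external
set protocols bgp group underlay export EXPORT-LO0
set protocols bgp group underlay peer-as 64512
set protocols bgp group underlay neighbor 10.0.1.1
Applying the same kind of policy on every node is what populates the underlay routing table with all loopbacks, which is the reachability the overlay tunnels depend on.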
Overlay Configuration:
The overlay uses EVPN MP-EBGP for signaling and routing between the data centers.
This signaling configuration ensures that traffic can be routed across both data centers, with
proper segmentation and isolation using the Finance VRF, and enables communication
between hosts across different subnets in the data centers.
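In Junos terms, the overlay session is simply an EBGP session carrying the EVPN address family between loopbacks. A hedged sketch of what one side might contain; the group name, spine loopback, and AS number are assumptions, while the local address reuses the leaf1 loopback referenced elsewhere in this document:
set protocols bgp group overlay type external
set protocols bgp group overlay multihop no-nexthop-change
set protocols bgp group overlay local-address 192.168.1.0
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay peer-as 64512
set protocols bgp group overlay neighbor 192.168.0.1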
Once the signaling plane is established by the border leaf nodes, the data plane forwards
traffic between servers located in the two data centers, my-pod13 and my-pod14.
This approach allows for efficient Layer 3 forwarding across data centers while maintaining IP
segmentation and minimizing the complexity of maintaining MAC address tables across
multiple data centers. The EVPN Type-5 routes and VXLAN tunnels ensure smooth
communication in a distributed environment.
Layer 2 DCI — Example (Signaling and Data Plane)
In this section, we will explore how to enable a Layer 2 Data Center Interconnect (DCI)
between two data centers, my-pod13 and my-pod14. While the configuration details provided
will focus on my-pod13, similar steps will need to be executed for my-pod14. The example
illustrates that both data centers share a common subnet, which we will extend across the
two locations using a VXLAN overlay. In this setup, borderleaf1 and borderleaf2 will serve as
the EVPN gateways.
Signaling Plane
1. Underlay Configuration:
It's crucial that all nodes in the remote data center have reachable loopback
addresses from all devices in the local data center. For example, leaf1, spine1, and
borderleaf1 must be able to route packets to the loopback addresses of leaf2, spine2,
and borderleaf2.
To establish this connectivity, each leaf and border leaf node will set up an EBGP
peering session with the spine node within its respective data center.
Additionally, to facilitate external communication, each border leaf node will also
establish an EBGP session with the external router using a connectivity template (not
illustrated in this example).
To accurately populate the inet.0 underlay routing table with correct routing
information, every node will redistribute its loopback address as a BGP route. This
ensures that all devices in both data centers can route traffic to each device's
loopback address.
2. Overlay Configuration:
For the overlay, each node will utilize EVPN MP-EBGP. The peering sessions for the
overlay MP-EBGP will replicate the configurations of the underlay EBGP sessions.
Within each data center, every leaf and border leaf node will maintain an MP-EBGP
session with the spine node.
When deploying a blueprint in Apstra, the configuration for both underlay EBGP and
overlay MP-EBGP sessions is automatically enabled, simplifying the signaling process.
3. EVPN MP-EBGP Multihop Session:
The last critical component of the signaling plane is the EVPN MP-EBGP multihop
session between the two border leaf nodes.
Over this session, each border leaf node advertises the MAC addresses it has
learned as EVPN Type-2 routes to the remote EVPN gateway (the other border leaf).
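Conceptually, the DCI session is the same family evpn EBGP session, but multihop between the two border leaf loopbacks. A sketch of what borderleaf1's side might look like, using the loopbacks, AS numbers, and TTL that appear in the lab section later in this document; the group name is an assumption, and Apstra's rendered naming may differ:
set protocols bgp group dci-gateway type external
set protocols bgp group dci-gateway multihop ttl 15
set protocols bgp group dci-gateway local-address 192.168.1.1
set protocols bgp group dci-gateway family evpn signaling
set protocols bgp group dci-gateway peer-as 64556
set protocols bgp group dci-gateway neighbor 192.168.1.3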
Data Plane
1. Traffic Forwarding:
After the signaling plane is established and the MAC addresses are learned, the data
plane can efficiently forward traffic between the servers across the two data centers.
The VXLAN overlay allows seamless communication between devices in different
locations, utilizing the established Layer 2 connectivity.
2. MAC Address Learning:
Each border leaf node will maintain a table of learned MAC addresses and utilize this
information to forward traffic appropriately across the VXLAN tunnels.
By using this approach, organizations can effectively stretch Layer 2 networks across multiple
data centers while ensuring efficient routing and connectivity, leveraging VXLAN and EVPN
technologies. This enables applications to operate seamlessly across geographically
dispersed environments.
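To observe a stretched segment from the device side, the usual checks are the received EVPN routes, the EVPN database, and the Ethernet switching table. These are standard Junos operational commands and are shown here only as a hedged verification sketch:
show route table bgp.evpn.0
show evpn database
show ethernet-switching table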
=======================================================================
In this lab session, I am compiling lab documentation to illustrate the configuration of Data
Center Interconnect (DCI) and verify its setup, following Juniper Networks APSTRA Training.
In this lab, I will create a DCI between two blueprints to facilitate communication between
servers located in different data centers.
Note: Each lab is designed to function independently, allowing me to complete any lab
without having to complete the earlier ones. To support this, there are extra steps included at
the start of each lab. However, if I am working through the labs sequentially—lab1, lab2, lab3,
and so forth—I will notice that this lab builds on the previous one. In that scenario, I can
proceed directly to Part 3 of this lab.
In this section of the lab, I will apply a minimal configuration to the network devices in the
topology. Refer to the diagram labeled 'Lab Diagram: Network Management' for guidance.
Go back to the student desktop. Open a terminal window and SSH into the Apstra server (IP:
172.25.11.99) using the credentials admin/L@b123$L@b123$.
Run the sudo /var/tmp/apstra-backups/restore.py command using the credentials
admin/L@b123$L@b123$ and select lab 15. The Python script will take around five minutes
to complete. Once finished, I can move on to the next part of the lab.
Next, return to the Chrome browser. In Chrome, open the Juniper Apstra server's user
interface by going to https://172.25.11.99. Log in with the admin/admin credentials.
Go to the Blueprints section in the UI to check the overall status of the existing blueprints.
Wait until Apstra shows that no anomalies are detected.
In this section, I will examine the topology that has been prepared and check the IP
reachability between the two blueprints. Please refer to the diagram titled 'Lab Diagram:
Configuring a DCI (Part 5)' for assistance during this part of the lab.
Refer to the lab diagram for this section of the lab and respond to the questions.
Question: How many existing blueprints are being managed by Apstra? What are their names?
Answer: Apstra is currently managing two blueprints, which are named my-pod13 and my-pod14.
Question: What devices make up each blueprint?
Answer: Each blueprint has a server, a leaf node, a spine node, and a border leaf node.
Question: In order to establish a DCI between the blueprints (data centers), what must be
true about the IP reachability of the IP fabric nodes in each data center?
Answer: In order to establish a DCI, there must be IP reachability between the DCI gateways
in each data center. Usually, the gateways must be able to reach each other's loopback
addresses.
Return to the student desktop. Then, open a terminal window by navigating to Applications >
System Tools > Terminal.
In the terminal window, SSH into the external router at 172.25.11.31 using the credentials
lab/lab123. Then, run the command show bgp summary to check the BGP neighbor
relationships of the external router.
Question: Does the external router have any BGP relationships? If so, with which devices in
the topology?
Answer: The external router has established BGP sessions with both borderleaf1 and
borderleaf2 nodes.
Question: In Apstra, how are BGP sessions configured between a leaf and an external device?
Answer: To establish BGP sessions between a leaf and an external device, I need to create a
connectivity template and apply it to the interface facing the external device.
Run the command show route receive-protocol bgp 192.168.1.1 to examine the routes being
learned from borderleaf1.
Question: What types of networks is the external router learning from borderleaf1?
Answer: The external router is learning the loopback addresses from my-pod13.
Run the command show route receive-protocol bgp 192.168.1.3 to examine the routes being
learned from borderleaf2.
Question: What types of networks is the external router learning from borderleaf2?
Answer: The external router is learning the loopback addresses from my-pod14.
In the terminal window, SSH to leaf1 (172.25.11.3) using the lab/lab123 credentials. Attempt
to ping all the loopback addresses in both of the blueprints. Source the ping packets from the
loopback of leaf1.
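On a Junos device, a sourced ping takes the following form. The addresses shown here are a hedged example based on the loopback addressing referenced elsewhere in this document (borderleaf2 and leaf1); repeat the command for each loopback in both blueprints:
ping 192.168.1.3 source 192.168.1.0 count 5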
Question: Were the pings to all the devices in the two blueprints successful?
Answer: Yes, the pings should be successful. The leaf1 node has IP reachability to all the
loopback addresses of all devices in both blueprints.
In this section of the lab, I will set up a Finance routing zone in each blueprint to support the
finance subnet specific to each blueprint. Following that, I will create a Layer 3 DCI between
the two blueprints to facilitate communication between server1 and server2. Please consult
the diagram labeled 'Lab Diagram: Configuring a DCI (Part 6)' for assistance during this part
of the lab.
Question: What server(s) will take part in the finance-www virtual network?
Answer: The server1 device is the only server that belongs to the finance-www virtual network.
Question: What server(s) will take part in the finance-app virtual network?
Answer: The server2 device is the only server that belongs to the finance-app virtual network.
Question: What is the purpose of a routing zone in Apstra, and how many VRFs will be created?
Answer: In Apstra, a routing zone is utilized to facilitate routing between virtual networks. A
routing zone consists of a set of related VRFs that will be configured on the leaf nodes. In this
lab, I will create a routing zone named Finance in each blueprint. Consequently, Apstra will
enable a VRF called Finance on every leaf node, resulting in a total of four VRFs across the
two blueprints, allowing for traffic routing between the virtual networks.
Question: Between the two blueprints, which two devices do you think will be the EVPN
external gateways for the DCI?
Answer: There are a few options, but since the borderleaf nodes provide the gateway from the
blueprint to the outside world, it probably makes the most sense to make the borderleaf
nodes the EVPN external gateways for the DCI.
Go back to the my-pod13 blueprint. From there, navigate to Staged > Virtual > Routing Zones
and click on Create Routing Zone.
As illustrated in the lab diagram, it's crucial for Layer 3 DCIs to have consistent route target
communities and VNI values in the EVPN Type 5 routes advertised between blueprints. In
Apstra, I configure these essential settings within the routing zone. In the Create Routing
Zone window, I will name the VRF Finance, set the VNI to 5300, and keep the other fields at
their default settings. Click Create once completed.
Question: By default, Apstra automatically chooses a route target community for the routing
zone. What was the chosen community?
Answer: By default, Apstra utilizes the VNI value to generate the route target community.
Since I selected a VNI value of 5300, Apstra automatically assigned the route target
community as target:5300:1.
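On the devices, this choice ultimately appears as a vrf-target statement on the Finance routing instance, roughly as follows; this is a sketch, and Apstra's exact rendered configuration may differ:
set routing-instances Finance vrf-target target:5300:1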
Next, I will create the finance-www virtual network. Navigate to Staged > Virtual > Virtual
Networks and click on Create Virtual Network.
In the Create Virtual Network window, input the following parameters:
Type: VXLAN
Name: finance-www
Routing Zone: Finance
VNI(s): leave blank
VLAN ID (on Leafs): 100
Reserve across blueprint: checked
DHCP Service: Disabled
IPv4 Connectivity: Enabled
IPv4 Subnet: 10.30.42.0/24
Virtual Gateway IP: 10.30.42.1
Create Connectivity Templates for: Tagged
Assigned To: check all switches
Click Create when finished.
Question: Does the finance-www virtual network appear in the list of virtual networks?
Answer: Yes, the finance-www virtual network should have appeared in the list of virtual
networks.
Now that I have created the virtual network, I need to assign resources to it. Click the red
status indicator next to VNI Virtual Network IDs, and then click the Update Assignments
button to view the available resource pools.
Set the VNI Virtual Network IDs to evpn-vni and click the Save button.
Go to Staged > Virtual > Routing Zones and click the red status indicator next to Finance: Leaf
Loopback IPs. Then, click the Update Assignments button to view the available resource
pools.
Set the Finance: Leaf Loopback IPs to leaf-loopback and click the Save button.
Navigate to Staged > Connectivity Templates.
Question: Are there any new connectivity templates? If so, what is their state?
Answer: Yes, there is one new connectivity template that is in the Ready state (I have not yet
assigned it). The template was autogenerated by Juniper Apstra; however, I will assign it to
the appropriate interface.
Click the Assign icon next to the tagged VXLAN 'finance-www' connectivity template. In the
Assign Tagged VXLAN 'finance-www' window, check the box next to the server1-facing
interface and click Assign.
Question: What is the status of all the connectivity templates?
Answer: The status of the connectivity templates has changed from Ready (in yellow) to
Assigned (in green), where appropriate.
Up to this point, I have established the routing zone, virtual network, and connectivity
template to facilitate traffic forwarding to local destinations within the my-pod13 blueprint.
However, the objective is to create a DCI to enable traffic to be routed to remote destinations
in the my-pod14 blueprint. To create a DCI, it is essential to identify the AS number and IP
address of the remote EVPN gateway, which is borderleaf2 in the my-pod14 blueprint.
Go back to the my-pod14 blueprint. In this blueprint, navigate to Active > Physical and click
on the borderleaf2 node. Then, select the Properties tab.
Question: What is the AS number and IP address of the loopback interface? Does it match the
lab diagram?
Answer: The AS number is 64556 and the loopback IP address is 192.168.1.3 which matches
the lab diagram.
Go back to the my-pod13 blueprint. In this blueprint, navigate to Staged > DCI > Over the Top
or External Gateways. Then, select Create Over the Top or External Gateway.
In the Create Over the Top or External Gateway window, enter the following values:
Name: borderleaf2
IP Address: 192.168.1.3
ASN: 64556
TTL: 15
EVPN Route Types: Type-5 Only (l3-only mode)
Local Gateway Nodes: borderleaf1
Click Create
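Once both blueprints have been committed (the my-pod14 side is configured later in this part), the DCI session and the Type-5 exchange can be spot-checked from the borderleaf1 CLI with standard Junos operational commands. The VRF table name below assumes the routing instance is rendered as Finance, which may differ in the device configuration Apstra generates:
show bgp summary
show route receive-protocol bgp 192.168.1.3 table bgp.evpn.0
show route table Finance.inet.0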
In the blueprint, click the Uncommitted tab to see the changes that will be made in the
Logical Diff tab.
Question: When you decide to commit, what changes will be made to the network?
Answer: Upon committing, Juniper Apstra will add a Routing Zone, BGP peering for the EVPN
gateway, one connectivity template, and a virtual network to the blueprint.
Click Commit and enter the description Added virtual network and EVPN gateway. Then,
click Commit again to apply the changes to the Active blueprint and deploy them.
In the blueprint, select the Dashboard tab and observe how the gauges update in response to
the changes.
Question: Are there any anomalies?
Answer: Yes, there should be one anomaly. Apstra expects to see an established BGP session
between the local and remote borderleaf nodes, but the session is currently down. I will
address this in the upcoming steps of this part of the lab.
Go back to the my-pod14 blueprint. From there, navigate to Staged > Virtual > Routing Zones
and click on Create Routing Zone.
As illustrated in the lab diagram, for Layer 3 DCIs, it is crucial to have matching route target
communities and VNI values in the EVPN Type 5 routes advertised between blueprints. In
Apstra, these key settings are configured in the routing zone. In the Create Routing Zone
window, name the VRF Finance, set the VNI to 5300, and leave the other fields at their
default settings. Click Create when done.
Question: By default, Apstra automatically chooses a route target community for the routing
zone. What was the chosen community? Does it match the community that was configured in
the my-pod13 blueprint?
Answer: By default, Apstra uses the VNI value to generate the route target community. Since I
chose a VNI value of 5300, Apstra set the route target community to target:5300:1, matching
the community I configured earlier.
Now, I will create the finance-app virtual network. Navigate to Staged > Virtual > Virtual
Networks and click Create Virtual Network.
In the Create Virtual Network window, enter the following parameters:
Type: VXLAN
Name: finance-app
Routing Zone: Finance
VNI(s): leave blank
VLAN ID (on Leafs): 101
Reserve across blueprint: checked
DHCP Service: Disabled
IPv4 Connectivity: Enabled
IPv4 Subnet: 10.30.43.0/24
Virtual Gateway IP: 10.30.43.1
Create Connectivity Templates for: Tagged
Assigned To: check all switches
Click Create when finished.
Question: Does the finance-app virtual network appear in the list of virtual networks?
Answer: Yes, the finance-app virtual network should have appeared on the list of virtual
networks.
Now that I’ve created my virtual network, I need to assign resources to it. Click the red status
indicator next to VNI Virtual Network IDs, then click the Update assignments button to view
the available resource pools.
Set the VNI Virtual Network IDs to evpn-vni and click the Save button.
Go to Staged > Virtual > Routing Zones and click the red status indicator next to Finance: Leaf
Loopback IPs, then click the Update assignments button to view the available resource pools.
Set the Finance: Leaf Loopback IPs to leaf-loopback and click the Save button.
Navigate to Staged > Connectivity Templates.
Question: Are there any new connectivity templates? If so, what is their state?
Answer: Yes, there is a new connectivity template in the Ready state that I have not assigned
yet. The template was automatically generated by Juniper Apstra, but I need to assign it to
the correct interface.
Click the Assign icon next to the Tagged VxLAN 'finance-app' connectivity template.
In the Assign Tagged VxLAN 'finance-app' window, check the box next to the server2-facing
interface and click Assign.
Question: What is the status of all the connectivity templates?
Answer: The status of the connectivity templates has updated from Ready (yellow) to
Assigned (green), where applicable.
Up to this point, I have established the routing zone, virtual network, and connectivity
template to facilitate the forwarding of traffic to local destinations within the my-pod14
blueprint. However, our objective is to create a DCI to enable traffic to also reach remote
destinations in the my-pod13 blueprint. To create a DCI, it's essential for me to know the AS
number and IP address of the remote EVPN gateway (borderleaf1 in the my-pod13 blueprint).
Go back to the my-pod13 blueprint. In the my-pod13 blueprint, navigate to Active > Physical
and click on the borderleaf1 node. Then, select the Properties tab.
Question: What is the AS number and IP address of the loopback interface? Does this match
the lab diagram?
Answer: The AS number is 64553 and the loopback IP address is 192.168.1.1 which matches
the lab diagram.
Return to the my-pod14 blueprint. In the my-pod14 blueprint, go to Staged > DCI > Over the
Top or External Gateways and select Create Over the Top or External Gateway.
In the Create Over the Top or External Gateway window, enter the following values:
Name: borderleaf1
IP address: 192.168.1.1
ASN: 64553
TTL: 15
EVPN Route Types: Type-5 Only (L3-only mode)
Local Gateway Nodes: borderleaf2
Click Create
Question: When you decide to commit, what changes will be made to the network?
Answer: When I choose to commit, Juniper Apstra will add a Routing Zone, set up BGP peering
for the EVPN gateway, and add one connectivity template and a virtual network to the
blueprint.
Click Commit and enter the description Added virtual network and EVPN gateway. Then click
Commit again to apply the changes to the Active blueprint and deploy them.
In the blueprint, select the Dashboard tab and observe as the gauges reflect the changes.
Question: Did all the gauges eventually turn green?
Answer: Yes, all the gauges should show green. Congratulations! I have successfully created a
Layer 3 DCI between the two blueprints.
Go back to the student desktop. From the student desktop, open a terminal window and
establish an SSH session with the server2 device (172.25.11.22) using lab/lab123 as the
credentials. In the SSH session with the server2 device, try to ping server1 (10.30.42.101).
Return to the Chrome browser. From there, go to Active > Physical > Topology in the my-
pod14 blueprint and select leaf2. In the Device tab for leaf2, click on Execute CLI Command
and run the command show route nh-detail extensive table Finance 10.30.42.0/24 to see how
packets intended for server1 will be routed.
Question: How do packets destined to server1 get forwarded from leaf2? Does that match
how it is depicted in the lab diagram?
Answer: The leaf2 node will take a packet destined for server1 and encapsulate it in an
Ethernet frame using a destination MAC address of borderleaf1's IRB interface and then
encapsulate the new Ethernet frame in VXLAN encapsulation with a destination IP address of
borderleaf1's loopback interface address. This matches what is depicted in the lab diagram.
In this section of the lab, I will set up a Layer 2 DCI between the two blueprints to enable
communication between server1 and server2 over a shared subnet. Refer to the diagram
titled 'Lab Diagram: Configuring a DCI (Part 7)' for guidance during this part of the lab.
Close the Execute CLI Command window. In the my-pod14 blueprint, go to the Time Voyager
tab. Click on the Jump to the revision button for the revision labeled as 'added BGP session
with external router.' Then, click Rollback to confirm the action of reverting to that previous
state.
In the blueprint, select the Uncommitted tab to view the changes that are pending in the
Logical Diff tab.
Click Commit and enter the description Rollback to before L3 DCI, then click Commit to
commit changes to the active blueprint and deploy the changes.
In the blueprint, click the Dashboard tab and watch the gauges adjust to the changes.
Question: Did all the gauges eventually turn green?
Return to the my-pod13 blueprint. In this blueprint, go to the Time Voyager tab. Click the
Jump to the revision button for the revision labeled 'added BGP session with external-router.'
Then, click Rollback to confirm the action of reverting to that previous state.
In the blueprint, click the Uncommitted tab to see the changes that will be made in the
Logical Diff tab.
Click Commit and enter the description Rollback to before L3 DCI, then click Commit to
commit changes to the Active blueprint and deploy the changes.
In the blueprint, click the Dashboard tab and watch the gauges adjust to the changes.
Question: Did all the gauges eventually turn green?
The final steps of the lab have brought us back to a previous configuration. As a result, I now
have two blueprints where underlay routing is enabled, but all virtual networks and EVPN
gateways have been deleted. Please refer to the lab diagram for this section of the lab.
Question: What server(s) will take part in the finance-www virtual network?
Answer: The server1 and server2 devices belong to the finance-www virtual network.
Question: Will there be a need for routing to occur between the two servers?
Answer: Because the two servers belong to the same IP subnet, no routing needs to occur
between them.
Navigate to Staged > Virtual > Routing Zones and click Create Routing Zone.
In the Create Routing Zone window, designate the VRF as Finance, assign the VNI a value of
5300, and keep all other fields at their default settings. Once completed, click Create.
Question: By default, Apstra automatically chooses a route target community for the routing
zone. What was the chosen community?
Answer: By default, Apstra employs the VNI value to aid in creating the route target
community. Since I selected a VNI value of 5300, Apstra designated the route target
community as target:5300:1.
Now, I will set up the finance-www virtual network. In the my-pod13 blueprint, go to Staged >
Virtual > Virtual Networks and click on Create Virtual Network.
As outlined in the lab diagram, for Layer 2 DCIs, it is crucial to have matching route target
communities and VNI values in the EVPN Type 2 routes advertised between blueprints. In
Apstra, these essential settings are configured in the virtual network.
In the Create Virtual Network window, enter the following parameters:
Type: VXLAN
Name: finance-www
VNI(s): 5301
Question: Does the finance-www virtual network appear in the list of virtual networks?
Answer: Yes, the finance-www virtual network should have appeared in the list of virtual
networks.
Question: What is the VNI associated with the finance-www virtual network?
Answer: The VNI is 5301, which I specified when creating the virtual network. By default,
Apstra utilizes the VNI value to create the route target community, so Apstra assigned the
route target community as target:5301:1.
Navigate to Staged > Connectivity Templates.
Question: Are there any new connectivity templates? If so, what is their state?
Answer: Yes, there is one new connectivity template in the Ready state (I have not assigned it
yet). Juniper Apstra automatically generated the template, but I am responsible for assigning
it to the appropriate interface.
Click the Assign icon next to the Tagged VxLAN 'finance-www' connectivity template.
In the Assign Tagged VxLAN 'finance-www' window, select the checkbox corresponding to the
server1-facing interface and click Assign.
Question: What is the status of all the connectivity templates?
Answer: The status of the connectivity templates has changed from Ready (in yellow) to
Assigned (in green), where appropriate.
In the my-pod13 blueprint, go to Staged > DCI > Over the Top or External Gateways and
choose Create Over the Top or External Gateway.
In the Create Over the Top or External Gateway window, enter the following details:
Name: borderleaf2
IP address: 192.168.1.3
ASN: 64556
TTL: 15
EVPN Route Types: All Routes (l2+l3 mode)
Local Gateway Nodes: borderleaf1
Click Create
Navigate to Staged > Virtual > Routing Zones, and click the red status indicator next to
Finance: Leaf Loopback IPs. Then, click the Update assignments button to view the available
resource pools.
Set the Finance: Leaf Loopback IPs to leaf-loopback and click the Save button to apply the
changes.
In the blueprint, click the Uncommitted tab to see the changes that will be made in the
Logical Diff tab.
Question: When you decide to commit, what changes will be made to the network?
Answer: When I decide to commit, Juniper Apstra will add a Routing Zone, BGP peering for
the EVPN gateway, one connectivity template, and a virtual network to the blueprint.
Click Commit and enter the description "Added L2 virtual network and EVPN gateway." Then,
click Commit again to apply the changes to the Active blueprint and deploy the updates.
In the blueprint, click the Dashboard tab and watch the gauges adjust to the changes.
Question: Are there any anomalies?
Answer: Yes, there should be a single anomaly. Apstra is expecting to see an established BGP
session between the local and remote borderleaf nodes, but the session is currently down. I
will resolve this in the next steps of this part of the lab.
1. Navigate to the Routing Zones: In the my-pod14 blueprint, go to Staged > Virtual >
Routing Zones.
2. Create a New Routing Zone: Click on Create Routing Zone.
3. Configure the Routing Zone:
Name: Enter Finance.
VNI: Set it to 5300.
Default Values: Leave the remaining fields at their default values.
4. Finalize Creation: Click Create when you are finished.
Question: By default, Apstra automatically chooses a route target community for the routing
zone. What was the chosen community? Does it match the community that was configured in
the my-pod13 blueprint?
Answer: By default, Apstra utilizes the VNI value to generate the route target community.
Because I selected a VNI value of 5300, Apstra assigned the route target community as
target:5300:1, which corresponds with the community I configured previously.
Now, I'll create the finance-www virtual network. I'll navigate to Staged > Virtual > Virtual
Networks and click on Create Virtual Network.
In the Create Virtual Network window, I will enter the following parameters:
Type: VXLAN
Name: finance-www
Routing Zone: Finance
VNI(s): 5301
VLAN ID (on Leafs): 100
Reserve across blueprint: checked
DHCP Service: Disabled
IPv4 Connectivity: Disabled
Create Connectivity Templates for: Tagged
Assigned To: check all switches
Once I have filled in these details, I will click on Create when finished.
Question: Does the finance-www virtual network appear in the list of virtual networks?
Answer: Yes, the finance-www virtual network should have appeared on the list of virtual
networks.
Question: What is the VNI associated with the finance-www virtual network?
Answer: The VNI is 5301, which I specified when creating the virtual network. By default,
Apstra uses the VNI value to generate the route target community, so Apstra assigned the
route target community as target:5301:1.
Navigate to Staged > Connectivity Templates.
Question: Are there any new connectivity templates? If so, what is their state?
Answer: Yes, there is one new connectivity template that is currently in the Ready state (it
hasn't been assigned yet). Juniper Apstra automatically generated this template, but it's my
responsibility to assign it to the appropriate interface.
Click the Assign icon next to the Tagged VxLAN 'finance-www' connectivity template.
In the Assign Tagged VxLAN 'finance-www' window, select the checkbox associated with the
server2-facing interface and click Assign.
Question: What is the status of all the connectivity templates?
Answer: The status of the connectivity templates has changed from Ready (in yellow) to
Assigned (in green), where appropriate.
Navigate to Staged > DCI > Over the Top or External Gateways. Select Create Over the Top or
External Gateway.
In the Create Over the Top or External Gateway window, set the following values:
Name: borderleaf1
IP address: 192.168.1.1
ASN: 64553
TTL: 15
EVPN Route Types: All Routes (l2+l3 mode)
Local Gateway Nodes: borderleaf2
Click Create
In the my-pod14 blueprint, navigate to Staged > Virtual > Routing Zones and click the red
status indicator next to Finance: Leaf Loopback IPs, then click the Update assignments button
to view the available resource pools.
Set the Finance: Leaf Loopback IPs to leaf-loopback and click the Save button.
In the blueprint, click the Uncommitted tab to see the changes that will be made in the
Logical Diff tab.
Question: When you decide to commit, what changes will be made to the network?
Answer: When I decide to commit, Juniper Apstra will add a Routing Zone, BGP peering for
the EVPN gateway, one connectivity template, and a virtual network to the blueprint.
Click Commit, enter the description "Added L2 virtual network and EVPN gateway," and then
click Commit again to apply the changes to the Active blueprint and deploy them.
In the blueprint, click the Dashboard tab and watch the gauges adjust to the changes.
Question: Did all the gauges eventually turn green?
Answer: Yes, all the gauges should be green. Well done! I've successfully created a Layer 2 DCI
between the two blueprints.
In this step, I will modify the IP address and VLAN tag settings of server2's fabric-facing
interface to ensure it is in the same subnet as server1. Go back to the session already open to
server2. From this session, run the command ./apstra-v100-singlehomed-tagged.sh
(password: lab123) to reconfigure server2's fabric-facing interface.
Question: The last step of the script shows the new IP addressing of the interfaces of the
device. Does server2 have a vlan100 interface with the correct IP addressing (see the lab
diagram)?
From the Chrome browser, navigate to Active > Physical > Topology (in the my-pod14
blueprint) and click leaf2. In the Device tab for leaf2, click Execute CLI Command and issue
the show ethernet-switching table command to view how packets destined to server1 will be
forwarded.
Question: How do packets destined to server1 get forwarded from leaf2? Does that match
how it is depicted in the lab diagram?
Answer: The leaf2 node will take an Ethernet frame destined for server1 and encapsulate it in
VXLAN encapsulation with a destination IP address of leaf1's loopback interface address
(192.168.1.0). This matches what is depicted in the lab diagram.