
Juniper Networks
Education Services

Data Center Fabric
with EVPN and VXLAN

STUDENT GUIDE, Revision V18A

Engineering Simplicity

Education Services Courseware


Data Center Fabric with EVPN and VXLAN
V-18.a

Student Guide
Volume 1 of 2

Juniper Networks
Education Services

1133 Innovation Way


Sunnyvale, CA 94089 USA
408-745-2000
www.juniper.net

Course Number: EDU-JUN-ADCX


This document is produced by Juniper Networks, Inc.
This document or any part thereof may not be reproduced or transmitted in any form under penalty of law, without the prior written permission of Juniper Networks Education
Services.
Juniper Networks, Junos, Steel-Belted Radius, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The
Juniper Networks Logo, the Junos logo, and JunosE are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service
marks are the property of their respective owners.
Data Center Fabric with EVPN and VXLAN Student Guide, Revision V-18.a
Copyright © 2019 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 14.a - April 2016
Revision 17.a - June 2017
Revision V18.a - June 2019
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate for Junos OS Release 18.1R3-SX.
Juniper Networks assumes no responsibility for any inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special,
exemplary, incidental, or consequential damages resulting from any defect or omission in this document, even if advised of the possibility of such damages.

Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.

SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or a Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents

Chapter 1: Course Introduction ......................................................... 1-1

Chapter 2: Data Center Fundamentals Overview ........................................... 2-1
Data Center Challenges ................................................................. 2-3
Data Center Fabric Architectures ....................................................... 2-9

Chapter 3: IP Fabric ................................................................... 3-1
IP Fabric Overview ..................................................................... 3-3
IP Fabric Routing ...................................................................... 3-10
IP Fabric Scaling ...................................................................... 3-22
Configure an IP Fabric ................................................................. 3-27
Lab: IP Fabric ......................................................................... 3-53

Chapter 4: VXLAN ....................................................................... 4-1
Layer 2 Connectivity over a Layer 3 Network ............................................ 4-3
VXLAN Fundamentals ..................................................................... 4-13
VXLAN Gateways ......................................................................... 4-23

Chapter 5: EVPN VXLAN .................................................................. 5-1
VXLAN Management ....................................................................... 5-3
VXLAN with EVPN Control ................................................................ 5-8
EVPN Routing and Bridging .............................................................. 5-15

Chapter 6: Configuring VXLAN ........................................................... 6-1
Configuring Multicast Signaled VXLAN ................................................... 6-3
Lab: VXLAN ............................................................................. 6-39

Chapter 7: Basic Data Center Architectures ............................................. 7-1
Requirements Overview .................................................................. 7-3
Base Design ............................................................................ 7-6
Design Options and Modifications ....................................................... 7-17
Lab: EVPN-VXLAN L3-GW .................................................................. 7-26

Chapter 8: Data Center Interconnect .................................................... 8-1
DCI Overview ........................................................................... 8-3
DCI Options for a VXLAN Overlay ........................................................ 8-11
EVPN Type 5 Routes ..................................................................... 8-21
DCI Example ............................................................................ 8-28
Lab: Data Center Interconnect .......................................................... 8-56

Chapter 9: Advanced Data Center Architectures .......................................... 9-1
Requirements Overview .................................................................. 9-3
Base Design ............................................................................ 9-6

Chapter 10: EVPN Multicast ............................................................. 10-1
EVPN Multicast Routing ................................................................. 10-3
EVPN Multicast ......................................................................... 10-21

Chapter 11: Multicloud DC Overview ..................................................... 11-1
CEM Overview ........................................................................... 11-3
CEM Use Cases .......................................................................... 11-11

Acronym List ........................................................................... ACR-1
Course Overview

This five-day course is designed to provide in-depth instruction on IP fabric and Ethernet VPN controlled Virtual
Extensible LAN (EVPN-VXLAN) data center design and configuration. Additionally, the course covers other data center
concepts, including basic and advanced data center design options, Data Center Interconnect (DCI), EVPN multicast
enhancements, and an introduction to data center automation concepts. The course ends with a multisite data center
design lab. This content is based on Junos OS Releases 17.4R1 and 18.2R1-S3.

Course Level

Data Center Fabric with EVPN and VXLAN (ADCX) is an advanced-level course.

Intended Audience

The primary audiences for this course are the following:

  • Data Center Implementation Engineers; and

  • Data Center Design Engineers.

Prerequisites

The following are the prerequisites for this course:

  • Understanding of the OSI model;

  • Advanced routing knowledge: the Advanced Junos Enterprise Routing (AJER) course or equivalent knowledge;

  • Intermediate switching knowledge: the Junos Enterprise Switching Using Enhanced Layer 2 Software (JEX)
    course or equivalent knowledge; and

  • Intermediate to advanced Junos CLI experience.

Objectives

After successfully completing this course, you should be able to:

  • Describe and configure an IP fabric.

  • Describe and configure an EVPN-VXLAN data center.

  • Describe and configure Centrally Routed Bridging (CRB) EVPN-VXLAN.

  • Describe and configure Edge Routed Bridging (ERB) EVPN-VXLAN.

  • Describe basic and advanced data center design concepts.

  • Describe and configure Data Center Interconnect.

  • Describe enhancements to multicast functionality in an EVPN-VXLAN network.

  • Describe the role of multicloud data center controllers.


Course Agenda

Day 1
  Chapter 1: Course Introduction
  Chapter 2: Data Center Fundamentals Overview
  Chapter 3: IP Fabric
  Lab 1: IP Fabric
  Chapter 4: VXLAN Fundamentals

Day 2
  Chapter 5: EVPN Controlled VXLAN
  Chapter 6: Configuring EVPN Controlled VXLAN
  Lab 2: EVPN-VXLAN

Day 3
  Chapter 7: Basic Data Center Architectures
  Lab 3: EVPN-VXLAN Layer 3 Gateways
  Chapter 8: Data Center Interconnect

Day 4
  Lab 4: Data Center Interconnect
  Chapter 9: Advanced Data Center Architectures
  Chapter 10: EVPN Multicast
  Chapter 11: Introduction to Multicloud Data Center

Day 5
  Chapter 12: Comprehensive Lab
  Lab 5: Comprehensive Data Center Lab

Appendix A: Virtual Chassis Fabric
Appendix B: Virtual Chassis Fabric Management
Appendix C: Junos Fusion Data Center
Appendix D: Multi-Chassis LAG
Appendix E: Troubleshooting MC-LAG
Appendix F: Zero Touch Provisioning
Appendix G: In-Service Software Upgrade
Appendix H: Troubleshooting Basics
Appendix I: Data Center Devices


Document Conventions

CLI and GUI Text

Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user
interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from chapter
text according to the following table.

Style: Franklin Gothic
Description: Normal text.
Usage Example: Most of what you read in the Lab Guide and Student Guide.

Style: Courier New
Description: Console text (screen captures and non-command-related syntax) and GUI text elements (menu names and
text field entries).
Usage Example: commit complete
               Exiting configuration mode
               Select File > Open, and then click Configuration.conf in the Filename text box.

Input Text Versus Output Text

You will also frequently see cases where you must enter input text yourself. Often these instances will be shown in the
context of where you must enter them. We use bold style to distinguish text that is input versus text that is simply
displayed.

Style: Normal CLI / Normal GUI
Description: No distinguishing variant.
Usage Example (CLI): Physical interface: fxp0, Enabled
Usage Example (GUI): View configuration history by clicking Configuration > History.

Style: CLI Input / GUI Input
Description: Text that you must enter.
Usage Example (CLI): lab@SanJose> show route
Usage Example (GUI): Select File > Save, and type config.ini in the Filename field.

Undefined Syntax Variables

Finally, this course distinguishes syntax variables, where you must assign the value (undefined variables). Note that
these styles can be combined with the input style as well.

Style: CLI Undefined / GUI Undefined
Description: Text where the variable's value is at the user's discretion, or text where the variable's value as shown in the
lab guide might differ from the value the user must input according to the lab topology.
Usage Example (CLI): Type set policy policy-name.
                     ping 10.0.x.y
Usage Example (GUI): Select File > Save, and type filename in the Filename field.


Additional Information

Education Services Offerings

You can obtain information on the latest Education Services offerings, course dates, and class locations from the World
Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.

About This Publication

This course was developed and tested using the software release listed on the copyright page. Previous and later
versions of software might behave differently, so you should always consult the documentation and release notes for the
version of code you are running before reporting errors.
This document is written and maintained by the Juniper Networks Education Services development team. Please send
questions and suggestions for improvement to training@juniper.net.

Technical Publications

You can print technical manuals and release notes directly from the Internet in a variety of formats:

  • Go to http://www.juniper.net/techpubs/.

  • Locate the specific software or hardware release and title you need, and choose the format in which you
    want to view or print the document.

Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.

Juniper Networks Support

For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (outside the United States).


Juniper Networks
Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 1: Course Introduction

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Get to know one another
• Identify the objectives, prerequisites, facilities, and materials used
during this course
• Identify additional Education Services courses at Juniper Networks
• Describe the Juniper Networks Certification Program

We Will Discuss:

  • Objectives and course content information;

  • Additional Juniper Networks, Inc. courses; and

  • The Juniper Networks Certification Program.


Data Center Fabric with EVPN and VXLAN

Introductions

■ Before we get started ...


• What is your name?
• Where do you work?
• What is your primary role in your organization?
• What kind of network experience do you have?
• Are you certified on Juniper Networks?
• What is the most important thing for you to learn
in this training session?

Introductions
The slide asks several questions for you to answer during class introductions.


Data Center Fabric with EVPN and VXLAN

Prerequisites

■ The prerequisites for this course are the following:


• Understanding of the OSI model
• Advanced routing knowledge; the Advanced Junos Enterprise Routing (AJER)
course or equivalent knowledge strongly recommended
• Intermediate switching knowledge; the Junos Enterprise Switching Using
Enhanced Layer 2 Software (JEX) or equivalent knowledge
• Intermediate to advanced Junos CLI experience

Prerequisites
The slide lists the prerequisites for this course.


Data Center Fabric with EVPN and VXLAN

Course Contents (1 of 2)
■ Contents:
• Chapter 1: Course Introduction
• Chapter 2: Data Center Fundamentals Overview
• Chapter 3: IP Fabrics
• Chapter 4: VXLAN Fundamentals
• Chapter 5: EVPN Controlled VXLAN
• Chapter 6: Configuring EVPN Controlled VXLAN
• Chapter 7: Basic Data Center Architectures
• Chapter 8: Data Center Interconnect
• Chapter 9: Advanced Data Center Architectures
• Chapter 10: EVPN Multicast
• Chapter 11: Introduction to Multicloud Data Center

Course Contents, Part 1


The slide lists the first topics we discuss in this course.



Data Center Fabric with EVPN and VXLAN

Course Contents (2 of 2)

■ Contents (contd.):
• Chapter 12: Comprehensive Lab
• Appendix A: Virtual Chassis Fabric
• Appendix B: Virtual Chassis Fabric Management
• Appendix C: Junos Fusion Data Center
• Appendix D: Multi-Chassis LAG
• Appendix E: Troubleshooting MC-LAG
• Appendix F: Zero Touch Provisioning
• Appendix G: In-Service Software Upgrade
• Appendix H: Troubleshooting Basics
• Appendix I: Data Center Devices

Course Contents, Part 2

This slide discusses the remaining topics we discuss in this course.


Data Center Fabric with EVPN and VXLAN

Course Administration

■ The basics:
• Sign-in sheet
• Schedule
• Class times
• Breaks
• Lunch
• Break and restroom facilities
• Fire and safety procedures
• Communications
• Telephones and wireless devices
• Internet access

General Course Administration

The slide documents general aspects of classroom administration.


Data Center Fabric with EVPN and VXLAN

Education Materials
• Available materials for classroom-based
and instructor-led online classes:
• Lecture material
• Lab guide
• Lab equipment
• Self-paced online courses also available
• http://www.juniper.net/courses


Training and Study Materials


The slide describes Education Services materials that are available for reference both in the classroom and online.



Data Center Fabric with EVPN and VXLAN

Additional Resources

■ For those who want more:


• Juniper Networks Technical Assistance Center (JTAC)
• http://www.juniper.net/support/requesting-support.html
• Juniper Networks books
• http://www.juniper.net/books
• Hardware and software technical documentation
• Online: http://www.juniper.net/techpubs
• Portable libraries: http://www.juniper.net/techpubs/resources
• Certification resources
• http://www.juniper.net/certification

Additional Resources
The slide provides links to additional resources available to assist you in the installation, configuration, and operation of
Juniper Networks products.


Data Center Fabric with EVPN and VXLAN

Satisfaction Feedback

(Slide graphic: class feedback survey form)

• To receive your certificate, you must complete the survey


• Either you will receive a survey to complete at the end of class, or we will e-mail it to you within two weeks
• Completed surveys help us serve you better!

Satisfaction Feedback
Juniper Networks uses an electronic survey system to collect and analyze your comments and feedback. Depending on the
class you are taking, please complete the survey at the end of the class, or be sure to look for an e-mail about two weeks
from class completion that directs you to complete an online survey form. (Be sure to provide us with your current e-mail
address.)

Submitting your feedback entitles you to a certificate of class completion. We thank you in advance for taking the time to
help us improve our educational offerings.


Data Center Fabric with EVPN and VXLAN

Juniper Networks Junos-Based Curriculum and Certification

(Slide graphic: certification tracks and the courses that support them, from associate through expert level, for Service
Provider Routing & Switching, Enterprise Routing & Switching, Data Center, and Junos Security. The Data Center track
includes Advanced Data Center Switching (ADCX) for JNCIP-DC and a JNCIE-DC Self-Study Bundle. Information current as of
March 2019; course and exam information is subject to change; refer to www.juniper.net/training for the most current
information.)

Juniper Networks Education Services Curriculum

Juniper Networks Education Services can help ensure that you have the knowledge and skills to deploy and maintain
cost-effective, high-performance networks for both enterprise and service provider environments. We have expert training
staff with deep technical and industry knowledge, providing you with instructor-led hands-on courses in the classroom and
online, as well as convenient, self-paced eLearning courses. In addition to the courses shown on the slide, Education
Services offers training in automation, E-Series, firewall/VPN, IDP, network design, QFabric, support, and wireless LAN.

Courses
Juniper Networks courses are available in the following formats:

  • Classroom-based instructor-led technical courses;

  • Online instructor-led technical courses;

  • Self-paced on-demand training with labs;

  • Hardware installation eLearning courses as well as technical eLearning courses;

  • Learning Bytes: short, topic-specific, video-based lessons covering Juniper products and technologies.

Find the latest Education Services offerings covering a wide range of platforms at www.juniper.net/training.


Data Center Fabric with EVPN and VXLAN

Juniper Networks Certification Program

• Why earn a Juniper Networks certification?


• Juniper Networks certification makes you stand out
• Demonstrate your solid understanding of networking technologies
• Distinguish yourself and grow your career
• Capitalize on the promise of the New Network
• Develop and deploy the services you need
• Lead the way and increase your value
• Unique benefits for certified individuals

Juniper Networks Certification Program


A Juniper Networks certification is the benchmark of skills and competence on Juniper Networks technologies.



Data Center Fabric with EVPN and VXLAN

Juniper Networks Certification Program Framework

(Slide graphic: the JNCP program framework, showing Associate, Specialist, Professional, and Expert certification levels
across the Service Provider Routing and Switching, Enterprise Routing and Switching, Data Center, Junos Security, Cloud,
Automation and DevOps, and Design tracks. Information as of March 2019; refer to www.juniper.net/certification for the
most current information.)

Juniper Networks Certification Program Overview

The Juniper Networks Certification Program (JNCP) consists of multitiered tracks that enable participants to demonstrate
competence with Juniper Networks technology through a combination of written proficiency exams and hands-on
configuration and troubleshooting exams. Successful candidates demonstrate a thorough understanding of Internet and
security technologies and Juniper Networks platform configuration and troubleshooting skills.

The JNCP offers the following features:

  • Multiple tracks;

  • Multiple certification levels;

  • Written proficiency exams; and

  • Hands-on configuration and troubleshooting exams.

Each JNCP track has one to four certification levels: Associate-level, Specialist-level, Professional-level, and Expert-level. The
Associate-level, Specialist-level, and Professional-level exams are computer-based exams composed of multiple choice
questions administered at Pearson VUE testing centers worldwide.

Expert-level exams are composed of hands-on lab exercises administered at select Juniper Networks testing centers. Please
visit the JNCP website at http://www.juniper.net/certification for detailed exam information, exam pricing, and exam
registration.


Data Center Fabric with EVPN and VXLAN

Certification Preparation
• Training and study resources: • Community:
• Juniper Networks Certification • J-Net:
Program website: http ://forums .juniper. net/t5/
www.juniper.net/certification Training-Certification-and/
• Education Services training bd-p/Training_and_Certification
classes: • Twitter: @JuniperCertify
www.juniper.net/training
• Juniper Networks documentation
and white papers:
www.juniper.net/documentation


Preparing and Studying


The slide lists some options for those interested in preparing for Juniper Networks certification.



Data Center Fabric with EVPN and VXLAN

Junos Genius: Certification Preparation App


• Prepare for certification
• Access e-learning videos, practice tests, and more
• Any time, any location, any device
• Ongoing training opportunities as new content is regularly added
• Prepare for associate, specialist, and professional-level JNCP certification
• Purchase on-demand training courses
• View assets offline (app only; JNCP practice tests not included)

www.juniper.net/junosgenius

(Slide graphic: Junos Genius app screenshots highlighting 65-item practice tests to prepare you for Juniper Networks
certifications, Learning Bytes providing instruction on Juniper technologies, and Hardware Overview and Deployment
courses.)

Junos Genius
The Junos Genius mobile learning platform (www.junosgenius.net) helps you learn Juniper technologies and prepare for
Juniper certification exams on your schedule. An app for iOS and Android devices, along with laptops and desktops, Junos
Genius provides certification preparation resources, practice exams, and e-learning courses developed by experts in Juniper
technology. Courses cover automation, routing, switching, security, and more.


Data Center Fabric with EVPN and VXLAN

Find Us Online

• J-Net: http://www.juniper.net/jnet
• Facebook: http://www.juniper.net/facebook
• YouTube: http://www.juniper.net/youtube
• Twitter: http://www.juniper.net/twitter

Find Us Online
The slide lists some online resources to learn and share information about Juniper Networks.


Data Center Fabric with EVPN and VXLAN

Questions

Any Questions?
If you have any questions or concerns about the class you are attending, we suggest that you voice them now so that your
instructor can best address your needs during class.


Juniper Networks
Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 2: Data Center Fundamentals Overview

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Explain the benefits and challenges of the traditional multitier architecture
• Describe the new networking requirements in a data center
• Describe the various data center fabric architectures

We Will Discuss:
  • The benefits and challenges of the traditional multitier architecture;

  • The networking requirements that are driving a change in data center design; and

  • The various data center fabric architectures.


Data Center Fabric with EVPN and VXLAN

Agenda: Data Center Fundamentals Overview

➔ Traditional Multitier Architecture Challenges


■ Next Generation Data Center Fabrics

Data Center Challenges

The slide lists the topics we will discuss. We will discuss the highlighted topic first.


Data Center Fabric with EVPN and VXLAN

Hierarchical Design

■ Legacy data center networks are often hierarchical and can consist of
access, aggregation, and core layers
• Benefits of a hierarchical network design include:
• Modularity - facilitates change
• Function-to-layer mapping - isolates faults

(Diagram: a WAN edge device at the top of the hierarchy, with core layer, distribution layer, and access layer switches below.)

Multiple Tiers
Legacy data centers are often hierarchical and consist of multiple layers. The diagram in the example illustrates the typical
layers, which include access, distribution (sometimes referred to as aggregation), and core. Each of these layers performs
unique responsibilities.

Hierarchical networks are designed in a modular fashion. This inherent modularity facilitates change and makes this design
option quite scalable. When working with a hierarchical network, the individual elements can be replicated as the network
grows. The cost and complexity of network changes are generally confined to a specific portion (or layer) of the network
rather than to the entire network.

Because functions are mapped to individual layers, faults relating to a specific function can be isolated to that function's
corresponding layer. The ability to isolate faults to a specific layer can greatly simplify troubleshooting efforts.


Data Center Fabric with EVPN and VXLAN

Functions of Tiers

• Tiers are defined to aid successful network design and to represent


functionality found within a network
WAN Edge Device

  Core tier switches relay packets between aggregation switches and function as the gateway to the WAN edge device.

  Distribution tier switches connect access switches and often provide inter-VLAN routing and policy-based connectivity.

  Access tier switches facilitate end-user and device access and enforce access policy.

Functions of Layers
The individual layers usually represent specific functions found within a network. It is often mistakenly thought that the
access, distribution (or aggregation), and core layers must exist in clear and distinct physical devices, but this is not a
requirement, nor does it make sense in some cases.

The example highlights the access, aggregation, and core layers and provides a brief description of the functions commonly
implemented in those layers. If CoS is used in a network, it should be incorporated consistently in all three layers.


Data Center Fabric with EVPN and VXLAN

Benefits in the Data Center

• Using a traditional hierarchical network implementation in the data


center can include a number of benefits:
• Interoperability with multiple vendors
WAN Edge Device

• Flexible switching platform support


Benefits of Using Hierarchy

Data centers built utilizing a hierarchical implementation can bring some flexibility to designers:

  • Since using a hierarchical implementation does not require the use of proprietary features or protocols, a
    multitier topology can be constructed using equipment from multiple vendors.

  • A multitier implementation allows flexible placement of a variety of switching platforms. The simplicity of the
    protocols used does not require specific Junos versions or platform positioning.


Data Center Fabric with EVPN and VXLAN

Challenges in the Data Center

• Using a traditional hierarchical network implementation in the data


center can include a number of challenges:
  • Requires loop prevention mechanisms
  • Inefficient resource usage
  • Limited scalability
  • Increased latency

(Diagram callouts: not all data paths are used; all devices are managed individually; data paths often include multiple hops.)

Challenges of Using Hierarchy

Data centers built more than a few years ago face one or more of the following challenges:

  • The legacy multitier switching architecture cannot provide today's applications and users with predictable
    latency and uniform bandwidth. This problem is made worse when virtualization is introduced, where the
    performance of virtual machines (VMs) depends on the physical location of the servers hosting those VMs.

  • The management of an ever growing data center is becoming more and more taxing, administratively speaking.
    While the north-to-south boundaries have been fixed for years, the east-to-west boundaries have not stopped
    growing. This growth of the compute, storage, and infrastructure requires a new management approach.

  • The power consumed by networking gear represents a significant proportion of the overall power consumed in
    the data center. This challenge is particularly important today, when escalating energy costs are putting
    additional pressure on budgets.

  • The increasing performance and densities of modern CPUs have led to an increase in network traffic. The
    network is often not equipped to deal with the large bandwidth demands and increased number of media
    access control (MAC) addresses and IP addresses on each network port.

  • Separate networks for Ethernet data and storage traffic must be maintained, adding to the training and
    management budget. Siloed Layer 2 domains increase the overall costs of the data center environment. In
    addition, outages related to the legacy behavior of the Spanning Tree Protocol (STP), which is used to support
    these legacy environments, often result in lost revenue and unhappy customers.

Given these challenges, along with others, data center operators are seeking solutions.


Data Center Fabric with EVPN and VXLAN

Inefficient Resource Usage

• Inefficient resource utilization


• Many links not used unless a failure occurs
• Potential bandwidth not being utilized

(Diagram: a collapsed multitier physical topology showing unused link bandwidth, unused uplink bandwidth, slower Layer 2
convergence, active-standby server uplinks, and unused links at the Layer 2 access tier.)

Resource Utilization
In the multitier topology displayed in the example, you can see that almost half the links are not utilized. In this example you
would also need to be running some type of spanning tree protocol (STP) to avoid loops, which would introduce a delay in
your network convergence as well as introduce significant STP control traffic taking up valuable bandwidth.

This topology is relatively simple but allows us to visualize the lack of resource utilization. Imagine a data center with a
hundred racks of servers with a hundred top-of-rack access switches. The access switches all aggregate up to the
core/distribution switches, including redundant connections. In this much larger and complicated network, you would have
thousands of physical cable connections that are not being utilized. Now imagine these connections are fiber. In addition to
the unused cables, you would also have two transceivers per connection that are not being used. Because of the inefficient
use of physical components, there is a significant amount of usable bandwidth that sits idle, and a significant investment in
device components that sits idle until a failure occurs.


Data Center Fabric with EVPN and VXLAN

Agenda: Data Center Fundamentals Overview

➔ Next Generation Data Center Architectures

Data Center Fabric Architectures

The slide highlights the topic we discuss next.


Data Center Fabric with EVPN and VXLAN

Applications Driving Change

■ Modern data center topologies need to be flatter, simpler, and more


flexible

(Slide graphic: application trends such as IaaS, SaaS, PaaS, and BMaaS, with their attributes and requirements)

  Attributes: increased machine-to-machine and East-West traffic
  Requirements: flatter topology; high performance and consistency

  Attributes: virtualized with bare metal; introduction of network overlays
  Requirements: physical-to-virtual (P2V) integration; overlay visualization and management

  Attributes: scale-out; on-demand
  Requirements: multitenancy; simple to operate, easy to scale

Applications Are Driving Change

Data centers must be flexible and change as their users' needs change. This means that today's data centers must evolve
and become flatter, simpler, and more flexible to keep up with constantly increasing end-user demands. Understanding why
these changes are being implemented is important when trying to understand the needs of the customer. A few of the trends
driving this change include:

  • Application flows: More east-west traffic communication is happening in data centers. With today's
    applications, a single user request can generate a lot of traffic between devices in a single data center,
    triggering a barrage of additional requests to other devices. The "go here and get this, then go here and get
    that" behavior of many applications happens on such a large scale today that it is driving data centers to
    become flatter and to provide higher performance with consistency.

  • Network virtualization: This means overlay networks; for example, NSX and Contrail. Virtualization is being
    implemented in today's data centers and will continue to gain popularity in the future. Some customers might
    not currently use virtualization in their data center, but it could definitely play a role in the design for those
    customers that are forward looking and eventually want to incorporate some level of virtualization.

  • Everything as a service: To be cost effective, a data center that offers hosting services must be easy to scale out
    and scale back as demands change. The data center should be very agile; it should be easy to deploy new
    services quickly.


Data Center Fabric with EVPN and VXLAN

Data Center Fabric Architectures


(Slide graphic: data center fabric architecture options, ranging from Business Critical IT and Private Cloud to SaaS and Web
Services: multitier with MC-LAG, Ethernet fabrics such as VCF, Junos Fusion, and QFabric, and IP fabric. Junos is one common
operating system for all fabrics, running on QFX5000 (51xx/52xx) and QFX10000 Series platforms.)

Data Center Fabric Architectures

The graphic shown is designed to serve as a quick Juniper Networks data center architecture guide based strictly on the
access layer devices needed. The scaling numbers provided are calculated based on uplink ports on a per-access device
basis.


Data Center Fabric with EVPN and VXLAN

Multitier with MC-LAG


• Multitier using MC-LAG
• Flexible deployment scenarios
• Open choice of technologies and protocols
• Flexible platform choices
• MC-LAG can allow load sharing across member links
• MC-LAG eliminates the need for STP on member links
■ Complexity
• Coordinating configurations
• Multi-vendor incompatibilities


Multitier Using MC-LAG

This combination is a deployment option if the data center requires a standard multitier architecture. Multichassis link
aggregation groups (MC-LAGs) are very useful in a data center when deployed at the access layer to allow redundant
connections to your servers, as well as dual control planes. In addition to the access layer, MC-LAGs are also commonly
deployed at the core layer. When MC-LAG is deployed in an active/active fashion, both links between the attached device and
the MC-LAG peers are active and available for forwarding traffic. Using MC-LAG eliminates the need to run STP on member
links and, depending on the design, can eliminate the need for STP altogether. When deploying MC-LAG in this scenario,
MC-LAG capabilities are not required on the dual-homed device. To the dual-homed device, the connection to the MC-LAG
peers appears to be a simple LAG connection (from the server side in the example).

A drawback to using MC-LAG in a data center environment is the sometimes proprietary implementation between vendors,
which can introduce incompatibilities in a multi-vendor environment. Additionally, in some data center environments,
MC-LAG configuration can become complex and difficult to troubleshoot.


Data Center Fabric with EVPN and VXLAN

Virtual Chassis Fabric


• Virtual Chassis Fabric
• Juniper Networks proprietary fabric
• Single point of in-band management
• Spine and leaf architecture
• Scales up to 20 devices (4 spine nodes and 16 leaf nodes)


Virtual Chassis Fabric

The Juniper Networks Virtual Chassis Fabric (VCF) provides a low-latency, high-performance fabric architecture that can be
managed as a single device. VCF is an evolution of the Virtual Chassis feature, which enables you to interconnect multiple
devices into a single logical device, inside of a fabric architecture. A VCF is constructed using a spine-and-leaf architecture. In
the spine-and-leaf architecture, each spine device is interconnected to each leaf device. A VCF supports up to 20 total
devices, including up to four devices being used as the spine.


Data Center Fabric with EVPN and VXLAN

Junos Fusion
• Junos Fusion
• Based on the IEEE 802.1 BR Standard
• Single point of management
• Spine and leaf architecture
• Also referred to as aggregation and satellite devices


Junos Fusion
Junos Fusion is a Juniper Networks Ethernet fabric architecture designed to provide a bridge from legacy networks to
software-defined cloud networks. With Junos Fusion, service providers and enterprises can reduce network complexity and
operational costs by collapsing underlying network elements into a single, logical point of management. The Junos Fusion
architecture consists of two major components: aggregation devices and satellite devices. With this structure, it can also be
classified as a spine and leaf architecture. These components work together as a single switching system, flattening the
network to a single tier without compromising resiliency. Data center operators can build individual Junos Fusion pods
comprised of a pair of aggregation devices and a set of satellite devices. Each pod is a collection of aggregation and satellite
devices that are managed as a single device. Pods can be small (for example, a pair of aggregation devices and a handful of
satellites) or large, with up to 64 satellite devices, based on the needs of the data center operator.


Data Center Fabric with EVPN and VXLAN

IP Fabric
• IP Fabric
• Flexible deployment scenarios
• Open choice of technologies and protocols
• Multi-vendor interoperability
• Highly scalable
• Strictly Layer 3

IP Fabric
An IP fabric is one of the most flexible and scalable data center solutions available. Because an IP fabric operates strictly
using Layer 3, there are no proprietary features or protocols being used, so this solution works very well with data centers
that must accommodate multiple vendors. One of the most complicated tasks in building an IP fabric is assigning all of the
details like IP addresses, BGP AS numbers, routing policy, loopback address assignments, and many other implementation
details.


Data Center Fabric with EVPN and VXLAN

IP-Based Data Center


• Increasingly, there is less need for data center infrastructure to support a
Layer 2 extension between racks

• Mode 1 Applications
• Require a Layer 2 connection (e.g., a legacy database)
• Mode 2 (native IP) applications
• Do not require Layer 2 connectivity
• Overlay networking (tunneling of Layer 2 frames)
• Supports hybrid of Mode 1 and Mode 2 application types
• VXLAN (Supported by many vendors including VMware)
• MPLS

IP-Based Data Centers

Next generation data centers have different requirements than the traditional data center. One major requirement in a next
generation data center is that traffic is load balanced over the multiple paths between racks in a data center. Also, a
requirement that is becoming less and less necessary is the ability of the underlying switch fabric to carry native Ethernet
frames between VMs/servers in different racks. Some of the major reasons for this shift include the following:

  1. IP-only data: Many data centers simply need IP connectivity between racks of equipment. There is less and less
     need for the stretching of Ethernet networks over the fabric. For example, one popular compute and storage
     methodology is Apache's Hadoop. Hadoop allows for a large set of data (for example, a single terabit-sized file)
     to be stored in chunks across many servers in a data center. Hadoop also allows for the stored chunks of data
     to be processed in parallel by the same servers they are stored upon. The connectivity between the possibly
     hundreds of servers needs only to be IP-based.

  2. Overlay networking: Overlay networking allows for Layer 2 connectivity between racks; however, instead of
     Layer 2 frames being transferred natively over the fabric, they are tunneled using a different outer
     encapsulation. Virtual Extensible Local Area Network (VXLAN), Multiprotocol Label Switching (MPLS), and
     Generic Routing Encapsulation (GRE) are some of the common tunneling protocols used to transport Layer 2
     frames over the fabric of a data center. One of the benefits of overlay networking is that when there is a change
     to Layer 2 connectivity between VMs/servers (the overlay network), the underlying fabric (underlay network) can
     remain relatively untouched and unaware of the changes occurring in the overlay network.


Data Center Fabric with EVPN and VXLAN

Overlay Networking (1 of 2)
• Layer 2 Transport network
• Seen as a point of weakness in scale, high availability, flooding, load
balancing, and prevention of loops
• Adding a tenant requires touching the transport network

  The example assumes an Ethernet fabric (loop prevention with STP would add even more complexity). If you add a VM, the
  transport network must be configured with appropriate IEEE 802.1Q tagging.

(Diagram: VMs attached to a vSwitch on a server, and a bare metal server (BMS), all connected through a Layer 2 underlay
network.)

Layer 2 Transport Network

The diagram above shows a typical scenario with a Layer 2 underlay network with attached servers that host VMs as well as
virtual switches. The example shows the underlay network as an Ethernet fabric. The fabric solves some of the customer
requirements, including load balancing over equal cost paths (assuming Virtual Chassis Fabric) as well as having no blocked
spanning tree ports in the network. However, this topology does not solve the VM agility problem or the 802.1Q VLAN overlap
problem. Also, as 802.1Q VLANs are added to the virtual switches, those same VLANs must be provisioned on the underlay
network. Managing the addition, removal, and movement of VMs (and their VLANs) for the thousands of customers would be
a nightmare for the operators of the underlay network.
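
To make that provisioning burden concrete, the following is a minimal, hypothetical sketch of the per-switch change that
adding one new tenant VLAN requires in this Layer 2 design. The VLAN name, VLAN ID, and interface name are examples
only; an equivalent change would be needed on every switch and trunk port in the path between the servers hosting that
tenant's VMs.

    set vlans tenant-blue vlan-id 200
    set interfaces xe-0/0/10 unit 0 family ethernet-switching interface-mode trunk
    set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members tenant-blue

Repeating that on dozens of switches for every tenant addition, move, or removal is exactly the operational problem
described above; the overlay approach on the next page removes it by confining tenant state to the tunnel endpoints.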



Data Center Fabric with EVPN and VXLAN

Overlay Networking (2 of 2)
• Overlay networking can allow for an IP underlay
• Compute and storage have already been virtualized in the DC; the
next step is to virtualize the network
• VXLAN (one of a few overlay networking protocols) allows the decoupling of
the network from the physical hardware (provides scale and agility)
  A VTEP encapsulates Ethernet frames into IP packets, so adding a VM/VLAN requires no changes to the IP transport
  network. Loops are prevented by the routing protocols.

(Diagram: VMs attached to a vSwitch and a bare metal server (BMS) connected through an IP fabric, with VXLAN VTEPs at the
edge of the fabric.)

Overlay Networking
Overlay networking can help solve many of the requirements and problems discussed in the previous slides. This slide shows
the addition of an overlay network that includes the use of VXLAN. The overlay network consists of the virtual switches and
the VXLAN tunnel endpoints (VTEPs). A VTEP will encapsulate the Ethernet frames that it receives from the virtual switch into
IP and forward the resulting IP packet to the remote VTEP. The underlay network simply needs to forward IP packets between
VTEPs. The receiving VTEP will de-encapsulate the VXLAN IP packets and then forward the resulting Ethernet frame to the
appropriate VM. Adding and removing VMs from the data center has no effect on the underlay network. The underlay
network simply needs to provide IP connectivity between the VTEPs.

When implementing the underlay network in this scenario, you have a few choices. You can use an Ethernet fabric like Virtual
Chassis (VC), Virtual Chassis Fabric (VCF), or Junos Fusion. All of these are valid solutions. Because all of the traffic crossing
the underlay network is IP, the option for an IP fabric becomes available. The choice of underlay network comes down to
scale and future growth. An IP fabric is considered to be the most scalable underlay solution.
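
To illustrate what a VTEP looks like on a Junos switch, the following is a minimal sketch of a multicast-signaled VXLAN
configuration on a leaf device (EVPN-signaled VXLAN, covered in later chapters, replaces the multicast group with a BGP
control plane). The VLAN name, VNI value, and multicast group address are hypothetical.

    set switch-options vtep-source-interface lo0.0
    set vlans tenant-blue vlan-id 200
    set vlans tenant-blue vxlan vni 5200
    set vlans tenant-blue vxlan multicast-group 233.252.0.1

With this configuration, frames received on VLAN tenant-blue are encapsulated in VXLAN packets sourced from the lo0.0
address, so the underlay only ever needs to route IP between VTEP loopbacks.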



Data Center Fabric with EVPN and VXLAN

Spine-Leaf Architecture (1 of 2)
• High capacity physical underlay
• High density 10G, 40G, 100G core switches as spine devices
• High density 10G, 25G, 40G, 50G, 100G leaf switches for server access

Spine-Leaf Architecture Components

Different data center architectures may require higher capacity core and edge switches. One such architecture is the high
capacity spine-leaf design.

As server capabilities increase, the need for higher speed uplinks and access links grows as well. Edge technologies such as
25 Gbps and 50 Gbps server access links place a higher demand on uplink bandwidth to the switching core, which can
dramatically increase oversubscription ratios in the switching fabric.


Data Center Fabric with EVPN and VXLAN

Spine-Leaf Architecture (2 of 2)
• Spine: QFX10000 Series
  • 10G, 40G, 100G
  • Can be used as routing core and DC interconnect
• Leaf: QFX Series
  • QFX10002: 10G/40G access
  • QFX51xx: 10G/40G/100G access
  • QFX52xx: 10G, 25G, 40G, 50G, 100G access

QFX Series Data Center Switches

Although the QFX5100 Series can be deployed as a spine device in a spine-leaf topology, its scalability in that role is limited
in medium and large data centers. To fill the need for a high capacity underlay network, Juniper Networks offers several
options.

  • The QFX10000 Series provides high density 10G, 40G, and 100G switching capacity for the core switching
    layer. It provides routing and switching functions, data center interconnect, as well as deep buffers to help
    manage bursty traffic patterns between leaf nodes.

  • The QFX51xx Series provides high density 10G, 40G, and 100G switching capacity for access nodes.

  • The QFX52xx Series provides high density 10G, 25G, 40G, 50G, and 100G access and uplink interfaces.

Although all of these models are part of the QFX Series family of switches, features and functionality can vary between
different product lines. Please refer to technical documentation for specific features and configuration options for your
specific QFX Series platform.


Data Center Fabric with EVPN and VXLAN

Summary

■ In this content, we:


• Explained the benefits and challenges of the traditional multitier architecture
• Described the new networking requirements in a data center
• Described the various data center fabric architectures


We Discussed:
  • The benefits and challenges of the traditional multitier architecture;

  • The new networking requirements in a data center; and

  • The various data center fabric architectures.


Data Center Fabric with EVPN and VXLAN

Review Questions

1. What are some of the challenges caused by the traditional data


center multitier implementations?
2. What are some of the applications that are driving a change in the
implementations of data centers?
3. How can Layer 2 networks be stretched over an IP network?


Review Questions
1.

2.

3.



Data Center Fabric with EVPN and VXLAN

Answers to Review Questions


1.
Traditional data center multitier implementations require loop prevention, which wastes links and hardware resources, and increases costs. Additionally, Layer 2 data center infrastructures can quickly reach scalability limits.

2.
Some of the applications that are driving a change in data centers include greater east-west traffic, and more reliance on
predictable latency between devices within the data center. Increased need for on-demand scaling is also creating new
challenges.

3.
Layer 2 networks can be stretched over IP networks, or IP fabrics, using Layer 2 tunneling technology, also called overlay
networks.

Chapter 3: IP Fabric


Objectives

■ After successfully completing this content, you will be able to:


• Describe routing in an IP fabric
• Explain how to scale an IP fabric
• Configure an OSPF-based IP fabric
• Configure an EBGP-based IP fabric


We Will Discuss:
• Routing in an IP fabric;

• Scaling of an IP fabric;

• Configuring an OSPF-based IP fabric; and

• Configuring an EBGP-based IP fabric.


Agenda: IP Fabric

➔ IP Fabric Overview
■ IP Fabric Routing

■ IP Fabric Scaling
■ Configure an IP Fabric


IP Fabric Overview
The slide lists the topics we will discuss. We will discuss the highlighted topic first.


IP Fabric Infrastructure

■ IP fabric
• All IP infrastructure
• No Layer 2 switching or xSTP protocols
• Uses standards-based Layer 3 routing protocols, allowing for vendor interoperability
(can be a mix of devices)
• Multiple equal cost paths should exist between any two servers/VMs in the DC
• Paths are computed dynamically by the routing protocol
• The network should scale linearly as the size increases
• Using the method developed by Charles Clos (three-stage fabric)


IP Fabric
An IP fabric is one of the most flexible and scalable data center solutions available. Because an IP fabric operates strictly using Layer 3, there are no proprietary features or protocols being used, so this solution works very well with data centers that must accommodate multiple vendors. Some of the most complicated tasks in building an IP fabric are assigning all of the details like IP addresses, BGP AS numbers, routing policy, loopback address assignments, and many other implementation details. Throughout this chapter we refer to the devices as nodes (spine nodes and leaf nodes). Keep in mind that all devices in an IP fabric are basically just Layer 3 routers that rely on routing information to make forwarding decisions.


Three-Stage IP Fabric Architecture (1 of 2)

• Spine and leaf architecture


• Each leaf typically has a physical connection to each spine
• Each server facing interface (on leaf nodes) is always two hops away from any other server facing interface
• Creates a resilient network where all traffic has multiple paths to all other devices in the
fabric
• No physical connectivity between spine nodes or between leaf nodes


A Three Stage Network


In the 1950s, Charles Clos first wrote about his idea of a non-blocking, multistage, telephone switching architecture that would allow calls to be completed. The switches in his topology are called crossbar switches. A Clos network is based on a three-stage architecture: an ingress stage, a middle stage, and an egress stage. The theory is that there are multiple paths for a call to be switched through the network such that calls will always be connected and not "blocked" by another call. The term Clos "fabric" came about later as people began to notice that the pattern of links looked like threads in a woven piece of cloth.

You should notice that the goal of the design is to provide connectivity from one ingress crossbar switch to an egress crossbar switch. There is no need for connectivity between crossbar switches that belong to the same stage.


Three-Stage IP Fabric Architecture (2 of 2)

• Spine and leaf architecture (contd.)


• Traffic should be load shared over the multiple paths
• 4 distinct paths between Host A and Host B (1 path for every spine node)
• All unicast packets in the same flow follow the same path based on the vendor's hashing
algorithm (no out-of-order packets)


An IP Fabric Is Based on a Clos Fabric


The diagram shows a 3-stage fabric design. In an IP fabric, the Ingress and Egress stage crossbar switches are called leaf
nodes. The middle stage crossbar switches are called spine nodes.

In a spine-leaf architecture, the goal is to share traffic loads over multiple paths through the fabric. A 3-stage fabric design
ensures that the access-facing port of any leaf node is exactly two hops from any other access-facing port on another leaf node. It is called a 3-stage fabric because the forwarding path from any connected host is leaf-spine-leaf, or three stages, regardless of where the destination host is connected to the fabric.

Many applications function best when packets are received in the order in which they are sent. In an IP fabric design, per-flow based traffic sharing should be implemented so that packets in a unique flow follow the same fabric path. This helps prevent out-of-order packets arriving at the destination host due to potential congestion or latency between different paths through the fabric.


IP Fabric Design Options

Option 1:
• Three-stage topology
• Small to medium deployment
• Generally one BGP design

Option 2:
• Five-stage topology
• Medium to large deployment
• Lots of BGP design options


Spine and Leaf Architecture


To maximize the throughput of the fabric, each leaf node should have a connection to each spine node. This ensures each server-facing interface is always two hops away from any other server-facing interface. This creates a highly resilient fabric with multiple paths to all other devices. An important fact to keep in mind is that a member switch has no idea of its location (spine or leaf) in an IP fabric. The spine or leaf function is a matter of a device's physical location in the fabric. In general, the choice of router or Layer 3 switch to be used as spine nodes should be partially based on the interface speeds and number of ports that it supports.

Currently there are two prominent design options in an IP fabric architecture.

Option 1 is a basic three-stage architecture. Each leaf connects to every spine. The number of spines is determined by the number of leaf nodes, and the throughput capacity required for leaf-to-leaf connectivity. In the diagram, a spine-leaf topology with two spine devices and four leaf nodes is shown. The throughput capacity from one leaf to any other leaf is limited to twice the capacity of a single uplink, since there is a single uplink to each spine node. In the event of a spine device failure, the forwarding capacity from leaf-to-leaf is cut in half, which could lead to traffic congestion. This can be increased by adding additional uplinks to the spine nodes using technologies such as LAG. However, this type of design places a large amount of traffic on few paths through the fabric. To increase scale, and to reduce the impact of a single spine device failure in the fabric domain, additional spine devices can be added. If, for instance, two more spine nodes were added to option 1, the traffic from leaf 1 to leaf 4 would have four equal cost paths for traffic sharing. In the event of a spine device failure, or maintenance that requires a spine device to be removed from the fabric, the forwarding capacity of the fabric would only be reduced by one fourth instead of one half.

For scalability and modularity, a Layer 3 fabric can be broken up into groups of spine-leaf nodes. Each group of spine-leaf nodes is configured in a 3-stage fabric design. Each group is then interconnected through another fabric layer. Sometimes the groups of 3-stage devices are called Pods, and the top tier of the fabric is called a Super Spine. Traffic within a pod does not leave the pod. Only inter-pod traffic must transit the five-stage fabric, or Super Spine. It is called a five-stage fabric because the forwarding path from one host to another follows a leaf-spine-superspine-spine-leaf path, or five stages. From the perspective of the super spine devices in the diagram, each spine level node is viewed as a leaf node of the super spine.


Juniper Networks DC Fabric/Spine Nodes

■ Recommended spine nodes
Note: The numbers shown can vary based on model, cards installed, and services enabled.

Chassis             | QFX10k                   | QFX52xx                               | QFX51xx
Use case            | Large DC, 100GbE uplinks | Small/medium DC, 40GbE/100GbE uplinks | Small DC, 40GbE uplinks
Max 40GbE Density   | 576                      | 64 (64-q)                             | 32
Max 100GbE Density  | 480                      | 32                                    | N/A
Number of VLANs     | 16,000                   | 4096                                  | 4096
IPv4 Unicast Routes | 256,000                  | 128,000                               | 208,000


Recommended DC Fabric Spine Nodes


The slide shows some of the recommended Juniper Networks products that can act as spine nodes. You should consider
port density and scaling limitations when choosing the product to place in the spine location. Some of the pertinent features for a spine node include overlay networking support, Layer 2 and Layer 3 VXLAN Gateway support, and number of VLANs
supported.


Juniper Networks DC Leaf Nodes

• Recommended leaf nodes


(Pictured: QFX10002, QFX51xx, QFX52xx, and EX4300)

Chassis             | QFX51xx   | QFX5120 | QFX52xx   | EX4300
Max 40GbE Density   | 32 (24-q) | 72      | 64 (64-q) | 4
Max 100GbE Density  | 8         | 24      | 32        | N/A
Number of VLANs     | 4096      | 16,000  | 4096      | 4096
IPv4 Unicast Routes | 208k      | 256k    | 128k      | 32k

Note: The numbers shown can vary based on model, cards installed, and services enabled.


Recommended DC Fabric Leaf Nodes


The slide shows some of the recommended Juniper Networks products that can act as leaf nodes. Some considerations
when choosing leaf nodes include what type of server or host connectivity will be used, and what types of server or host connectivity may be used in the future (link types, link speeds, etc.). Other considerations may include whether or not Layer 3 VXLAN Gateway capabilities will be deployed on the spine or on the leaf nodes, as well as the number and speed of uplink
ports.


Agenda: IP Fabric

➔ IP Fabric Routing


IP Fabric Routing
The slide highlights the topic we discuss next.


Switching in the Data Center Overview

• Switching in the data center


• Switching forwards frames to a destination MAC address
• Uses a MAC forwarding table that contains host MAC addresses
• Usually populated through ARP process
• Provides Layer 2 connectivity between hosts
within the same broadcast domain
• Requires loop prevention mechanisms
(such as xSTP)


Data Center Switching


Layer 2 data centers forward frames to connected hosts based on the destination MAC address of a network frame.

When a host in a Layer 2 data center is required to communicate with another host within the data center, and within the
same Layer 2 broadcast domain, the source host sends a data frame to the MAC address associated with the destination
host. If the MAC address of the destination host is unknown, the source host uses an ARP request to query the remote MAC
address associated with the destination host's IP address. The response to the ARP is stored in the source host's ARP table,
where it can be used for future transmissions.

Switches that relay the frames between hosts populate a MAC address table, or switching table, based on the MAC addresses on frames that enter switch ports. In this manner a switch can assign an outbound port to each MAC address in
the Layer 2 domain and can avoid broadcasting Layer 2 frames on ports that do not connect to the destination hosts.

Because of the redundant links present in a switching domain fabric, the potential for broadcast loops requires the implementation of loop prevention protocols. One of the most common loop prevention protocols in a switching domain is Spanning Tree Protocol (STP), which manages the forwarding state of Layer 2 ports within the network and blocks forwarding on ports that could potentially create a forwarding loop. In the event of an active forwarding link failure, blocked ports can be allowed to pass traffic. Because of this blocking nature of STP, the implementation of STP within a switching domain has the potential to greatly reduce the forwarding capabilities of a fabric by blocking ports.

When traffic is required to transit between Layer 3 domains in a Layer 2 fabric, a default gateway is often used as the next hop to which hosts forward traffic. The default gateway acts as a gateway between different Layer 2 domains, and functions
as a Layer 3 router.


Routing in the Data Center Overview

• Routing in the data center


• Routing forwards IP packets to a destination IP host address
• Uses an IP forwarding table that points to next-hop physical interface
• Usually populated through routing protocol processes
• MAC address changes at each physical hop
• Loop prevention provided by dynamic routing protocols (OSPF, IS-IS, BGP, etc.)
• Load sharing provided by routing protocols (ECMP)
• Each segment is a different broadcast domain

Routing in the Data Center


The concept of routing in a data center is based on implementing a Layer 3 fabric in place of a Layer 2 fabric. In a Layer 3
fabric, packets are forwarded according to IP next hops within the routing/switching devices. A node in the fabric forwards traffic toward the destination host based on routing table entries, which are usually learned through a dynamic routing protocol. With a Layer 3 fabric, the MAC address of the transit frame changes at each physical hop.

Ideally, leaf nodes in a Layer 3 fabric should have multiple forwarding paths to remote destinations. Loop prevention in a Layer 3 fabric is provided by dynamic routing protocols, such as OSPF, IS-IS, and BGP. Many vendors implement routing solutions that enable equal cost multi-path load sharing (ECMP). The proper configuration of ECMP within the Layer 3 fabric permits the use of all links for forwarding, which provides a substantial advantage over STP loop prevention in a Layer 2 fabric.

Because the Layer 3 fabric is a routed domain, the broadcast domains are limited in scope to each individual link. With a
Layer 3 fabric, hosts attached to the edge, or leaf nodes, do not have direct Layer 2 connectivity, but instead communicate
through Layer 3.


IP Fabric Routing Strategy (1 of 2)

• Each leaf node should have multiple next hops to remote destinations
• Allows leaf nodes to load share traffic over multiple links

" - - - 1PFabMc- - -o_--_..., - - E - - - - - - - - '\


I
I
1
- -- I
I
I I
I I
I I
I I
I I

...
I _ _ _ I

I -~
- -~
- -
-
B ~'
I

-- --
C's Routing Table , ,
10.1.1/24 > nexthop X ,___ -----------------
> nexthop Y
10.2.2/24 > nexthop X
> nexthop Y
C> 2019 Juniper Networks, Inc All Rights Reserved

Routing Strategy, Part 1


The slide highlights the desired routing behavior of a Leaf node. Ideally, each Leaf node should have multiple next hops to
use to load share traffic over the IP fabric. Notice that router C can use two different paths to forward traffic to any remote destination.

Remember that your IP Fabric will be forwarding IP data only. Each node is basically an IP router. To forward IP packets between routers, they need to exchange IP routes. So, you have to make a choice between routing protocols. You want to ensure that your choice of routing protocol is scalable and future proof. As you can see by the chart, BGP is the natural choice for a routing protocol, although the capabilities of OSPF and IS-IS may be sufficient for many environments and deployment types, depending on the end role of the IP fabric. For example, an IP fabric that will connect directly to all host devices, and maintain routing information to those hosts, would benefit from the scale capabilities of BGP. An IP fabric that will be deployed as an underlay technology for an underlay/overlay deployment may only be required to maintain routing information related to the local links and loopback addresses of the fabric, and will not maintain routing information
pertaining to end hosts or tenants. In the latter scenario, an IGP may be sufficient to maintain the necessary routing
information.


IP Fabric Routing Strategy (2 of 2)


■ Each spine node should have multiple next hops to reach remote destinations
• Allows spine nodes to load share traffic over multiple links

Routing Strategy, Part 2


Within an IP fabric design, each spine node should have a connection to each leaf node. In the event of a spine node link or
node failure, leaf-to-leaf connectivity is maintained. In the case of multihomed sites, each spine node should have multiple
paths to the remote site, and load balances traffic to the remote host across all leaf nodes that connect to the remote site.


The Routing Protocol Decision


■ A scalable routing protocol must be chosen
• Each node must send and receive routing information to and from
all the other nodes in the fabric
• The routing protocol must scale and ensure load balancing
Requirement           | OSPF    | IS-IS   | EBGP
Advertise Prefixes    | Yes     | Yes     | Yes
Scale                 | Limited | Limited | Extensive
Policy Control        | Limited | Limited | Extensive
Traffic Tagging       | Limited | Limited | Extensive
Multivendor Stability | Yes     | Yes     | Yes


Routing Protocols for IP Fabric Design


Your IP Fabric will be forwarding IP data only. Each node is basically an IP router. To forward IP packets between routers, they need to exchange IP routes, and therefore you have to make a choice between routing protocols. You want to ensure that your choice of routing protocol is scalable and future proof. As you can see by the chart, BGP is the natural choice for a routing protocol, although the capabilities of OSPF and IS-IS may be sufficient for many environments and deployment types, depending on the end role of the IP fabric. For example, an IP fabric that will connect directly to all host devices, and maintain routing information to those hosts, would benefit from the scale capabilities of BGP. An IP fabric that will be deployed as an underlay technology for an underlay/overlay deployment may only be required to maintain routing information related to the local links and loopback addresses of the fabric, and will not maintain routing information
pertaining to end hosts or tenants. In the latter scenario, an IGP may be sufficient to maintain the necessary routing
information.


EBGP Fabric (1 of 4)

■ An EBGP fabric usually relies on each router to be in a different AS


• Private range is 64512 to 65535
• Can also use 32-bit AS numbers


EBGP Fabric, Part 1


In an EBGP based IP fabric, each device is often configured as a unique AS. The private AS number range, from 64512 to
65535, is available to be used within an IP fabric. Additionally, 32-bit AS numbers can be used to increase the number of AS
values available. The use of unique AS numbers on each node allows the built-in BGP path selection and loop prevention
mechanisms to be implemented.


EBGP Fabric (2 of 4)
■ EBGP peering
• Physical interface peering (no need for an IGP)
• Every leaf node has one peering session to every spine node
• No leaf-to-leaf and no spine-to-spine peering (they will receive all routes based on
normal EBGP advertising rules)
• All routers configured for multipath multiple-as with a forwarding table load-balancing policy
• Enables ECMP at the routing table and forwarding table levels


EBGP Fabric, Part 2


With EBGP peering, BGP peers connect to the IP address of physically connected devices, and therefore there is no need for an underlying IGP. Every leaf node peers to each spine node. Since BGP peering sessions mirror the physical topology, there is no BGP peering session between leaf nodes. Leaf nodes receive routing information through the peering session to the spine nodes.

By default, if a route to the same destination is received from two different BGP peers, which belong to different ASs, only one of the routes will be selected as active in the routing table. In order to take advantage of the redundant paths within a BGP based IP fabric, the multipath multiple-as parameter can be configured. This parameter enables the device to install all paths to a remote destination in the routing table. In order to export all potential paths in the routing table to the forwarding table, a load balancing policy must be configured and applied to the forwarding table as well.
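
As a quick preview of how these two pieces fit together on a fabric node, the minimal sketch below combines the multipath multiple-as statement with a forwarding-table export policy. The group and policy names (leafs and load-balance) are illustrative; the complete, verified configurations appear later in this chapter.

    protocols {
        bgp {
            group leafs {
                type external;
                multipath {
                    multiple-as;    /* install all equal EBGP paths in the routing table */
                }
            }
        }
    }
    policy-options {
        policy-statement load-balance {
            term 1 {
                then {
                    load-balance per-packet;    /* per-flow load balancing despite the keyword name */
                }
            }
        }
    }
    routing-options {
        forwarding-table {
            export load-balance;    /* push all equal-cost next hops into the forwarding table */
        }
    }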


EBGP Fabric (3 of 4)

■ Route advertisements from leaf nodes


• Route is advertised to all spine nodes
• Because of multipath multiple-as and load balancing policy, both
routes received by a spine node will be used for forwarding (ECMP)


EBGP Fabric, Part 3


The diagram illustrates a route to network 10.1/16 advertised by leaf nodes AS65005 and AS65006. The spine nodes AS65001 and AS65002 receive the route from two separate peer ASs. The default behavior of the spine nodes is to choose one available next hop for network 10.1/16 and install that next hop as the active next hop in the routing table. The selected next hop in the routing table is then exported to the forwarding table.

To allow both advertised next hops to be installed in the AS65001 and AS65002 devices, the multipath multiple-as parameters are configured on the BGP peering sessions. This allows all potential next hops for network 10.1/16 to be installed in the routing table.


EBGP Fabric (4 of 4)

■ Route advertisements from spine nodes


• Routes are advertised to all leaf nodes
• Because of multipath multiple-as and load balancing policy, both
routes received by a leaf node will be used for forwarding (ECMP)


EBGP Fabric, Part 4


Leaf nodes are connected to multiple spine nodes. In order to load share traffic among all spine nodes, the same
parameters must be configured on leaf nodes as on the spine nodes to allow multiple paths to be installed in the routing and forwarding tables.


EBGP Fabric Considerations

• An EBGP fabric usually relies on each router to be in a different AS


• Same AS can be used on all leaf nodes with adjustments
• Use as-override or loops 2 to override the default BGP loop prevention mechanism


EBGP Fabric Considerations


An EBGP fabric does not require that each device be configured with a unique AS number. In the example shown, all of the
spine nodes are assigned the same AS number among the spines, and all of the leaf nodes are assigned the same AS
number among the leaf nodes. This is a valid EBGP fabric configuration, but introduces some additional things to consider.

One of the built-in functions of the BGP protocol is loop prevention. The BGP protocol prevents loops by tracking the AS
numbers through which a route has been advertised, and drops a route that is received if that route has the local AS number
in the AS Path property of the route.

In the example, all leaf nodes are configured with AS 65001. The leaf nodes connected to network 10.1/16 advertise the
10.1/16 network to the spine nodes. When the route is advertised to AS65000, the leaf nodes prepend, or add, their local AS number to the front of the AS-Path field of the advertised route. When the spine nodes in AS65000 receive the route, the AS Path parameter is analyzed, and no loop is detected, as AS65000 is not in the AS Path list. Both spines advertise the 10.1/16 network to connected BGP peers, and prepend the locally configured spine AS to the AS Path. When the route arrives on another leaf configured with AS65001, the AS Path is examined, and the receiving router determines that the route has been advertised in a loop, since its locally configured AS number is already present in the AS Path parameter.

In order to change this behavior, BGP can be configured with the as-override parameter or with the loops parameter. The as-override parameter overwrites values in the AS Path field prior to advertising the route to a BGP peer. Normally the AS of the device that is overwriting the AS Path parameter is used in place of the specified value. In the example shown, the AS65000 router would replace all instances of AS65001 with AS65000 prior to advertising the route to its peers, and therefore the leaf device in AS65001 would not see its locally configured AS number in the AS Path of the received route.

Alternatively, the receiving device, in this case the leaf devices, can be configured to allow the presence of the locally configured AS number a specified number of times. In the example, the parameter loops 2 may be used to allow the route to contain the locally configured AS number up to 2 times before a route is considered a loop.
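
As a minimal sketch of the two adjustments described above, the spine could rewrite the leaf AS before re-advertising, or the leaf could tolerate its own AS in received AS Paths. The group name (leafs) is illustrative and would match whatever peer group is already defined in the fabric.

    Spine option - rewrite the peer AS before re-advertising:

        [edit protocols bgp group leafs]
        as-override;

    Leaf option - accept the locally configured AS up to two times in the AS Path:

        [edit routing-options]
        autonomous-system 65001 loops 2;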


IP Fabric Best Practices

■ Best practices:
• All spine nodes should be the same type of router
• Every leaf node should have an uplink to every spine node
• Use either all 100GbE, 40GbE, or all 10GbE uplinks
• IGPs can take interface bandwidth into consideration during the SPF calculation,
which may cause lower speed links to go unused
• EBGP load balancing does not take interface bandwidth into consideration


Best Practices
When enabling an IP fabric you should follow some best practices. Remember, two of the main goals of an IP fabric design are to provide a non-blocking architecture and predictable load-balancing behavior.

Some of the best practices that should be followed include:

• All spine nodes should be the same type of router. They should be the same model and they should also have the same linecards installed. This helps the fabric to have a predictable load balancing behavior.

• All leaf nodes should be the same type of router. Leaf nodes do not have to be the same router as the spine nodes. Each leaf node should be the same model and they should also have the same linecards installed. This helps the fabric to have a predictable load balancing behavior.

• Every leaf node should have an uplink to every spine node. This helps the fabric to have a predictable load balancing behavior.

All uplinks from leaf node to spine node should be of the same speed. This helps the fabric to have predictable load balancing behavior and also helps with the non-blocking nature of the fabric. For example, let us assume that a leaf has one 40 Gigabit Ethernet (40GbE) uplink and one 10GbE uplink to the spine. When using an IGP such as OSPF (for loopback interface advertisement), when calculating the shortest path to the destination, the bandwidth of the links will be taken into consideration. OSPF will most likely always choose the 40GbE interface during its shortest path first (SPF) calculation and use the interface for forwarding toward remote next hops, which essentially blocks the 10GbE interface from ever being used. In the EBGP scenario, the bandwidth will not be taken into consideration, so traffic will be equally load-shared over the two different speed interfaces. Imagine trying to equally load share 60 Gbps of data over the two links. How will the 10GbE interface handle 30 Gbps of traffic? The answer is... it will not.


Agenda: IP Fabric

➔ IP Fabric Scaling


IP Fabric Scaling
The slide highlights the topic we discuss next.


IP Fabric Scaling
■ Scaling up the number of ports in a fabric network is accomplished by
adjusting the width of the spine and the oversubscription ratio
• What oversubscription ratio are you willing to accept?
• 1 to 1 (1:1) - Approximately line rate forwarding over the fabric
• 3 to 1 (3:1) - Spine (as a whole) can only handle 1/3 of the bandwidth on the server facing interfaces (on the leaf nodes)
• Number of spines is determined by the number of uplinks on leaf devices

(Example platforms: QFX10000 spine nodes, QFX5120 leaf nodes)

Scaling
To increase the overall throughput of an IP fabric, you simply need to increase the number of spine devices (and the appropriate uplinks from the leaf nodes to those spine nodes). If you add one more spine node to the fabric, you will also have to add one more uplink from each leaf node. Assuming that each uplink is 40GbE, each leaf node can now forward an extra 40 Gbps over the fabric.

Adding and removing both server-facing ports (downlinks from the leaf nodes) and spine nodes will affect the oversubscription (OS) ratio of a fabric. When designing the IP fabric, you must understand the OS requirements of your data center. For example, does your data center need line rate forwarding over the fabric? Line rate forwarding would equate to 1-to-1 (1:1) OS. That means the aggregate server-facing bandwidth is equal to the aggregate uplink bandwidth. Or, maybe your data center would work perfectly fine with a 3:1 OS of the fabric. That is, the aggregate server-facing bandwidth is three times that of the aggregate uplink bandwidth. Most data centers will probably not require designing around a 1:1 OS. Instead, you should make a decision on an OS ratio that makes the most sense based on the data center's normal bandwidth usage. The next few slides discuss how to calculate OS ratios of various IP fabric designs.
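
Because every leaf contributes the same downlink and uplink bandwidth, the OS ratio can also be computed on a per-leaf basis, which is why adding or removing leaf nodes does not change it. Using the numbers from the 3:1 example that follows:

    Server-facing bandwidth per leaf = 48 x 10 Gbps = 480 Gbps
    Uplink bandwidth per leaf        =  4 x 40 Gbps = 160 Gbps
    OS ratio                         = 480 : 160    = 3:1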


3:1 Oversubscription Ratio Example


■ 3:1 Topology
• 4 Spine nodes with (32) 40GbE interfaces
• From 1 to 32 leaf nodes
• Each has fully utilized (48) 10GbE interfaces
• Each has (1) 40GbE interface to each of the 4 spine nodes
BW to and from servers : uplink BW
480G x 32 leaf nodes : 160G x 32 leaf nodes
15360G : 5120G

Can be any number of leaf nodes up to 32.
Regardless of the number of leaf nodes, OS remains 3:1.

3:1 Topology
The slide shows a basic 3:1 OS IP fabric. All spine nodes, four in total, are routers (Layer 3 switches) that each have (32) 40GbE interfaces. All leaf nodes, 32 in total, are routers (Layer 3 switches) that have (6) 40GbE uplink interfaces and (48) 10GbE server-facing interfaces. Each of the (48) 10GbE ports for all 32 leaf nodes will be fully utilized (i.e., attached to downstream servers). So, the total server-facing bandwidth is 48 x 32 x 10 Gbps, which equals 15360 Gbps. Each of the 32 leaf nodes has (4) 40GbE spine-facing interfaces. So, the total uplink bandwidth is 4 x 32 x 40 Gbps, which equals 5120 Gbps. The OS ratio for this fabric is 15360:5120 or 3:1.

An interesting thing to note is that if you remove any number of leaf nodes, the OS ratio does not change. For example, what would happen to the OS ratio if only 31 nodes exist? The server-facing bandwidth would be 48 x 31 x 10 Gbps, which equals 14880 Gbps. The total uplink bandwidth is 4 x 31 x 40 Gbps, which equals 4960 Gbps. The OS ratio for this fabric is 14880:4960 or 3:1. This fact actually makes your design calculations very simple. Once you decide on an OS ratio and determine the number of spine nodes that will allow that ratio, you can simply add and remove leaf nodes from the topology without affecting the original OS ratio of the fabric.


2:1 Oversubscription Ratio Example


• 2:1 Topology
• 6 spine nodes with (32) 40GbE interfaces
• From 1 to 32 leaf nodes
• Each has fully utilized (48) 10GbE interfaces
• Each has (1) 40GbE interface to each of the 6 spine nodes
BW to and from servers : Uplink BW
480G x 32 leaf nodes : 240G x 32 leaf nodes
15360G : 7680G = 2:1
Can be any number of leaf nodes up to 32.
Regardless of the number of leaf nodes, OS remains 2:1.

2:1 Topology
The slide shows a basic 2:1 OS IP fabric in which two spine nodes were added to the topology from the last slide. All spine
nodes, six in total, are routers (Layer 3 switches) that each have (32) 40GbE interfaces. All leaf nodes, 32 in total, are routers
(Layer 3 switches) that have (6) 40GbE uplink interfaces and (48) 10GbE server-facing interfaces. Each of the (48) 10GbE
ports for all 32 leaf nodes will be fully utilized (i.e., attached to downstream servers). That means that the total server-facing bandwidth is still 48 x 32 x 10 Gbps, which equals 15360 Gbps. Each of the 32 leaf nodes has (6) 40GbE spine-facing interfaces. That means that the total uplink bandwidth is 6 x 32 x 40 Gbps, which equals 7680 Gbps. The OS ratio for this fabric is 15360:7680 or 2:1.


1:1 Oversubscription Ratio Example


■ 1:1 Topology
• 6 spine nodes with (32) 40GbE interfaces
• From 1 to 32 leaf nodes
• Each is limited to (24) 10GbE interfaces
• Each has (1) 40GbE interface to each of the 6 spine nodes
BW to and from servers : Uplink BW
240G x 32 leaf nodes : 240G x 32 leaf nodes
Total of 6 spine nodes
7680G : 7680G

Can be any number of leaf nodes up to 32.
Regardless of the number of leaf nodes, OS remains 1:1.

1:1 Topology
The slide shows a basic 1:1 OS IP fabric. All spine nodes, six in total, are QFX5100-24Q routers that each have (32) 40GbE interfaces. All leaf nodes, 32 in total, are QFX5100-48S routers that have (6) 40GbE uplink interfaces and (48) 10GbE server-facing interfaces. There are many ways that a 1:1 OS ratio can be attained. In this case, although the leaf nodes each have (48) 10GbE server-facing interfaces, we are only going to allow 24 servers to be attached at any given moment. That means the total server-facing bandwidth is 24 x 32 x 10 Gbps, which equals 7680 Gbps. Each of the 32 leaf nodes has (6) 40GbE spine-facing interfaces. That means the total uplink bandwidth is 6 x 32 x 40 Gbps, which equals 7680 Gbps. The OS ratio for this fabric is 7680:7680 or 1:1.


Agenda: IP Fabric

➔ Configure an IP Fabric


Configure an IP Fabric
The slide highlights the topic we discuss next.


Example Topology - IGP Fabric Using OSPF


• Three-stage OSPF IP fabric topology
• All fabric links are numbered 172.16.1.x/30
• Host A and B networks advertised into OSPF on leaf1 and leaf3
• Goal: All traffic between A and B must be evenly load-shared over fabric
Fabric link addresses: 172.16.1.x/30 (OSPF Area 0)
Loopback addresses:
  spine1: 192.168.100.1
  spine2: 192.168.100.2
  leaf1: 192.168.100.3
  leaf2: 192.168.100.4
  leaf3: 192.168.100.5
Host A (10.1.1.0/24) connects to leaf1 on interface xe-0/0/0; Host B (10.1.2.0/24) connects to leaf3.


IGP Fabric Using OSPF


The exhibit shows the example topology that will be used in the subsequent slides. All routes will be configured as part of OSPF Area 0. Host A is single homed to leaf1, and Host B is single-homed to leaf3. The network segments and loopback addresses within the IP fabric are shown in the slide. The links to Host A and Host B are configured as part of the OSPF domain, but are configured in passive mode. The passive mode enables the advertisement of the networks connected to the
passive link into the OSPF domain, but will not attempt to establish an OSPF neighbor, nor will it accept an attempt to
establish an OSPF neighbor, across the passive link.


Base OSPF Configuration Spine

■ Spine configuration
• Each spine production interface is included in OSPF
• Each spine loopback interface is included in OSPF

{master:0}[edit]
lab@spine1# show protocols ospf
area 0.0.0.0 {
    interface xe-0/0/1.0;
    interface xe-0/0/2.0;
    interface xe-0/0/3.0;
    interface lo0.0;
}

{master:0}[edit]
lab@spine2# show protocols ospf
area 0.0.0.0 {
    interface xe-0/0/1.0;
    interface xe-0/0/2.0;
    interface xe-0/0/3.0;
    interface lo0.0;
}

(The xe- interfaces are the physical interfaces to the leaf nodes.)


Spine Node OSPF Configuration


On the spine nodes, the fabric interfaces are configured under the OSPF area 0 . The loopback interface is also configured
under area 0. A similar configuration is implemented on all spine nodes.


Base OSPF Configuration - Leaf1 and Leaf3

• Host connected leaf configuration


• Each spine connected interface is included in OSPF
• Each host connected interface is included in OSPF
• Passive mode prevents OSPF neighbor establishment with hosts
• Passive mode includes network in OSPF advertisements
• Each loopback interface is included in OSPF
{master:0}[edit]
lab@leaf1# show protocols ospf
area 0.0.0.0 {
    interface xe-0/0/1.0;
    interface xe-0/0/2.0;
    interface xe-0/0/0.0 {
        passive;
    }
    interface lo0.0;
}

{master:0}[edit]
lab@leaf3# show protocols ospf
area 0.0.0.0 {
    interface xe-0/0/2.0;
    interface xe-0/0/3.0;
    interface xe-0/0/0.0 {
        passive;
    }
    interface lo0.0;
}

(The first two xe- interfaces in each configuration are the physical interfaces to the spine nodes; xe-0/0/0.0 is the host-facing interface configured as passive.)


Leaf Node OSPF Configuration


On the leaf nodes, the fabric facing interfaces are configured under the OSPF area 0. The loopback address is also
configured under the OSPF area 0 . The host facing interface is configured under OSPF area 0, and is configured as a passive
interface, which places the interface and connected networks within the OSPF domain, but no OSPF neighbor will ever form
over this interface. A similar configuration is implemented on all leaf nodes.


Base OSPF Configuration - Leaf2

• Leaf2 configuration
• Leaf 2 connects to each spine device

Note: In this example, device leaf2 does not forward traffic from leaf1 to leaf3, as it is not in the shortest cost path.

{master:0}[edit]
lab@leaf2# show protocols ospf
area 0.0.0.0 {
    interface xe-0/0/1.0;
    interface xe-0/0/2.0;
    interface lo0.0;
}

(xe-0/0/1.0 and xe-0/0/2.0 are the physical interfaces to the spine nodes.)


Leaf 2 OSPF Configuration


Although in this example leaf2 doesn't have hosts connected, the configuration is shown. Because leaf2 is not on the
shortest cost path between leaf1 and leaf3, transit traffic will not be forwarded across leaf2. However, if hosts are added
to leaf2 in the future, leaf2 is already a part of the IP fabric, and only the host facing interfaces will have to be added to the
OSPF domain.
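
For example, if a host were later attached to a port on leaf2, the only change needed to advertise its network would be one additional passive OSPF interface; the interface name below is illustrative, not part of the lab topology:

    {master:0}[edit]
    lab@leaf2# set protocols ospf area 0.0.0.0 interface xe-0/0/0.0 passive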


Base OSPF Configuration Leaf1 Results

■ Leaf1 results
• Two hops to destination host in routing table
• By default, only one hop is selected and installed in forwarding table
{master:0}[edit]
lab@leaf1# run show route 10.1.2.1

inet.0: 21 destinations, 21 routes (21 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.1.2.0/24        *[OSPF/10] 01:02:08, metric 3
                      to 172.16.1.5 via xe-0/0/1.0
                    > to 172.16.1.17 via xe-0/0/2.0

(The > marks the selected next hop; [OSPF/10] identifies the source of the routing information, and * indicates the active route.)


Leaf1 Results
The slide shows that the OSPF protocol has advertised directly connected host networks within the OSPF domain. Note that on leaf1, OSPF has advertised two next hops to reach remote network 10.1.2.0/24. A single active next hop has been selected. At this point in the configuration, the next hop of 172.16.1.17, via interface xe-0/0/2.0, is the only next hop that will be used to forward traffic to 10.1.2.0/24. The alternate next hop, through interface xe-0/0/1.0, is a backup next hop.


Base OSPF Configuration - Leaf1 Forwarding Table (1 of 3)
■ Leaf1 forwarding table
• By default, only one hop is exported to the forwarding table
• Use policy to override this behavior
{master:0}[edit]
lab@leaf1# run show route forwarding-table destination 10.1.2.1
Routing table: default.inet
Internet:
Enabled protocols: Bridging,
Destination        Type RtRef Next hop        Type Index    NhRef Netif
10.1.2.0/24        user     0 172.16.1.17     ucst  1754        6 xe-0/0/2.0

(The installed next hop in the forwarding table is the physical interface selected in the previous exhibit.)


Leaf1 Forwarding, Part 1


If the forwarding table is examined, we can see that only a single next hop to destination 10.1.2.0/24 has been installed. This next hop corresponds to the next hop that was selected by OSPF as the active next hop. A primary goal of implementing an IP fabric is to utilize all available paths for forwarding.


Base OSPF Configuration - Leaf1 Forwarding Table (2 of 3)
• Override default forwarding table behavior
• Load-balance policy to load-balance on equal cost paths
• Per-packet specification applies per-flow load-balancing (avoids out-of-sequence packets)
• Apply load-balance policy to forwarding table
• Overrides default behavior

{master:0}[edit]
lab@leaf1# show policy-options
policy-statement load-balance {
    term 1 {
        then {
            load-balance per-packet;
        }
    }
}

(The load-balance policy matches all potential routes with equal-cost paths in the routing table.)

{master:0}[edit]
lab@leaf1# set routing-options forwarding-table export load-balance

{master:0}[edit]
lab@leaf1# show routing-options
router-id 192.168.100.11;
autonomous-system 65100;
forwarding-table {
    export load-balance;
}

(The load-balance policy must be applied to the forwarding table to override the default behavior.)


Leaf1 Forwarding, Part 2


The default behavior of Junos OS is to install a single next hop to each destination in the forwarding table. To utilize multiple equal cost forwarding paths, this default behavior can be overridden through policy.

The example shows a routing policy used to implement load balancing among all available equal cost next hops in the routing table. The policy load-balance has a single term, named term 1, which has an action of load-balance per-packet. Note that there is no from statement in the term. The absence of a from statement indicates that the policy will match any route, without any conditions. This policy will match any route in the routing table, and if the route has multiple next hops, all next hops will be accepted by the policy. If more granular load-balancing is required, a from statement can specify additional match parameters to select to which routes the policy will be applied.

The policy is applied to the forwarding table to override the default behavior of exporting a single next hop from the routing table. To apply a policy to the forwarding table, apply the policy as an export policy at the [routing-options forwarding-table] hierarchy.

Note that although the policy action is load-balance per-packet, the action actually performs per-flow based load balancing.
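
As a purely illustrative example of the more granular option mentioned above, a from statement could restrict the action to routes learned from a particular protocol or prefix range; the policy name and prefix below are hypothetical and are not part of the lab configuration:

    policy-statement load-balance-hosts {
        term 1 {
            from {
                protocol ospf;
                route-filter 10.1.0.0/16 orlonger;
            }
            then {
                load-balance per-packet;
            }
        }
    }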


Base OSPF Configuration - Leaf1 Forwarding Table (3 of 3)
• Load-balance policy results on forwarding table
• A unilist (unicast list) of hops is created
• All equal cost next hops are included in the unilist
{master:0}[edit]
lab@leaf1# run show route forwarding-table destination 10.1.2.1
Routing table: default.inet
Internet:
Enabled protocols: Bridging,
Destination        Type RtRef Next hop        Type Index    NhRef Netif
10.1.2.0/24        user     0                 ulst 131070       4
                             172.16.1.5       ucst  1750        6 xe-0/0/1.0
                             172.16.1.17      ucst  1754        6 xe-0/0/2.0

(A list of unicast next hops is created in the forwarding table for the destination; both physical next hops appear in the list.)


Leaf1 Forwarding, Part 3


The slide shows the results of applying the load-balance policy to the forwarding table. An examination of the forwarding table shows that a unilist has been created for the destination prefix. A unilist is a list of unicast routes. The list contains all
equal cost next hops that are present for the destination prefix in the routing table.


Example Topology - BGP Fabric


• Three-stage EBGP IP fabric topology
• All fabric links are numbered 172.16.1.x/30
• Host A and B networks redistributed into BGP
• Goal: All traffic between A and B must be evenly load-shared over fabric
Fabric link addresses: 172.16.1.x/30
Loopback addresses:
  spine1 (AS 65001): 192.168.100.1
  spine2 (AS 65002): 192.168.100.2
  leaf1 (AS 65003): 192.168.100.3
  leaf2 (AS 65004): 192.168.100.4
  leaf3 (AS 65005): 192.168.100.5
Host A (10.1.1.0/24) connects to leaf1 (AS 65003) on interface xe-0/0/0; Host B (10.1.2.0/24) connects to leaf3 (AS 65005).


Example Topology
The slide shows the example topology that will be used in the subsequent slides. Notice that each router is the single
member of a unique autonomous system. Each router will peer using EBGP with its directly attached neighbors using the physical interface addresses. Host A is single homed to the router in AS 65003. Host B is single homed to the router in AS
65005.


Base BGP Configuration-Spine

■ Spine configuration
• Each spine peers with each leaf using physical interface addressing
• No spine-to-spine peering
{master:0}[edit protocols bgp]
lab@spine1# show
group leafs {
    type external;
    local-as 65001;
    neighbor 172.16.1.6 {
        peer-as 65003;
    }
    neighbor 172.16.1.10 {
        peer-as 65004;
    }
    neighbor 172.16.1.14 {
        peer-as 65005;
    }
}

(The neighbor addresses are the physical interface addresses of the leaf nodes; each peer-as value is the autonomous system of the corresponding leaf node.)



BGP Configuration-Spine Node


The slide shows the configuration of the spine node in AS 65001. It is configured to peer with each of the leaf nodes using
EBGP.


Base BGP Configuration-Leaf

■ Leaf configuration
• Each leaf peers with each spine using physical interface addressing
• No leaf-to-leaf peering
{master:0}[edit protocols bgp]
lab@leaf1# show
group spine {
    type external;
    local-as 65003;
    neighbor 172.16.1.5 {
        peer-as 65001;
    }
    neighbor 172.16.1.17 {
        peer-as 65002;
    }
}

(The neighbor addresses are the physical interface addresses of the spine nodes; each peer-as value is the autonomous system of the corresponding spine node.)



BGP Configuration-Leaf Node


The slide shows the configuration of the leaf node in AS 65003. It is configured to peer with each of the spine nodes using
EBGP.


Verify BGP Neighbors

• Ensure that BGP neighbor relationships have been established


{master:0}
lab@spine1> show bgp summary
Threading mode: BGP I/O
Groups: 1 Peers: 3 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0                 0          0          0          0          0          0
Peer            AS     InPkt  OutPkt   OutQ  Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.1.6      65003      4       3      0      0          38 0/0/0/0            0/0/0/0
172.16.1.10     65004      6       4      0      0        1:43 0/0/0/0            0/0/0/0
172.16.1.14     65005      5       4      0      0        1:08 0/0/0/0            0/0/0/0

(Four numbers separated by slashes indicate a neighbor relationship is established.)

{master:0}
lab@leaf1> show bgp summary
Threading mode: BGP I/O
Groups: 1 Peers: 2 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0                 0          0          0          0          0          0
Peer            AS     InPkt  OutPkt   OutQ  Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.1.5      65001      5       5      0      0        1:33 0/0/0/0            0/0/0/0
172.16.1.17     65002      5       4      0      0        1:29 0/0/0/0            0/0/0/0


Verifying Neighbors
Once you configure BGP neighbors, you can check the status of the relationships using either the show bgp summary or show bgp neighbor command.
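
For a closer look at a single session, the show bgp neighbor command can be pointed at one of the peer addresses from the topology, for example on spine1 (the address shown is taken from the earlier exhibit); a session that has come up correctly reports Established in its State field:

    lab@spine1> show bgp neighbor 172.16.1.6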


Route Redistribution (1 of 3)

■ Redistribute server-facing networks


• Write a policy to advertise direct routes to BGP peers
• Must be performed on each node attached to a server

{master:0}[edit policy-options]
lab@leaf1# show
policy-statement direct {
    term 1 {
        from {
            protocol direct;
            route-filter 10.1.1.0/24 exact;
        }
        then accept;
    }
}

(10.1.1.0/24 is the local server-facing network.)


Routing Policy
Once BGP neighbors are established in the IP fabric, each router must be configured to advertise routes to its neighbors and
into the fabric. For example, as you attach a server to a top-of-rack (TOR) switch/router (which is usually a leaf node of the
fabric) you must configure the TOR to advertise the server's IP subnet to the rest of the network. The first step in advertising
a route is to write a policy that will match on a route and then accept that route. The slide shows the policy that must be configured on the router in AS 65003.

An important thing to note with this configuration is that it applies to a Layer 3 fabric that advertises all access-connected server IP addresses throughout the fabric. This implementation is used when servers attached to the Layer 3 fabric will communicate using Layer 3, or IP based, methods. With this implementation, all fabric nodes will have a route to every server in the data center present in the routing table.

If an underlay/overlay design is configured, the IP addresses of the hosts connected to the access layer, or leaf nodes, are not maintained in the fabric node routing tables. With an underlay/overlay design, only the leaf nodes maintain the host, or tenant, routing information. The underlay fabric is used to interconnect the fabric nodes and to propagate routing information specific to the underlay fabric. In order to provide the reachability information necessary for an overlay to be configured, only the loopback addresses of the underlay devices need to be advertised to other fabric nodes, and the route
distribution policies should be adjusted to perform that task.
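
For the underlay/overlay case just described, the export policy might be reduced to something like the following minimal sketch, which advertises only the fabric loopback addresses; the policy name is illustrative, and the 192.168.100.0/24 range simply matches the loopback addressing used in this chapter's examples:

    policy-statement export-loopbacks {
        term 1 {
            from {
                protocol direct;
                route-filter 192.168.100.0/24 orlonger;
            }
            then accept;
        }
        term 2 {
            then reject;
        }
    }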


Route Redistribution (2 of 3)

■ Redistribute server-facing networks


• Apply the direct policy to advertise direct routes to BGP peers
• Must be performed on each node attached to a server

{master:0}[edit protocols bgp group spine]
lab@leaf1# show
type external;
export direct;
local-as 65003;
neighbor 172.16.1.5 {
    peer-as 65001;
}
neighbor 172.16.1.17 {
    peer-as 65002;
}

(The export direct statement applies the policy called direct.)


Applying Policy
After configuring a policy, the policy must be applied to the router's EBGP peers. The slide shows the direct policy being applied as an export policy to AS 65003's EBGP neighbors.


Route Redistribution (3 of 3)

• Verify that the server-facing networks are being advertised to BGP peers

{master:0}
lab@leaf1> show route advertising-protocol bgp 172.16.1.17

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.1.1.0/24             Self                                    I

(Host A's network is advertised because of the policy.)

{master:0}
lab@leaf1> show route advertising-protocol bgp 172.16.1.5

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.1.1.0/24             Self                                    I
* 10.1.2.0/24             Self                                    65002 65005 I

C> 2019 Juniper Networks, Inc All Rights Reserved

Verifying Advertised Routes


After applying the policy, the router should begin advertising any routes that were accepted by the policy. Use the show
route advertising-protocol bgp command to see which routes are being advertised to a router's BGP neighbors.


Multiple Paths (1 of 3)

• Verify that received routes are installed in routing table


• By default, only one path is chosen for routing traffic
{master:0}
lab@leaf1> show route 10.1.2.1

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.1.2.0/24        *[BGP/170] 00:01:54, localpref 100
                      AS path: 65002 65005 I, validation-state: unverified
                    > to 172.16.1.17 via xe-0/0/2.0
                    [BGP/170] 00:01:54, localpref 100
                      AS path: 65001 65005 I, validation-state: unverified
                    > to 172.16.1.5 via xe-0/0/1.0

Notice that only one next hop (the active route, marked with the asterisk) will be used for routing even though two next
hops are possible; the second entry is a possible but unused next hop.

Default Behavior
Assuming the router in AS 65005 is advertising Host B's subnet, the slide shows the default routing behavior on a remote
leaf node. Notice that the leaf node has received two advertisements for the same subnet. However, because of the default
behavior of BGP, the leaf node selects a single route as the active route in the routing table (you can tell which is
the active route because of the asterisk). Based on what is shown in the slide, the leaf node will send all traffic destined for
10.1.2/24 over the xe-0/0/2.0 link. The leaf node will not load share over the two possible next hops by default. This same
behavior takes place on the spine nodes as well.


Multiple Paths (2 of 3)
• Enable multipath multiple-as on ALL nodes so multiple paths
can be used for routing

{master:0}[edit]
lab@spine1# show protocols bgp
group leafs {
    type external;
    local-as 65001;
    multipath {
        multiple-as;
    }
    neighbor 172.16.1.6 {
        peer-as 65003;
    }
    neighbor 172.16.1.10 {
        peer-as 65004;
    }
    neighbor 172.16.1.14 {
        peer-as 65005;
    }
}

Override Default BGP Behavior


The multipath statement overrides the default BGP routing behavior and allows two or more next hops to be used for
routing. The statement by itself requires that the multiple routes must be received from the same autonomous system. Use
the multiple-as modifier to override that matching AS requirement. The configuration example shown is from a spine
node. All nodes in the fabric must apply this configuration.


Multiple Paths (3 of 3)

■ Verify that multiple received routes can be used for routing traffic

{master:0}
lab@leaf1> show route 10.1.2.0/24

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.1.2.0/24        *[BGP/170] 00:02:00, localpref 100
                      AS path: 65002 65005 I, validation-state: unverified
                      to 172.16.1.5 via xe-0/0/1.0
                    > to 172.16.1.17 via xe-0/0/2.0
                    [BGP/170] 00:02:00, localpref 100
                      AS path: 65001 65005 I, validation-state: unverified
                    > to 172.16.1.5 via xe-0/0/1.0

Notice that two next hops are now available for routing under the active route.

Verify Multipath
View the routing table to see the results of the multipath statement. As you can see, the active BGP route now has two next
hops that can be used for forwarding. Do you think the router is using both next hops for forwarding?


Load Balancing (1 of 3)

■ View the forwarding table to see the next hops used for forwarding

lab@leaf1> show route forwarding-table destination 10.1.2.0/24
Routing table: default.inet
Internet:
Enabled protocols: Bridging,
Destination        Type RtRef Next hop        Type Index    NhRef Netif
10.1.2.0/24        user     0 172.16.1.17     ucst  1754        4 xe-0/0/2.0

[Diagram: the routing table (RT) holds two next hops for 10.1.2.0/24 (xe-0/0/1.0 and xe-0/0/2.0), but only one next hop is
pushed down to the forwarding table (FT). Notice that one next hop is being used for forwarding even though multipath is
enabled.]

Default Forwarding Table Behavior


The slide shows that because multipath was configured in the previous slides, two next hops are associated with the
10.1.2.0/24 route in the routing table. However, only one next hop is pushed down to the forwarding table by default. So, at
this point, the leaf node is continuing to forward traffic destined to 10.1.2.0/24 over only a single link.


Load Balancing (2 of 3)

■ Write and apply a load-balancing policy that ensures multiple next hops in the routing table are also installed in the
forwarding table

{master:0}[edit]
lab@leaf1# show policy-options
policy-statement load-balance {
    term 1 {
        then {
            load-balance per-packet;
        }
    }
}

{master:0}[edit]
lab@leaf1# show routing-options
router-id 192.168.100.11;
autonomous-system 65100;
forwarding-table {
    export load-balance;
}

(There is no "from" statement in term 1, so the policy matches on all routes. This configuration must be performed on
every node in the network.)

Load Balancing Policy


The final step in enabling load sharing on a router is to write and apply a policy that causes the multiple next hops in the
routing table to be exported from the routing table into the forwarding table. The slide shows the details of that process.


Load Balancing (3 of 3)

■ View the forwarding table to see the next hops used for forwarding
{master:0}
lab@leaf1> show route forwarding-table destination 10.1.2.0/24
Routing table: default.inet
Internet:
Enabled protocols: Bridging,
Destination        Type RtRef Next hop        Type Index    NhRef Netif
10.1.2.0/24        user     0                 ulst 131070       2
                             172.16.1.5       ucst  1750        4 xe-0/0/1.0
                             172.16.1.17      ucst  1754        4 xe-0/0/2.0

[Diagram: with the export load-balancing policy applied to the routing table (RT), both next hops are pushed down to the
forwarding table (FT). Notice that two next hops are now being used for forwarding.]

Verifying the Forwarding Table


After applying the policy to the forwarding table, all available next hops in the routing table are published to the forwarding
table. You can see by the output in the example that there are now two next hops in the forwarding table.


Full Configuration (1 of 3)

■ Leaf1 (AS 65003) configuration


• Leaf2 and leaf3 will have similar configurations

routing-options {
    router-id 192.168.100.11;
    autonomous-system 65100;
    forwarding-table {
        export load-balance;
    }
}
protocols {
    bgp {
        group spine {
            type external;
            export direct;
            local-as 65003;
            multipath {
                multiple-as;
            }
            neighbor 172.16.1.5 {
                peer-as 65001;
            }
            neighbor 172.16.1.17 {
                peer-as 65002;
            }
        }
    }
}
policy-options {
    policy-statement direct {
        term 1 {
            from {
                protocol direct;
                route-filter 10.1.1.0/24 exact;
            }
            then accept;
        }
    }
    policy-statement load-balance {
        term 1 {
            then {
                load-balance per-packet;
            }
        }
    }
}

Leaf1 Configuration
The routing-options, protocols bgp, and policy-options configuration sections of leaf1, in AS 65003, are displayed.


Full Configuration (2 of 3)

■ Spine (AS 65001) configuration


• Spine2 (AS 65002) configuration will be similar
routing-options {
    router-id 192.168.100.1;
    autonomous-system 65100;
    forwarding-table {
        export load-balance;
    }
}
protocols {
    bgp {
        group leafs {
            type external;
            local-as 65001;
            multipath {
                multiple-as;
            }
            neighbor 172.16.1.6 {
                peer-as 65003;
            }
            neighbor 172.16.1.10 {
                peer-as 65004;
            }
            neighbor 172.16.1.14 {
                peer-as 65005;
            }
        }
    }
}
policy-options {
    policy-statement load-balance {
        term 1 {
            then {
                load-balance per-packet;
            }
        }
    }
}

Spine Node Configuration


The routing-options, protocols bgp, and policy configuration of spine1 in AS 65001 are displayed. The configuration for
spine2 in AS 65002 will be similar.


Summary

■ In this content, we:


• Described routing in an IP fabric
• Explained how to scale an IP fabric
• Configured an OSPF-based IP fabric
• Configured an EBGP-based IP fabric


We Discussed:
• Routing in an IP fabric;

• Scaling of an IP fabric;

• Configuring an OSPF-based IP fabric; and

• Configuring an EBGP-based IP fabric.


Review Questions

1. What are some of the Juniper Networks products that can be used
in the spine position of an IP fabric?
2. What is the general routing strategy of an IP fabric?
3. In an EBGP-based IP fabric, what must be enabled on a BGP router
so that it can install more than one next hop in the routing table
when the same route is received from two or more neighbors?


Review Questions
1.

2.

3.


Lab: IP Fabric

• Configure an IP fabric using IGP.


• Configure an IP fabric using EBGP.
• Enable and verify load balancing.


Lab: IP Fabric
The slide provides the objectives for this lab.


Answers to Review Questions


1.
The QFX10000 Series devices, the QFX5200 Series devices, and the later models of the QFX5100 Series, including the
QFX5110 and QFX5120, are recommended as spine devices because of their throughput, port, and scalability features.

2.
The general routing strategy of an IP fabric is to provide multiple paths to all destinations within a fabric, and to provide load
sharing, predictability, and scale by leveraging the characteristics of routing protocols.

3.
In an EBGP-based IP fabric, the default BGP route selection process must be modified to allow multiple paths to the remote
destination. The command to do so is to configure the multipath multiple-as parameter in BGP. In addition, the default
behavior of the forwarding table must be modified through policy to permit multiple next hops to be copied to the forwarding
table.


Data Center Fabric with EVPN and VXLAN

Chapter 4: VXLAN


Objectives

■ After successfully completing this content, you will be able to:


• Explain VXLAN functions and operations
• Describe the control and data plane of VXLAN in a controller-less overlay
• Describe VXLAN Gateways


We Will Discuss:
• Functions and operations of VXLAN
• Control and data plane of VXLAN in a controller-less overlay
• VXLAN Gateway f unctions


Agenda: VXLAN

➔ Layer 2 Connectivity over a Layer 3 Network


■ VXLAN Fundamentals

■ VXLAN Gateways


Layer 2 Connectivity over a Layer 3 Network


This slide lists the topics we will discuss. We will discuss the highlighted topic first.


Traditional Applications
■ Many traditional applications in a data center require Layer 2 connectivity
between devices
[Diagram: Host A (10.1.1.1/24) and Host B (10.1.1.2/24), both on VLAN 100, are connected through a switched network of
Layer 2 switches.]

■ What happens when you have traditional applications in a data center that is built around an IP fabric?
[Diagram: Host A (10.1.1.1/24) and Host B (10.1.1.2/24), each on VLAN 100, are attached to Layer 3 devices and separated
by a routed IP fabric (172.16.0/24); traffic between them is routed rather than switched.]

Layer 2 Applications
Data centers host different types of applications. There are a few ways for applications to function. A Mode One application
requires direct Layer 2 connectivity between the different devices that host the applications. A Mode Two application doesn't
require Layer 2 connectivity, but can run on a Layer 3 network by using IP reachability instead of Layer 2 reachability between
application nodes.

As data centers have grown, and as application mobility and the need to move applications from device to device within the
data center has become increasingly common, the limits of Layer 2-only data centers have become more apparent. Building
a Layer 2 data center, which is based on VLANs and the manual mapping of broadcast domains to accommodate the
application needs of legacy applications, can be cumbersome from a management perspective, and is often not agile
enough, or scalable enough, to meet the demands of modern business needs.

Layer 3 Fabrics

To address the scalability and agility needs of modern businesses and applications, new methods of designing and building
data centers have been established. Instead of using a Layer 2 switched network between host devices, the concept of an
IP-based fabric has been introduced. Using an IP fabric within the data center allows the use of routing protocols to
interconnect edge devices, and allows the use of the scalability and load-sharing capabilities that are inherently built into the
routing protocols.

Using a Layer 3 network between edge devices introduces a problem with legacy applications. Many legacy applications, or
Mode One applications, still require direct Layer 2 connectivity between hosts or application instances. Placing a Layer 3
fabric in between Layer 2 broadcast domains disrupts that Layer 2 connectivity. A new technology has been developed to
address the need to stretch a Layer 2 domain across a Layer 3 domain.


Possible Solution: A Layer 2 VPN


• Implement TOR routers or switches with Layer 2 VPN capabilities
• Tunnel Layer 2 frames inside IP packets between VPN gateways
• Routers or switches that can perform encapsulation and decapsulation of VPN data are generally called gateways
• Preserves original Layer 2 frame as it transits the transport network (IP Fabric)

[Diagram: Host A and Host B, both on 10.1.1.0/24, attach to TOR devices (Layer 3 switches or routers) with Layer 2 VPN
capabilities, separated by an IP fabric (172.16.0/24). In the data forwarding direction, the original Ethernet frame
(ETH | IP-DA 10.1.1.2) is encapsulated for transmission over the fabric (IP-DA 172.16.0.2 | ETH | IP-DA 10.1.1.2) and then
decapsulated back to the original frame at the far end.]

Layer 2 VPNs
The foundation of this Layer 2 stretch concept is based on the functionality of a standard Layer 2 VPN. With a Layer 2 VPN, a
data frame is encapsulated within an IP packet at the boundary of a Layer 2 broadcast domain and an IP routed domain. The
encapsulated packet is routed across the IP domain, and is then decapsulated once it reaches the Layer 2 broadcast
domain at the remote side of the Layer 3 routed domain.

From the perspective of the hosts at each end of the network, they are still connected with a direct Layer 2 connection. The
encapsulation and decapsulation used to cross the IP fabric is transparent. From the perspective of Host B in the example
shown, Host A is directly connected to the same broadcast domain as Host B, and they can communicate directly across that
Layer 2 domain.


VPN Terminology - Data Plane


■ The data plane of a VPN describes the process of encapsulation and
decapsulation performed by the VPN Gateways
• Including the end-to-end routing/MAC table lookups, packet/frame formatting, and sometimes
MAC learning

[Diagram: Host A and Host B, both on 10.1.1.0/24, attach to VPN gateways (GW) separated by an IP fabric (172.16.0/24). In
the data forwarding direction, the sending gateway encapsulates the original Ethernet frame (ETH | IP-DA 10.1.1.2) for
transmission over the fabric and the receiving gateway decapsulates it. Each gateway maintains a MAC table: the local
host's MAC maps to a local interface (ge-0/0/0) and the remote host's MAC maps to the tunnel.]

Data Plane
There are generally two components of a VPN: the data plane (as described on this diagram) and the control plane (as
described on the next diagram).

The data plane of a VPN describes the method in which a gateway encapsulates and decapsulates the original data. Also, in
regards to an Ethernet Layer 2 VPN, it might be necessary for the gateway to learn the MAC addresses of both local and
remote servers, much like a normal Ethernet switch learns MAC addresses. In almost all forms of Ethernet VPNs, the
gateways learn the MAC addresses of locally attached servers in the data plane (i.e., from received Ethernet frames). Remote
MAC addresses can be learned either in the data plane (after decapsulating data received from remote gateways) or through
the control plane.


VPN Terminology - Control Plane


■ The control plane of a VPN describes the process of learning performed by the
VPN Gateways
• Including the IP address of remote VPN gateways, VPN establishment, and sometimes MAC
addresses of remote hosts
• Remote VPN gateways can be statically configured or dynamically discovered

[Diagram: Host A and Host B, both on 10.1.1.0/24, attach to VPN gateways (GW) separated by an IP fabric (172.16.0/24).
Each gateway's MAC table maps the local host's MAC to its local interface (ge-0/0/0) and the remote host's MAC to the
tunnel; remote MACs are sometimes learned from the signaling protocol.]

Control Plane
The control plane describes the process of learning performed by VPN gateways. This includes learning the remote IP
address of other VPN gateways, establishing the tunnels that interconnect the gateways, and sometimes learning the MAC
addresses of remote hosts. The control plane information can be statically configured or dynamically discovered through
some type of dynamic VPN signaling protocol.

Static configuration works fine but it does not scale well. For example, imagine that you have 20 TOR routers participating in
a statically configured Layer 2 VPN. If you add another TOR router to the VPN, you would have to manually configure each of
the 20 routers to recognize the newly added gateway on the VPN. In addition, imagine that the workloads, or applications
running in the network, are constantly being moved from physical host to physical host. The ability to make network
adjustments to the constantly moving workloads is difficult, if not impossible, to achieve.

Normally a VPN has some form of dynamic signaling protocol for the control plane. The signaling protocol can allow for
dynamic adds and deletions of gateways from the VPN. Some signaling protocols also allow a gateway to advertise its locally
learned MAC addresses to remote gateways. Usually a gateway has to receive an Ethernet frame from a remote host before
it can learn the host's MAC address. Learning remote MAC addresses in the control plane allows the MAC tables of all
gateways to be more in sync. This has a positive side effect of causing the forwarding behavior of the VPN to be more
efficient (less flooding of data over the fabric).


Layer 2 VPNs
■ Layer 2 VPN options

VPN                     Deployment Model   Native Packet/Frame     Encapsulation   Control (Signaling)
CCC                     WAN                ATM/FR/PPP/Ethernet     MPLS            Static
BGP Layer 2 VPNs        WAN                ATM/FR/PPP/Ethernet     MPLS/GRE        BGP
LDP Layer 2 Circuits    WAN                ATM/FR/PPP/Ethernet     MPLS/GRE        LDP
VPLS                    WAN                Ethernet                MPLS/GRE        BGP or LDP
EVPN                    DC                 Ethernet                MPLS/VXLAN      BGP
VXLAN                   DC                 Ethernet                IP              Static, Multicast, BGP (using EVPN), or OVSDB

Layer 2 VPN Options


Different types of Layer 2 VPN technologies exist. Some of these technologies were developed and are most beneficial in a
WAN environment, and are not suitable for data center environments. Some examples are circuit cross-connect, BGP Layer 2
VPNs, LDP-signaled Layer 2 circuits, and Virtual Private LAN Service, or VPLS. These are often services that are provided by
service providers to connect corporate sites across WAN connections. However, these technologies are not optimized for
data center environments where applications and workloads can frequently move, and physical and virtual hosts can
number in the thousands or tens of thousands.

Two options that have been developed for a data center environment, and which work together to provide flexibility and
scalability, are EVPN and VXLAN. These are two separate technologies that can work together to manage both the forwarding
and control planes in a data center.


VXLAN Broadcast Domains


• VXLAN manages broadcast domains
• Broadcast domain is similar to the concept of VLANs
• Virtual Network Identifier (VNI ) is used to label a broadcast domain
• 24-bit value (16 million possible values)
• Frames within a broadcast domain are forwarded unchanged to all broadcast domains that are assigned the same VNI
• Layer 2 frames are encapsulated by the source VTEP and forwarded in IP packets across the Fabric (tunneled), then decapsulated by
the remote VTEP and forwarded in their original form on the remote V NI

[Diagram: Host1 (VM1, VM2), Host2 (VM3, VM4), and Host3 (VM5, VM6) attach through router VTEPs to an IP fabric
(172.16.0/24); bare-metal servers running APP1 and APP2 attach through a VTEP on the other side. VNI 5100 and VNI 5200
identify the two broadcast domains stretched across the fabric.]

VXLAN Broadcast Domains


VXLAN was developed as a Layer 2 encapsulation protocol. VXLAN encapsulates a Layer 2 frame in a Layer 3 IP packet. It
inserts identifiers within the packet header to label and identify to which broadcast domain an encapsulated packet belongs.
This is similar to how a VLAN tag identifies a broadcast domain in a VLAN switched network.

The Virtual Network Identifier (VNI) identifies a broadcast domain by using a numeric value. Unlike a VLAN tag, which can be
in the range from 0 to 4095 (a 12-bit value), the VNI value can range from 0 to 16,777,215 (a 24-bit value). A packet with a VNI
tag in the header can be associated with any one of over 16 million broadcast domains. This alleviates a significant
scaling limit associated with VLANs. Juniper Networks recommends using a VNI that starts with a value of 5000 or higher to
avoid confusing a VNI value with a VLAN tag value.

In a data center, a VLAN is mapped to a VNI value. However, when the packet enters the Layer 3 domain and is encapsulated
in a VXLAN packet, the VLAN ID is discarded and is not transmitted with the original packet. Once the packet arrives at the
remote gateway, the VLAN ID associated with the VNI at the remote gateway may be placed on the packet before it reenters
the Layer 2 domain.
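On a Junos VTEP, this VLAN-to-VNI mapping is configured under the VLAN definition. The following is a minimal sketch that assumes VLAN 100 is mapped to VNI 5100; the VLAN name and values are illustrative rather than taken from a lab topology:

[edit vlans]
v100 {
    vlan-id 100;
    vxlan {
        vni 5100;
    }
}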


VXLAN Control Plane Evolution

• VXLAN Control Plane


• Multicast signaled VXLAN
• MAC learning through multicast
• Resource intensive
• Slow convergence and updates
• Not agile (requires some manual configuration/VNI mapping)
• EVPN signaled VXLAN
• MAC learning through BGP signaling
• Scalable
• Fast convergence and updates
• Automated Virtual Tunnel Endpoint (VTEP)/VNI discovery through BGP


VXLAN Control Plane


The process of encapsulating, decapsulating, tagging, and forwarding VXLAN traffic in a network corresponds to the data
plane. In order for a local VXLAN gateway to identify which remote VXLAN gateway should receive an encapsulated packet,
the local gateway must learn where the remote gateways are, and what MAC addresses and broadcast domains are
reachable through those remote gateways. The process of learning where remote destinations are and how to reach them
corresponds to the control plane.

The VXLAN control plane has gone through some changes and evolution. Originally, VXLAN gateways learned about remote
destinations through multicast. Although this control plane function worked, it was very resource intensive, was slow to
converge and to update when changes were made, and required the running of a multicast protocol in the underlay network.
It also required manual configuration on every gateway to associate locally connected broadcast domains with multicast
group addresses.

In order to improve performance of the VXLAN control plane, EVPN was developed. EVPN signaling is an extension to the BGP
routing protocol. EVPN signaling utilizes BGP routing updates to advertise local broadcast domains and locally learned MAC
addresses to remote tunnel endpoints. The BGP routing protocol was designed for high scalability, fast convergence, and
flexibility.
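EVPN-signaled VXLAN is configured and examined in detail later in this course. As a hedged preview, the key Junos elements on a VTEP are the EVPN protocol with VXLAN encapsulation and the list of VNIs it signals; the VNI value below is an illustrative assumption:

[edit protocols evpn]
encapsulation vxlan;
extended-vni-list 5100;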


VXLAN Benefits
■ VXLAN is being embraced by major vendors and supporters of virtualization
• Standardized protocol
• Support in many virtual switch software implementations
• The focus of this course is on VTEP support in the physical network environment, not on VTEP support in the
virtual switch environment

[Diagram: Host1, Host2, and Host3, each running VMs, and bare-metal servers (BMS) are interconnected across an IP fabric
(172.16.0/24) through router VTEPs.]

VXLAN Benefits
The slide lists the benefits of VXLAN.


Virtual Machines in the Data Center


• Modern data centers are becoming more and more dependent on virtual machines (VMs)
• VM: A software computer running an OS and applications
• Host: Physical machine on which VMs run
• Introduces application and VM migration through automation (requires agility and responsiveness to
changes)
• Introduces on-demand workload migration for resource optimization

[Diagram: Host1, Host2, and Host3, each running VMs, attach through VTEPs (R1, R2, R3) to an IP fabric (172.16.0/24);
bare-metal servers running APP1 and APP2 attach through VTEPs on the other side. The dotted elements indicate a VM and
an application workload migrating to a different physical host.]

Virtualization

With the introduction of virtualization within a data center, the manner in which applications and physical hardware interact
has changed drastically. With this change came new networking requirements. Virtual machines, or software-based
computers that run an operating system and applications, can be run on an underlying physical host. A single physical host
machine can be host to numerous virtual machines. The ability to move a virtual machine to any physical host within the
data center allows a more efficient use of resources. If the resources of a physical host become taxed, due to increased
memory or CPU requirements of the virtual machine or applications, a portion or all of the processes of that virtual machine
can be migrated to a different physical host that has sufficient resources to support the virtual machine requirements.

Physical resources within the data center can be monitored and analyzed in order to track historical resource usage, as well
as predict future resource requirements within the environment. Because the workloads are software based, automated
systems can be used to migrate workloads with little to no user intervention.

The ability to move workloads to the physical resources that can most efficiently handle those workloads introduces a new
set of issues with regards to the network that interconnects the workloads. As a workload migrates to a new physical or
logical server, other applications or hosts within the data center environment must be able to continue to communicate with
the migrated resource, regardless of where the resource exists. This can, at times, involve a change of physical server,
change of rack, and at times, even a change of data center locations. In addition, the original MAC address of the virtual or
physical machine on which the application is running changes, and the mechanisms used to interconnect the devices at a
Layer 2 level must be able to adapt to an ever-changing environment.


Agenda: VXLAN

■ Layer 2 Connectivity over a Layer 3 Network


➔ VXLAN Fundamentals
■ VXLAN Gateways


VXLAN Fundamentals
The slide highlights the topic we discuss next.


VXLAN Fundamentals
■ VXLAN is a Layer 2 VPN
• Defined in RFC 7348
• Encapsulates Ethernet Frames within IP packets
• Data plane component
• Encapsulation: Includes adding an outer Ethernet header, outer IP header, outer UDP header, and VXLAN
header to the original Ethernet Frame (original VLAN tag is usually removed)
• Decapsulation: Includes removing all of the above outer headers and forwarding the original Ethernet frame to
its destination (adding the appropriate VLAN tag if necessary)
• Signaling component (learning of remote VXLAN gateways)
• RFC7348 discusses static configuration and multicast using PIM
• Other methods include using EVPN signaling or OVSDB


VXLAN Fundamentals
VXLAN is defined in RFC 7348 and describes a method of tunneling Ethernet frames over an IP network. RFC 7348
describes the data plane and a signaling plane for VXLAN. Although RFC 7348 discusses PIM and multicast in the signaling
plane, other signaling methods for VXLAN exist, including Multiprotocol BGP (MP-BGP) Ethernet VPN (EVPN) as well as Open
vSwitch Database (OVSDB).


VXLAN Data Plane


■ VXLAN packet format
Original VLAN tag is usually removed
during encapsulation

Outer MAC header:  48-bit destination MAC, 48-bit source MAC, optional 32-bit VLAN tag, 16-bit EtherType (0x0800)
Outer IP header:   72 bits of IP header data, 8-bit protocol (UDP), 16-bit checksum, 32-bit source IP (my VTEP), 32-bit destination IP (destination VTEP)
Outer UDP header:  16-bit source port, 16-bit destination port (VXLAN port = 4789), 16-bit UDP length, 16-bit checksum (0x0000)
VXLAN header:      8-bit flags, 24 reserved bits, 24-bit VNI, 8 reserved bits
Payload:           original Layer 2 frame (the original VLAN tag is usually removed during encapsulation), followed by the FCS

VXLAN Network Identifier (VNI): on a VXLAN gateway, the 24-bit VNI is mapped statically through configuration to a
host/server-facing VLAN, allowing for ~16 million broadcast domains in a data center.

VXLAN Data Plane


The diagram shows the encapsulation format of a VXLAN packet.


VTEP (1 of 3)
• A VTEP is the endpoint of a VXLAN tunnel
• It takes Layer 2 frames from VMs and encapsulates them using VXLAN
encapsulation
• Based on preconfigured mapping of VLAN to VNI

[Diagram: VM1 and VM2 on Host Machine 1 sit behind Virtual Switch 1; a static mapping on the VTEP ties the VM-facing
VLAN to an outgoing VNI. VTEP routers R1 (172.16.1.1/24) and R2 (172.16.2.1/24) are separated by an IP fabric
(172.16.0/24), with a bare-metal server (BMS) behind the remote VTEP.]

VTEP, Part 1
The diagram shows how a source VTEP handles an Ethernet frame that must be encapsulated and sent to a remote VTEP.
Here is the step-by-step process taken by the network and R1:

1. VM2 sends an Ethernet frame to the MAC address of the remote BMS.

2. The Ethernet frame arrives on the interface connected to R1 (VTEP).

3. R1 (VTEP) receives the incoming Ethernet frame and associates the VLAN on which the frame arrived with a VNI.

One thing you should notice about the VLAN tagging between the VMs and the virtual switches is that, since the VLAN tags
are stripped before sending over the IP fabric, the VLAN tags do not have to match between remote VMs. This allows for
more flexibility in VLAN assignments from server to server and rack to rack.

VTEP (2 of 3)
• A VTEP is the endpoint of a VXLAN tunnel (contd.)
• Forwards VXLAN packets to a remote VTEP over the Layer 3 network
• Based on MAC-to-remote-VTEP mapping
[Diagram: VM2 on Host Machine 1 sends a frame toward 10.1.1.2. R1 (VTEP, 172.16.1.1/24) encapsulates the original
Ethernet frame (minus its VLAN tag) in a VXLAN packet with an outer destination IP of 172.16.2.1 and forwards it across the
IP fabric (172.16.0/24) toward R2 (VTEP, 172.16.2.1/24), behind which the BMS resides.]

VTEP, Part 2
1. R1 (VTEP) analyzes the local VXLAN bridging table to determine which remote VTEP has advertised the MAC
address associated with the destination MAC address in the received frame.

2. R1 identifies which VTEP interface (tunnel) next hop should be used to forward the frame.

3. R1 encapsulates the original Ethernet frame in a VXLAN/IP packet with a destination IP address of the remote
VTEP address, and forwards the IP packet to the next physical device in the IP fabric.


VTEP (3 of 3)
■ A VTEP is the endpoint of a VXLAN tunnel (contd.)
• Takes Layer 3 packets received from the remote VTEP and strips the outer MAC, outer IP
header, and VXLAN header
• Forwards resulting Layer 2 frames to the destination based on VNI-to-interface mapping

[Diagram: R2 (VTEP, 172.16.2.1/24) receives the VXLAN packet (outer IP-DA 172.16.2.1) from the IP fabric, strips the outer
MAC, outer IP, and VXLAN headers, and forwards the original Layer 2 frame (IP-DA 10.1.1.2, plus a VLAN tag if necessary)
toward the destination.]

VTEP, Part 3
1. The remote VTEP (R2) receives an IP packet destined to the IP address associated with the local VTEP tunnel
endpoint.

2. The IP packet is decapsulated, which exposes the VXLAN header, which contains the VNI value associated with
the Ethernet segment.

3. The remote VTEP removes the VXLAN encapsulation and forwards the original Ethernet frame out the interface
associated with the destination MAC address in the local bridging table. If the local interface has a VLAN tag
associated with it, the VLAN tag associated with the VNI is included in the transmitted frame.


MAC Address Learning


• Local MAC address
• Locally attached servers/VM MACs are learned from locally received packets
• Remote MAC addresses can be learned in two ways
• Data plane
• Using multicast forwarding of BUM traffic
• Control plane
• Using EVPN signaling to advertise locally learned MACs to remote VTEPs
(Juniper recommended solution)


MAC Address Learning


To forward packets within a bridge, or broadcast, domain, a device must maintain a bridging or switching table that maps
MAC addresses to physical next hops. A VTEP learns about MAC addresses in the following manner:

• Locally attached server/VM/host MACs are learned from locally received packets.

• Remote MAC addresses can be learned in one of two ways:

Through the data plane, using multicast forwarding of BUM traffic.

Through a control plane signaling protocol, such as EVPN, that advertises locally learned MACs to remote
VTEPs (Juniper recommended solution).
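On Junos VTEPs, the resulting MAC tables and the discovered remote VTEPs can be inspected with operational commands such as the following (a sketch; the exact output varies by platform and release):

lab@leaf1> show ethernet-switching table
lab@leaf1> show ethernet-switching vxlan-tunnel-end-point remote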


Multicast MAC Learning (Controller-less)


• BUM traffic with a controller-less overlay
• VTEP sends packets to multicast group address (DA)
• Network must support some form of PIM
• Each VXLAN segment is associated with a multicast group
• Each VTEP joins the multicast group associated with the VNI
• IGMP join is used by the VTEP to join the multicast group if the VTEP is a
server
• IGMP join triggers a PIM join in the network
• A multicast tree is built in the network and the BUM traffic is forwarded to all
members of the multicast group
• VXLAN multicast traffic is also used for discovery of remote VTEP


BUM Traffic
The multicast learning model is one method of handling BUM traffic in a VXLAN-enabled network. In this model, you should
note that the underlay network must support a multicast routing protocol, preferably some form of Protocol Independent
Multicast Sparse Mode (PIM-SM). Also, the VTEPs must support the Internet Group Management Protocol (IGMP) so that the
VTEP can register for the multicast group that is associated with a configured VNI.

For every VNI used in the data center, there must also be a multicast group assigned. Remember that there are 2^24 (~16M)
possible VNIs, so your customer will be able to have up to 2^24 group addresses. Luckily, 239/8 is a reserved set of
organizationally scoped multicast group addresses (2^24 group addresses in total) that can be used freely within your
customer's data center.
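With this model, the group address is tied to the VNI directly in the VLAN definition on a Junos VTEP. A minimal sketch, assuming VNI 5100 uses group 239.1.1.1 (the values are illustrative):

[edit vlans]
v100 {
    vlan-id 100;
    vxlan {
        vni 5100;
        multicast-group 239.1.1.1;
    }
}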


Building the Multicast Tree


• An RPT is a multicast forwarding tree used during the initial flow of
multicast traffic from a new source in a PIM-SM network
• Routers along RPT maintain a state called (*, G)

[Diagram: VTEP A (the receiver, behind R1) and VTEP B (the source, behind its DR) each sit in front of a host machine
running VMs; the RP sits in the fabric between them. The (*,G) state for R1 is: Source: * (any); Group: 239.1.1.1; Incoming
interface: C (the RP-facing interface); Outgoing interface list: B.]

Multicast Trees
With multicast MAC learning, a VTEP sends broadcast frames (such as ARP packets) received from a locally configured VNI to
a multicast group that is manually mapped to the VNI. Remote VTEPs that will receive traffic from a remote VNI send an
IGMP join within the core network, which indicates the desire to receive the traffic.

When a VNI-to-multicast group address mapping is configured, the VTEP (VTEP A in the example) registers interest in
receiving traffic destined to the group (239.1.1.1 in the example) by sending an IGMP join message within the fabric network.
A multicast tree is built to the Rendezvous Point within the multicast domain. With this system, all VTEPs that are configured
to participate in a VNI are registered on the RP as interested receivers for that multicast group, and VTEP A will receive
broadcast frames that are sent to the RP through multicast traffic distribution.


Multicast Tree Forwarding


• BUM packets destined to the VXLAN segment group address are
forwarded down the RPT
• VTEP A learns the IP address of VTEP B and its membership to 239.1.1.1
• Associates MAC of VM3 to VTEP B

[Diagram: VM3, behind VTEP B, sources BUM traffic. VTEP B's DR sends register-encapsulated traffic to the RP as unicast,
and the RP forwards it down the RPT toward R1 and VTEP A, behind which VM2 resides. The (*,G) state for R1 lists
Source: * (any), Group: 239.1.1.1, and Incoming interface: C (the RP-facing interface). Legend: solid arrows indicate BUM
traffic; dotted arrows indicate BUM traffic encapsulated in multicast.]

Multicast Forwarding
When VTEP B receives a broadcast packet from a local VM or host, VTEP B encapsulates the Ethernet frame into the
appropriate VXLAN/UDP/IP headers. However, it sets the destination IP address of the outer IP header to the VNI's group
address (239.1.1.1 on the slide). Upon receiving the multicast packet, VTEP B's DR (the PIM router closest to VTEP B)
encapsulates the multicast packet into unicast PIM register messages that are destined to the IP address of the RP. Upon
receiving the register messages, the RP de-encapsulates the register messages and forwards the resulting multicast packets
down the (*,G) tree. Upon receiving the multicast VXLAN packet, VTEP A does the following:

1. Strips the VXLAN/UDP/IP headers;

2. Forwards the broadcast packet toward the VMs using the virtual switch;

3. If VTEP B was unknown, VTEP A learns the IP address of VTEP B; and

4. Learns the remote MAC address of the sending VM and maps it to VTEP B's IP address.

For all of this to work, you must ensure that the appropriate devices support PIM-SM, IGMP, and the PIM DR and RP
functions.

Although it is not shown in the diagram, once R1 receives the first native multicast packet from the RP (source address is
VTEP B's address), R1 will build a shortest-path tree (SPT) to the DR closest to VTEP B, which will establish (S,G) state on all
routers along that path. Once the SPT is formed, multicast traffic will flow along the shortest path between VTEP A and
VTEP B. However, direct communication between VM2 and VM3 is now possible by transiting the VXLAN tunnel between
VTEP A and VTEP B, since after the initial MAC learning process, the MAC addresses of each VM are registered in the
MAC-to-VTEP tables on R1 and the DR.

Agenda: VXLAN

■ Layer 2 Connectivity over a Layer 3 Network


■ VXLAN Fundamentals
➔ VXLAN Gateways


VXLAN Gateways
The slide highlights the topic we discuss next.


VXLAN Device Functions


• Devices in a VXLAN deployment require Layer 2 or Layer 3 Gateway
capabilities
• Layer 2 Gateway functions (intra-VLAN/VNI forwarding)
• Encapsulates a Layer 2 frame in a VXLAN packet and performs a Layer 3 forwarding
operation (on the IP fabric)
• Decapsulates a VXLAN encapsulated packet and performs a Layer 2 forwarding
operation (to a host or virtual host device)
• VTEPs are Layer 2 Gateways
• Layer 3 Gateway functions (inter-VLAN/VNI/VRF forwarding)
• Receives a VXLAN encapsulated packet and changes the destination VNI of the
underlying frame
• Replaces the source-MAC of the original frame with the MAC of the IRB gateway address
(emulates the L2 frame originating from the Gateway)
• Allows an Ethernet frame to be bridged from one VNI to another VNI

VXLAN Gateway Functions


A VTEP in a VXLAN network is a Layer 2 Gateway. Layer 2 Gateways bridge Ethernet frames within a VLAN/VNI, and do not
forward traffic between different broadcast domains. A Layer 2 Gateway:

• Encapsulates Layer 2 frames in VXLAN packets and performs a Layer 3 forwarding operation on the Layer 3 IP
fabric;

• Decapsulates a VXLAN encapsulated packet and performs a Layer 2 forwarding operation (bridging) to a host or
virtual host device.

A Layer 3 Gateway has a different function. A Layer 3 Gateway:

• Functions as a bridge between VNIs, and allows traffic from one VNI to be forwarded on a different VNI;

• Modifies the source MAC address of the underlying Ethernet frame, and places the local IRB interface virtual
MAC address (gateway MAC) in its place prior to forwarding the Ethernet frame in the new VNI.


VXLAN Layer 2 Gateways


■ A VXLAN gateway is a networking device that can perform the VTEP function
• Often referred to as just a VTEP
• A VXLAN Layer 2 Gateway allows servers (VMs and BMS) within the same broadcast domain
to communicate across a Layer 3 fabric

[Diagram: VM1 and VM2 (10.1.1.0/24) on Host Machine 1 connect through Virtual Switch 1 to a router VTEP (172.16.0.1); a
bare-metal server connects to a second router VTEP (172.16.0.2) across the IP fabric (172.16.0/24). A VXLAN tunnel
between the VTEPs carries the original Ethernet frame (ETH | IP-DA 10.1.1.2, minus its VLAN tag) inside a VXLAN
encapsulation with outer IP-DA 172.16.0.2; the original frame (plus a VLAN tag if necessary) is delivered at the far end.]

VXLAN Layer 2 Gateways


A device that has VTEP functions enabled is a Layer 2 Gateway, as it has the capability to perform VXLAN encapsulation and
decapsulation functions (it is the endpoint of a VXLAN tunnel). A VXLAN Layer 2 Gateway is often just referred to as a VTEP. A
VTEP, or Layer 2 Gateway, allows servers that are directly connected to its physical interfaces to communicate across a Layer
3 fabric or network with devices that are in the same broadcast domain as the original source. This is what is often referred
to as a Layer 2 stretch, or stretching a Layer 2 domain across a Layer 3 domain.


VXLAN Layer 2 Gateway Functions


• VXLAN processing sequence
• Layer 2 Frame arrives at VTEP from VM or Host
• Layer 2 Frame is encapsulated in VXLAN packet (original MAC addresses preserved in encapsulated
frame)
• Packet transits IP fabric (IP forwarding)- outer MAC changes at each hop
• IP packet arrives at destination gateway (destination IP matches VTEP address, so VTEP processes the
packet)
• Destination VTEP removes VXLAN header and forwards native Layer 2 Frame (original source and
destination MAC)
[Diagram: a host (physical or virtual) with IP 10.0.0.1 and MAC AA:AA:AA attaches to a leaf VTEP (R1); a host with IP 10.0.0.2
and MAC BB:BB:BB attaches to a second leaf VTEP (R3). The leaves connect through a spine (R2) in the IP fabric
(172.16.0.0/16). The original frame carries SRC MAC AA:AA:AA, DST MAC BB:BB:BB, SRC IP 10.0.0.1, DST IP 10.0.0.2 end
to end.]

VXLAN Layer 2 Gateway Functions


When a Layer 2 frame arrives at a VTEP from a VM or a host, the entire frame is encapsulated in a VXLAN packet. The
original IP address and MAC address are preserved within the encapsulated packet. The encapsulated packet is forwarded
across the IP fabric, where the outer MAC address is modified at each hop (which is the standard IP forwarding process).

When the IP packet arrives at the destination gateway, which is the remote VTEP, the VTEP identifies the destination address
as a local address and removes the outer Layer 3 header to process the packet. Once the outer header is removed, the inner
VXLAN header is identified and processed according to the information within that header.

Once the VXLAN header information is processed, the locally connected broadcast domain is identified. The outbound
interface is identified based on the VNI tag in the VXLAN header and the local switching table, and the original frame is
forwarded toward the destination device based on the switching table information. During this process, the IP addresses of
the original packet are not examined.


VXLAN Layer 2 Processing


• Layer 2 Gateway (VTEP) must be able to encapsulate/decapsulate Layer 2 frames and forward
across IP Fabric network
• Transit routers forward IP packets (no VXLAN capabilities required)
[Diagram: the same leaf-spine-leaf path as the previous slide. The inner frame (SRC MAC AA, DST MAC BB, SRC IP 10.0.0.1,
DST IP 10.0.0.2) is preserved end to end, while the outer headers change per hop: between R1 and R2 the outer frame uses
SRC MAC R1-MAC and DST MAC R2-MAC; between R2 and R3 it uses SRC MAC R2-MAC and DST MAC R3-MAC. In both
cases the outer IP header carries SRC IP R1-VTEP-IP and DST IP R3-VTEP-IP.]

VXLAN Layer 2 Processing


A more detailed examination of the process displays the information in the packets as they transit the VXLAN domain. A
Layer 2 Gateway, or VTEP, must be able to perform the encapsulation and decapsulation processes for VXLAN packets.
Transit devices within the network are not exposed to the VXLAN header information, and therefore are not required to have
any special processing capabilities. Only the VXLAN Gateways must have VXLAN processing capabilities.


VXLAN Layer 2 Gateway Capable Devices


• VXLAN Layer 2 Gateway support
• Contrail vRouter (virtual router on host)
• QFX5100, 5110, 5120
• QFX5200 Series
• QFX 10000 Series
• MX Series
• vMX

[Diagram: physical and virtual hosts attach to VTEP-capable devices on both sides of an IP fabric.]

VXLAN Layer 2 Gateway Devices


Software capabilities of networking devices change over time, as new software is developed. At the time this course was
developed, the following Juniper Networks platforms support Layer 2 VXLAN Gateway functions:

• Contrail vRouter;

• QFX 5100 Series (including the QFX 5100, 5110, and 5120);

• The QFX 5200 Series;

• The QFX 10000 Series (fixed format and modular chassis);

• MX Series routers; and

• vMX virtual router.


VXLAN Layer 3 Gateways


• A VXLAN Layer 3 Gateway provides a gateway (IRB) between VXLAN network segments and the rest of the world

[Diagram: VM1 and VM2 (10.1.1.0/24) on Host Machine 1 sit behind a VTEP (172.16.0.1/24); Router B (172.16.0.2/24) is
the VXLAN Layer 3 Gateway across the IP fabric (172.16.0/24), using VNI 5001. In the logical view of Router B, an IRB
interface (10.1.1.254) connects the default switch to inet.0: the IRB enables the device to perform the VXLAN Layer 3
Gateway function (inter-domain bridging) in addition to the VXLAN Layer 2 Gateway function. A frame from VM2 destined to
1.1.1.1 is sent to the default gateway MAC (Router B's IRB), carried across the VXLAN tunnel, and Router B performs a
lookup and forwards a new Ethernet frame carrying the original IP packet (over a VXLAN tunnel to a remote VTEP if
necessary).]

VXLAN Layer 3 Gateway Functions


To perform VXLAN Layer 3 Gateway functions, a device must have the capability to perform additional forwarding tasks. A
Layer 3 gateway forwards packets received from a VXLAN domain to a device that is not within the original VNI or broadcast
domain.

A Layer 2 gateway device forwards a decapsulated frame to a locally attached device based on the VNI in the VXLAN packet
without examining the underlying Ethernet frame. A Layer 3 gateway device must have the capability to forward the original
packet to a device that may reside within a different broadcast domain than the original frame, which requires a different
process.

When a Layer 2 frame arrives at a VTEP from a VM or a host, the entire frame is encapsulated in a VXLAN packet. The
original IP and MAC addresses (source and destination) are preserved within the encapsulated packet. The encapsulated
packet is forwarded across the IP fabric, where the outer MAC address is modified at each hop (which is the standard IP
forwarding process). During this process, the inner frame is unmodified.

The key difference between a Layer 2 Gateway and a Layer 3 Gateway is how the packet is processed when it arrives at the
remote VTEP that is also acting as a Layer 3 Gateway. Unlike with a Layer 2 Gateway, a Layer 3 Gateway must remove the
original source MAC address and replace it with the virtual MAC address associated with the L3 Gateway within the
destination VNI, and then bridge the frame to a different VNI than the source VNI. To perform this task, the IRB of the L3
Gateway must be set as the "default gateway" for all hosts within the broadcast domain, and the IRB interface must be
configured within the VNI as a host address that is reachable by all other devices within that broadcast domain, just as a
default gateway in a Layer 3 network must reside within the same subnet as the hosts that wish to use the gateway. When a
host, VM2 in the example, is required to transmit data to an IP address that does not belong to the locally configured subnet,
VM2 sends the Ethernet frame to the configured default gateway MAC address, which is the virtual MAC address of the IRB
interface on the L3 Gateway.

When the Ethernet frame arrives at the L3 Gateway, the destination MAC address is an IRB virtual MAC address. The inner
frame is processed, the destination IP address is examined, and the VNI associated with the destination IP address is
determined. The L3 Gateway then replaces the source MAC address in the original Ethernet frame header with the MAC
address of the virtual GW address (IRB virtual MAC). The frame is re-encapsulated in a VXLAN packet on the new VNI and is
forwarded across the VXLAN tunnel to the remote VTEP that is connected to the destination MAC address of the inner frame.
The remote VTEP decapsulates the VXLAN packet and forwards the Ethernet frame to the end host, which sees the frame
come from the MAC address of the gateway and the IP address of the original source.

VXLAN Layer 3 Gateway Functions


■ VXLAN processing sequence
• Layer 2 Frame arrives at VTEP from VM or Host, with destination MAC address of the GW Virtual MAC
• Layer 2 Frame is encapsulated in VXLAN packet
• Packet transits IP fabric (IP forwarding) - outer MAC changes at each hop during fabric transit
• IP packet arrives at Layer 3 gateway (destination IP matches VTEP address, so VTEP processes the packet)
• Destination MAC matches local IRB Virtual MAC, so it is processed locally
• Gateway performs bridging lookup, and forwards frame to next-hop MAC address associated with destination IP
address (re-applying a new VXLAN header for the remote VTEP and the destination VNI if necessary)

[Diagram: a host (physical or virtual) with IP 10.0.0.1 sends a frame with SRC MAC AA, DST MAC GWMAC (the gateway's
virtual MAC), SRC IP 10.0.0.1, and DST IP 192.168.1.100 toward its VTEP; the destination host (192.168.1.100) is reached
through the Layer 3 Gateway VTEP.]

Layer 3 Gateway Functions


When a frame must be routed or bridged to a different broadcast segment, the packet processing differs from a standard
Layer 2 bridging function. With a Layer 3 Gateway, the L3 Gateway acts in a similar fashion to a standard Layer 3 gateway,
with some differences.

A Layer 3 Gateway is also a Layer 2 Gateway, or VTEP. When a device in the network forwards a packet to an IP address that
is part of a different subnet, the device forms an Ethernet frame with the destination MAC address of the locally configured
default gateway, or the next hop in a routing table. The source VTEP receives the frame and performs a MAC address lookup
in the forwarding table. The Layer 3 Gateway must be configured with an IRB interface within the source VNI so that the
virtual MAC of the IRB is reachable by both the original host and the source VTEP.

The source VTEP encapsulates the original frame with the IP/UDP/VXLAN header information and forwards the frame
through the VTEP tunnel that terminates on the Layer 3 GW. Once the IP packet arrives at the gateway VTEP endpoint, the
frame is processed as normal, with the IRB interface as the local "host" to which the original frame is destined. The Layer 3
Gateway processes the Layer 2 frame normally, and identifies the destination IP address in the inner packet. The L3 Gateway
then performs another bridge table lookup to determine the physical next hop (VXLAN tunnel) that is associated with the
destination IP address. Once the next hop is determined, the Ethernet frame, sourced from the IRB virtual MAC address, is
forwarded over the next hop to the destination IP address. If the next hop toward the destination IP address is a VXLAN
tunnel, the appropriate IP/UDP/VXLAN header information is placed on the packet prior to transmission.


VXLAN Layer 3 Processing


■ Layer 3 Gateway (VTEP) must be able to decapsulate VXLAN
packets and forward to a different broadcast domain
• GWMAC is the virtual IRB MAC address on the gateway (the IRB IP address is reachable within the VXLAN domain)

[Figure: a host with IP 10.0.0.1 and MAC AA reaches a host at 192.168.1.100 through R1 (VTEP), an L3 Gateway (VTEP), and R2 (VTEP). The figure shows the original frame (SRC MAC AA, DST MAC GWMAC, SRC IP 10.0.0.1, DST IP 192.168.1.100), the VXLAN encapsulation across the fabric (outer SRC MAC R1MAC, outer DST MAC R2MAC, outer SRC IP R1-VTEP-IP, outer DST IP R2-VTEP-IP), and the frame forwarded toward the destination after routing (SRC MAC R2MAC, DST MAC NHMAC, SRC IP 10.0.0.1, DST IP 192.168.1.100).]

VXLAN Layer 3 Processing


The example shows the encapsulation information as a packet and frame transit the VXLAN network toward a Layer 3
Gateway.


VXLAN Layer 3 Gateway Capable Devices

■ VXLAN Layer 3 Gateway support


• QFX 5110/5120
• QFX 10000 Series
• MX Series
• vMX


VXLAN Layer 3 Gateway Capable Devices


At the time of writing, the following devices are capable of Layer 3 Gateway functions:

• QFX 5110/5120;

• QFX 10000 Series;

• MX Series; and

• vMX Series.


Layer 3 Gateway Placement Options

Spine Layer 3 Gateways:
• Centralizes L3 GWs in the spine
• VXLAN L2 GW at leaf
• No VXLAN requirement for fabric switches

Fabric Layer 3 Gateways:
• Centralizes L3 GWs in the fabric
• VXLAN L2 GW at leaf
• No VXLAN requirement for spine switches

Leaf Layer 3 Gateways:
• Distributes L3 GWs to the leaf nodes
• No VXLAN requirement for spine switches

Layer 3 Gateway Placement


The location of Layer 3 Gateways can have an impact on traffic processing and st at e information. There are three common
locations to place Layer 3 Gateways. Two methods fall into the Centrally Routed Bridge and Edge Routed Bridge deployment
methods.

Centrally Routed Bridging places the VN i transition points, or L3 Gateways, on spine devices or within the IP fabric. With CRB
deployments, Layer 3 GW functions are centra lized, and the spine devices depicted in the f irst of the examples shown are
requ ired to support VXLAN L2 Gateway capab ilities, and are VTEPs within the VXLAN domain.

The second option, Fabric Layer 3 Gateways, is also a CBR deployment type. With the Fabric Layer 3 Gateways, the spine
devices of each data center pod or fabric branch are not required to have any VXLAN capabilities, as they only forward IP
packets to and from leaf devices t o the L3 Gateways within the fabric.

The th ird option shown is an Edge Routed Bridge, or ERB, deployment method. With an ERB deployment, bridging between
broadcast domains, or VNls, occurs on the leaf nodes at the edge of the network.

Each of these designs has benefits and drawbacks. Some benefits of CRB designs are the conso lidation of L3 Gateway
functions with in sub-domains. This allows the deployment of non-L3 GW capable devices at the leaf nodes, and provides
modularity and scalability.

One drawback of a CBR design is a process ca lled hair-pinning. Hair-pinning refers to the forwarding path of all t raffic that
must be forwarded between broadcast domains. Multiple broadcast domains can be connected to the same Leaf device,
and even present on the same host. Traffic that must be forwarded between those broadcast domains must be forwarded to
the L3 Gateway device, wherever it is configured, and then return to the destination host.

The ERB design al leviates the hair-pinning of traffic in the network by transitioning between broadcast domains on the leaf
nodes themse lves, which can t hen forward the transitioned frames to the remote VTEPs directly. The drawback to
configuring ERB deployments is that the leaf nodes must support L3 Gateway functions, wh ich can, at times, require a more
advanced or expensive device at the leaf level. It also req uires the configuration of gateway functions and addresses on each
leaf with in the deployment, which increases configuration and management overhead.


Summary

■ In this content, we:


• Explained VXLAN functions and operations
• Described the control and data plane of VXLAN in a controller-less overlay
• Described VXLAN gateways


We Discussed:
• VXLAN functions and operations;
• The control and data plane of VXLAN in a controller-less overlay; and
• VXLAN Gateways.


Review Questions

1. What is the purpose of VXLAN?


2. What is the role of a VXLAN Layer 2 Gateway device?
3. What is the role of a VXLAN Layer 3 Gateway device?
4. What is the purpose of a VNI?
5. How are MAC addresses propagated in a controller-less VXLAN
network?


Review Questions
1.

2.

3.

4.

5.


Answers to Review Questions


1.
The purpose of VXLAN is to provide a Layer 2 VPN across a Layer 3 network, commonly used in an IP Fabric data
center.
2.
A VXLAN Layer 2 Gateway encapsulates packets in a VXLAN header and decapsulates VXLAN packets within a VXLAN
domain.
3.
A VXLAN Layer 3 Gateway forwards packets between a VXLAN broadcast domain and another VXLAN or non-VXLAN broadcast
domain.
4.
A VNI, or VXLAN Network Identifier, is used to identify a broadcast domain within a VXLAN network.
5.
In a controller-less VXLAN domain, Multicast (PIM) is used to propagate MAC addresses within the VXLAN network by
forwarding BUM traffic between Virtual Tunnel End Points. A control plane protocol, such as EVPN, can also be used to
propagate MAC addresses within the VXLAN network.

Chapter 5: EVPN VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Describe EVPN functionality
• Describe EVPN control in a VXLAN deployment


We Will Discuss:
• EVPN functionality; and
• EVPN control in a VXLAN deployment.


Agenda: EVPN
➔ VXLAN Management
■ VXLAN with EVPN Control
■ EVPN Routing and Bridging


VXLAN Management
The slide lists the topics we will discuss. We will discuss the highlighted topic first.


VXLAN Review
■ Many traditional applications in a data center require Layer 2 connectivity
between devices
[Figure: Host A and Host B connected through a switched network of Layer 2 switches.]
■ VXLAN allows Layer 2 tunneling between devices on the same broadcast
segment across a Layer 3 transit network or fabric
[Figure: Host A and Host B, both in subnet 10.1.1.0/24, connected by a VXLAN tunnel between two L3 switches/routers across an IP fabric (172.16.0/24).]

VXLAN Review
VXLAN is a Layer 2 VPN technology that extends a Layer 2 broadcast domain across a Layer 3 IP domain. VXLAN is commonly
used in data center deployments that use a Layer 3 fabric design.

Layer 2 Applications
The needs of the applications that run on the servers in a data center usually drive the designs of those data centers. There
are many server-to-server applications that have strict Layer 2 connectivity requirements between servers. A switched
infrastructure that is built around xSTP or a Layer 2 fabric (like Juniper Networks' Virtual Chassis Fabric or Junos Fusion) is
perfectly suited for this type of connectivity. This type of infrastructure allows broadcast domains to be stretched across
the data center using some form of VLAN tagging. A Layer 2 fabric has several limitations, however, such as scalability and
manageability.

IP Fabric

Many of today's next generation data centers are being built around IP Fabrics which, as their name implies, provide IP
connectivity between the racks and devices of a data center. How can a next generation data center based on IP-only
connectivity support the Layer 2 requirements of the traditional server-to-server applications? We will discuss the possible
solutions to the Layer 2 connectivity problem.


Multicast Signaled VXLAN


■ VXLAN VNI manually configured on each VTEP
• Multicast groups can be configured to advertise MAC addresses to other VTEPs
• VTEPs must be configured to join multicast groups

[Figure: VTEP-A and VTEP-B each map VNI 1 to multicast group 239.1.1.1 and send joins for 239.1.1.1 into a multicast-enabled IP fabric (172.16.0/24), so that BUM traffic between VM A and VM B can be flooded across the fabric.]

Configure a common multicast group on the participating VTEPs.

Multicast Signaled VXLAN VNI Configuration


Within a VXLAN environment, broadcast domains are identified by using a VXLAN Network Identifier, or VNI. Each broadcast
domain within the VXLAN environment is assigned a unique VNI. In order to forward traffic between remote hosts that belong
to the same virtual network, the Virtual Tunnel Endpoints, or VTEPs, must be configured to participate in a VNI domain.
Multicast protocols can be used within the IP fabric to propagate tunneled traffic between the VTEPs that participate in the
same virtual network.

In a multicast signaled VXLAN deployment, a VTEP must be configured to join a multicast group that is assigned to a virtual
network. This is a manual, static process. When a new host (physical or virtual) is connected to a VTEP, the multicast group
that represents the virtual network in which the new host will participate must be configured on the VTEP.
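A minimal sketch of this kind of static mapping on a Junos OS VTEP follows; the VLAN name, VNI, and group values are illustrative assumptions, and PIM must also be running in the underlay for the group to be forwarded:

    set switch-options vtep-source-interface lo0.0
    set vlans v1 vlan-id 100
    set vlans v1 vxlan vni 1
    set vlans v1 vxlan multicast-group 239.1.1.1

Every VTEP that participates in VNI 1 would need the same VNI-to-group mapping configured by hand, which is the management burden discussed next.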


Multicast Signaled VXLAN Scaling

■ Dynamic data centers


• Virtual hosts can be instantiated, moved, or destroyed
• VTEPs must be manually configured for changes or moves
• Not scalable or dynamic
[Figure: VM A migrates from a host behind VTEP-A to a host behind VTEP-C. VTEP-A still listens to group 239.1.1.1 after the VM migration, and VTEP-C must be configured to listen to group 239.1.1.1 after the VM migration before the VM can participate in VNI 1 again.]

Dynamic Data Centers


In a dynamic data center environment, where virtual hosts can frequently be added, removed, or moved, this process of
adding or removing multicast groups from VTEPs can quickly become unmanageable. Additionally, the flooding mechanism
used to propagate MAC address information across the network increases processing and network traffic overhead, and
requires the implementation and support of PIM within the underlay network.


EVPN Controlled VXLAN Data Center

■ Dynamic data centers


• Virtual hosts can be instantiated, moved, or destroyed
• EVPN leverages BGP to advertise locally connected VMs and VNls to remote
VTEPs
[Figure: after VM A migrates from VTEP-A to VTEP-C, VTEP-A is no longer advertised as a VNI 1 participant, and VTEP-C advertises itself as a VNI 1 participant; VTEP-B learns of both changes through BGP rather than through manual configuration.]

EVPN Control Plane for VXLAN


EVPN, or Ethernet VPN, is a control plane mechanism for VXLAN environments that leverages the route advertising and scale
capabilities of multiprotocol BGP (MP-BGP) to propagate virtual network information and MAC addresses. When a host
(physical or virtual) is attached to a Layer 2 Gateway, the network information (VNI, MAC address, etc.) associated with that
newly attached host is propagated to remote VTEPs using BGP route advertisements.
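A minimal sketch of the EVPN-VXLAN pieces on a Junos OS leaf (VTEP) follows; the VNI and VLAN values are illustrative assumptions, and the route distinguisher, route target, and BGP session configuration are covered later in this chapter:

    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list 1
    set switch-options vtep-source-interface lo0.0
    set vlans v1 vlan-id 100
    set vlans v1 vxlan vni 1

With this configuration, locally learned MAC addresses in VNI 1 are advertised to remote VTEPs over the EVPN (MP-BGP) sessions instead of being flooded with multicast.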


Agenda: EVPN
■ VXLAN Management
➔ VXLAN with EVPN Control
■ EVPN Routing and Bridging


VXLAN with EVPN Control


The slide highlights the topic we discuss next.


EVPN Overview
■ VXLAN is a Layer 2 VPN
• Defined in RFC 7348/RFC8365
• Encapsulates Ethernet Frames into IP packets
■ EVPN is a control plane
• Based on BGP
• Highly scalable
• Ability to apply policy
• All active forwarding
• Multipath forwarding over the underlay
• Multipath forwarding to active/active dual-homed server
• Control plane MAC learning
• Reduced unknown unicast flooding
• Reduced ARP flooding
• Distributed Layer 3 gateway capabilities

VXLAN with EVPN Control


VXLAN is defined in RFC 7348 and describes a method of tunneling Ethernet frames over an IP network. RFC 7348
describes the data plane and a signaling plane for VXLAN. Although RFC 7348 discusses Protocol Independent Multicast
(PIM) and multicast in the signaling plane, other signaling methods for VXLAN exist, including Multiprotocol Border Gateway
Protocol (MP-BGP) EVPN as well as the Open vSwitch Database (OVSDB).

EVPN Control Plane

Although we cover EVPN as the signaling component for VXLAN in this module, it should be noted that EVPN can also be
used as the signaling component for MPLS and MPLS-over-GRE encapsulations as well. Those encapsulation types
are not covered in this course.

EVPN leverages the scale and flexibility of MP-BGP to manage and control a VXLAN environment. An EVPN controlled VXLAN
can utilize multiple equal-cost paths for load sharing and redundancy, which provides multipath forwarding over the
underlay network. Additionally, it provides the capability of multihoming servers with active/active forwarding.

A key difference between EVPN-based VXLAN and traditional multicast-based VXLAN signaling is the manner in which MAC
address information is propagated between VTEPs. With EVPN, locally learned MAC addresses are advertised to remote
VTEPs by using BGP updates, rather than the multicast flooding mechanisms that are used in multicast-signaled VXLANs.
This process reduces unknown unicast flooding and ARP flooding. The use of EVPN as a control plane also provides the
ability to implement distributed Layer 3 gateways, which can share traffic and provide redundancy to the network.


VPN Terminology - Control Plane


■ EVPN is based on MP-BGP
• AFI 25: Layer 2 VPN
• SAFI 70: EVPN
• Runs between VXLAN Gateways (VTEPs) that support the capability

[Figure: Leaf1 and Leaf2, VXLAN L2 Gateways serving Host A and Host B, exchange BGP Open messages across the IP fabric; each Open message advertises the capability AFI 25, SAFI 70.]

MP-BGP

EVPN is based on Multiprotocol BGP (MP-BGP). It uses the Address Family Identifier (AFI) of 25, which is the Layer 2 VPN
address family. It uses the Subsequent Address Family Identifier (SAFI) of 70, which is the EVPN address family.

BGP is a proven protocol in both service provider and enterprise networks. It has the ability to scale to millions of route
advertisements. BGP also has the added benefit of being policy oriented. Using policy, you have complete control over route
advertisements, allowing you to control which devices learn which routes.

Active/Active Forwarding
■ All active forwarding allows for:
• Multipath in the fabric

[Figure: Leaf1 load-shares traffic from Host A across multiple equal-cost paths through the IP fabric toward Leaf2 and Host B.]

• Multipath for a multihomed server

[Figure: Host B is dual-homed through a LAG to Leaf2 and Leaf3; Leaf1 reaches Host B over VXLAN tunnels to either leaf (solid lines show VXLAN tunnels, dotted lines show BUM traffic).]

Active/Active Forwarding
When using PIM in the control plane for VXLAN, it is not really possible to have a server attach to two different top-of-rack
switches with the ability to forward data over both links (that is, with both links active). When using EVPN signaling in the
control plane, active/active forwarding is fully supported. EVPN allows VXLAN gateways (Leaf1 at the top of the slide) to use
multiple paths and multiple remote VXLAN gateways to forward data to multihomed hosts. Also, EVPN has mechanisms (like
split horizon) to ensure that broadcast, unknown unicast, and multicast (BUM) traffic does not loop back toward a
multihomed host.


Unknown Unicast

• EVPN minimizes unknown unicast flooding


• Locally learned MAC addresses are advertised to remote VXLAN gateways
• Makes the MAC "known" when it otherwise would have been "unknown"
[Figure: 1. Host B sends a frame to Leaf 2. 2. Leaf 2 forwards the frame toward Host C over the VXLAN tunnel and records Host B's MAC against its local port (ge-0/0/2) in its MAC table. 3. Leaf 2 advertises the newly learned MAC in an EVPN MAC Advertisement route (MAC = MAC Host B, next hop = Leaf 2 IP); Leaf 1 installs Host B's MAC in its MAC table with the VXLAN tunnel to Leaf 2 as the next hop.]

Minimizing Unknown Unicast Flooding


The diagram shows how EVPN signaling minimizes unknown unicast flooding.

1. Leaf2 receives an Ethernet frame with a source MAC address of HostB and a destination MAC address of
HostC.

2. Based on a MAC table lookup, Leaf2 forwards the Ethernet frame to its destination over the VXLAN tunnel.
Leaf2 also populates its MAC table with HostB's MAC address and associates it with the outgoing interface.

3. Since Leaf2 just learned a new MAC address, it advertises the MAC address to the remote VXLAN gateway,
Leaf1. Leaf1 installs the newly learned MAC address in its MAC table and associates it with an outgoing
interface, the VXLAN tunnel to Leaf2.

Now, when Leaf1 needs to send an Ethernet frame to HostB, it can send it directly to Leaf2 because it is a known MAC
address. Without the sequence above, Leaf1 would have no MAC entry in its table for HostB (making the frame destined to
HostB an unknown unicast Ethernet frame), so it would have to send a copy of the frame to all remote VXLAN gateways.
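One way to observe this behavior in the lab is with the following operational commands, which exist on Junos OS EVPN-VXLAN platforms (sample output is not shown because it varies by platform and release):

    show ethernet-switching table
    show evpn database
    show route table bgp.evpn.0

The first command lists the MAC table, including MACs that point to remote VTEP interfaces; the second lists MAC and MAC/IP entries learned locally and through EVPN; and the third lists the received EVPN routes, including the Type 2 MAC advertisements.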


Proxy ARP

■ A VXLAN gateway can learn MAC/IP bindings by snooping certain messages to or from the hosts
• Local VTEP can be configured to respond to locally received ARP requests
[Figure: 1. Host B sends a frame to Leaf 2. 2. Leaf 2 advertises the learned MAC of Host B (with its IP binding) to Leaf 1 in an EVPN MAC Advertisement route. 3. When Host A broadcasts an ARP request for 10.1.1.2, Leaf 1 answers locally with an ARP reply giving Host B's MAC, so the broadcast never has to cross the fabric.]

Proxy ARP
The EVPN RFC mentions that an EVPN Provider Edge (PE) router, Leaf1 in the example, can perform Proxy ARP. It is possible
that if Leaf2 knows the IP-to-MAC binding for HostB (because it was snooping some form of IP traffic from HostB), it can send
the MAC advertisement for HostB that also contains HostB's IP address. Then, when HostA sends an ARP request for HostB's
IP address (a broadcast Ethernet frame), Leaf1 can send an ARP reply back to HostA without ever having to send the
broadcast frame over the fabric.


Distributed Layer 3 Gateway


■ Multiple Layer 3 gateways can act as default gateway for a single broadcast
domain
• One IP/MAC address with multiple owners (anycast)
• Advertised to other VTEPs using a MAC/IP advertisement
• Remote VTEPs load share traffic toward multiple gateways
[Figure: Spine A and Spine B both configure irb.0 = 10.1.1.254 and connect the fabric to the Internet; Host A (10.1.1.1), Host B (10.1.1.2), and Host C (10.1.1.3) attach to leaf nodes below.]

1. Spine A sends an EVPN MAC Advertisement route with IP address 10.1.1.254 and the GW virtual MAC address, with
the next hop set to Spine A's VTEP (IP) address.

2. Spine B sends an EVPN MAC Advertisement route with IP address 10.1.1.254 and the GW virtual MAC address, with
the next hop set to Spine B's VTEP (IP) address.

3. Leaf nodes see 10.1.1.254 with the GW virtual MAC through multiple next hops.

Distributed Layer 3 Gateways

The EVPN control plane also helps enable distributed Layer 3 gateways. In the slide, HostC has a default gateway configured
of 10.1.1.254. SpineA and SpineB have been enabled as VXLAN Layer 3 Gateways. They both have been configured with the
same virtual IP address of 10.1.1.254. The spine devices also share the same virtual MAC address, 00:00:5e:00:01:01 by
default (the same value as VRRP, even though VRRP is not used). SpineA and SpineB each send a MAC Advertisement to
LeafC for the same MAC. LeafC can load-share traffic from HostC to the default gateway.

Although an automatic virtual MAC address for the Layer 3 Gateway is generated, Juniper Networks recommends manually
configuring a virtual MAC address for the IRB interfaces.
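A minimal sketch of the anycast gateway configuration on each spine follows; the unit number, the per-spine IRB address, and the virtual MAC value are illustrative assumptions (only the shared virtual gateway address 10.1.1.254 comes from the example), and the virtual-gateway-v4-mac statement is available only on platforms and releases that support setting the virtual MAC manually:

    set interfaces irb unit 0 family inet address 10.1.1.251/24 virtual-gateway-address 10.1.1.254
    set interfaces irb unit 0 virtual-gateway-v4-mac 00:00:5e:00:53:01

Each spine uses its own physical IRB address, while the virtual-gateway-address and the manually configured virtual MAC are identical on both spines so that the leaves can load-share toward either gateway.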


Agenda: EVPN
• VXLAN Management
• VXLAN with EVPN Control
➔ EVPN Routing and Bridging


EVPN Routing and Bridging


The slide highlights the topic we discuss next.


EVPN Terminology
• VXLAN is a Layer 2 VPN
• Terminology often overlaps with other VPN terminology
• PE router (Provider Edge) - VTEP Gateway or Leaf Node (VXLAN Gateway)
• CE device (Customer Edge) - Host (or virtual host within a physical host)
• Site - set of hosts or virtual hosts connected to a VTEP Gateway
• NVE - Network Virtualization Edge
• NVO - Network Virtualization Overlay
• IBGP is used regardless of underlay routing protocol


EVPN Terminology

A VXLAN environment is a Layer 2 VPN. Terminology in an EVPN VXLAN environment resembles that of a traditional VPN
environment, including the concept of:

• Provider Edge (PE) devices: A VPN edge device that performs encapsulation and decapsulation (VXLAN
Gateway, or VTEP). These are oftentimes referred to as Leaf nodes;

• Customer Edge (CE) devices: A device that is associated with a customer site. In a data center, a "site" is often
a host (physical or virtual) that is connected to a Leaf node; and

• Sites: A set of hosts or virtual hosts connected to a VTEP gateway. It is common to have multiple hosts or virtual
hosts that participate in the same virtual network connected to the same VTEP or Leaf node.

Regardless of the underlay routing protocol, MP-IBGP is used for the EVPN control plane signaling.


EVPN Instances
■ EVPN Instance (EVI)
• EVI is a virtual switch
• EVI is a MAC VRF
■ Bridge domain
• BD is equivalent to a VLAN
• BD is a Layer 2 broadcast domain

[Figure: spine and leaf devices connected by IP-IBGP (EVPN) sessions and VXLAN tunnels; within a leaf, an EVPN Instance (EVI) = virtual switch = MAC VRF, and a bridge domain (BD) = VLAN. Host A, Host B, and Host C attach to the leaf nodes.]

EVPN Instances
An EVPN Instance (EVI) is a virtual switch within a VTEP. It is essentially a MAC VRF that is unique to a broadcast domain.
Because multiple broadcast domains may be connected to a single VTEP, the VTEP maintains a separate MAC VRF for each
instance in order to maintain traffic isolation between broadcast domains.

Bridge Domain

A bridge domain is equivalent to a VLAN. It is a Layer 2 broadcast domain within a VXLAN environment. There can be many
bridge domains for a given EVI.


Ethernet Segment
■ An Ethernet segment is a set of Ethernet links that connects a
customer site to one or more leaf devices
• Single-homed sites use the reserved Ethernet Segment ID (ESI) of 0 (0x00:00:00:00:00:00:00:00:00:00)
• Multihomed sites must be assigned a unique, 10-octet ESI
• Interfaces A and B belong to the same Ethernet segment and should be assigned a network-wide, unique, non-reserved ESI
• Site 2's ESI, interface C, would be assigned an ESI of 0
• Used to prevent traffic replication and loops on multihomed sites
Reserved ESIs:
• Single-homed site: 0x00:00:00:00:00:00:00:00:00:00
• MAX-ESI: 0xFF:FF:FF:FF:FF:FF:FF:FF:FF:FF

[Figure: Site 1 (host1) is multihomed through interfaces A and B to Leaf1 and Leaf2, which share ESI 0x0:1:1:1:1:1:1:1:1:1 and advertise Type 1 routes for the segment; Site 2 (host2) is single-homed through interface C to Leaf3 with ESI 0x0.]

Ethernet Segments

When a host is multihomed to multiple Leaf nodes, the Link Aggregation Group (LAG) represents a single logical connection
to the VXLAN domain. To enable the EVPN control plane to properly manage the multiple links in the LAG, the LAG is assigned
a unique Ethernet Segment ID (ESI). By default, a single-homed link is assigned the ESI value of 0
(0x00:00:00:00:00:00:00:00:00:00). A single-homed site does not require loop prevention or load sharing across multiple
links.

With a multihomed site, a unique 10-octet ESI must be assigned to the link bundle. This value is assigned by the
administrator. In the example, the LAG that connects CE1 to Leaf1 and Leaf2 is assigned a value of 0x0:1:1:1:1:1:1:1:1:1.
This enables the EVPN control plane to identify that the device connected to Leaf1 and Leaf2, over link A and link B, is the
same site. Because the EVPN control plane advertises the ESI associated with the LAG group to all VTEPs, remote VTEPs can
install multiple next hops associated with MAC addresses assigned to CE1, and two VXLAN tunnels are available for
forwarding. In addition, Leaf1 and Leaf2 are able to manage the LAG bundle without implementing MC-LAG, and traffic loops
and traffic replication between the two LAG interfaces can be prevented.

EVPN Route Distinguisher


■ Value combined with routes advertised in a VPN environment
• Ensures that routes from different tenants/clients remain unique within a service provider or data center domain
• Expands an 1Pv4 route from a 4-byte value to 12 byte value

Route Distinguisher Original Route Prefix Modified Route


Type O: 10458:22 1.1 .1.1 /24 10458:22: 1.1.1 .1/24
Type 1: 1.1.1.1 :33 1.1.1.1 /24 1 . 1 . 1. 1 :33: 1. 1 . 1. 1/24

■ Route Distinguisher Formats


8-Byte Route Distinguisher
• Type 0:
• ADM field is 2 bytes. Should contain an ASN from IANA
(Type) (ADM) (AN)
• AN field is 4 bytes: A number assigned by service provider
• Example: 10458:22

• Type 1:
• RFC7432 recommended
• ADM field is 4 bytes: Should contain an IP address assigned by IANA
• AN field is 2 bytes: A number assigned by service provider
• Example: 1.1.1.1 :33
C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Route Distinguisher

The route distinguisher is a value that is combined with routes that are advertised in a VPN environment. The purpose of the
route distinguisher is to provide a mechanism that can be used to ensure routes within a multi-tenant domain remain unique
when advertised in a service provider network. A data center is essentially a "multi-tenant service provider" environment, where
tenants are hosts that belong to independent broadcast domains, and the data center is the service provider that interconnects
those hosts.

When routes are advertised within the data center (service provider) environment, they may be stored in local RIB-IN databases
on each PE device (Leaf node). To avoid any possible overlap of the site addresses between different broadcast domains, a
route distinguisher that is unique to each Ethernet segment is prepended to each route before it is advertised to remote PEs. In
the example, two clients, or broadcast domains, are using the same IP prefix of 1.1.1.0/24. The route distinguisher allows the
service provider to distinguish between the 1.1.1.0/24 network of each customer by making each network address unique while
it transits the provider domain.

The route distinguisher can be formatted two ways:

• Type 0: This format uses a 2-byte administration field that codes the provider's autonomous system number,
followed by a 4-byte assigned number field. The assigned number field is administered by the provider and should
be unique across the autonomous system.

• Type 1: This format uses a 4-byte administration field that is normally coded with the router ID (RID) of the
advertising PE router, followed by a 2-byte assigned number field that carries a unique value for each VRF table
supported by the PE router.

The examples on the slide show both the Type 0 and Type 1 route distinguisher formats. The first example shows the 2-byte
administration field with the 4-byte assigned number field (Type 0).

RFC 7432 recommends using the Type 1 route distinguisher for EVPN signaling.
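On a Junos OS VTEP using the default switch instance, the route distinguisher can be set under switch-options; a minimal sketch reusing the Type 1 value from the slide (the address itself is only an example) would be:

    set switch-options route-distinguisher 1.1.1.1:33

In practice, the 4-byte administration field is usually set to the local loopback address so that each VTEP ends up with a unique RD.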


EVPN Route Target Extended Community


■ BGP Route Target
• An extended BGP community associated with an MP-BGP route
• Used to identify the source or destination VRF of an MP-BGP route

■ VRF-Import Policy
• Policy that examines received MP-BGP routes and identifies route target communities associated with
each route
• Used to import routes with a matching route target community into an associated VRF routing table
■ VRF-Export Policy
• Policy that applies a configured route target community value to advertised MP-BGP routes
• Used to announce connectivity to local VRF instances
■ Route Isolation between MAC-VRF tables
• Careful policy administration allows separation of MAC-VRF entries on a node
• Example route target community: target:64520:1


Route Target Extended Community


Each EVPN route advertised by a PE router contains one or more route target communities. These communities are added using
VRF export policy or explicit configuration.

When a PE router receives route advertisements from remote PE routers, it determines whether the associated route target
matches one of its local VRF tables. A matching route target causes the PE router to install the route into the VRF table whose
configuration matches the route target.

Because the application of policy determines a VPN's connectivity, you must take extra care when writing and applying VPN
policy to ensure that the tenant's connectivity requirements are faithfully met.
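A minimal sketch of the simplest form of route target configuration on the default switch instance follows, reusing the example community shown on the slide (the community value is illustrative):

    set switch-options vrf-target target:64520:1

This single statement creates hidden VRF import and export policies that tag advertised EVPN routes with target:64520:1 and accept received routes carrying that community; the next two slides look at the export and import halves of that behavior in more detail.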


VRF Export Policy


■ VRF export policies determine which routes are advertised, and what target
communities are attached
Applying a vrf-target policy to a MAC-VRF causes a chain reaction. (Example shows Type 2 route handling):

1. Locally learned MACs are copied into EVl-specific VRF table as EVPN Type 2 routes.
2. Locally generated EVPN routes are advertised to remote nodes with target community attached .

[Figure: host1 (10.10.10.11/24, MAC 01:01:01:01:01:01) at Site 1 attaches to Leaf1 (2.2.2.2/32) through ge-0/0/0; host2 (10.10.10.22/24, MAC 05:05:05:05:05:01) at Site 2 attaches to Leaf2 (4.4.4.4/32); an MP-IBGP (EVPN) session runs between the leaves. 1. The locally learned MAC in the green EVPN MAC table (01:01:01:01:01:01 > ge-0/0/0) is copied into the EVI's VRF table (default-switch.evpn.0) as a Type 2 route tagged with target:1:1. 2. The MAC/IP Advertisement (01:01:01:01:01:01, next hop Leaf1's lo0, target:1:1) is advertised to the remote leaf.]

VRF Export Policy

VRF export policy for EVPN is applied using the vrf-target statement. In the example, the statement vrf-target
target:1:1 is applied to the green EVI; on the leaf that learns host1's MAC locally, that statement causes all locally learned
MACs (in the MAC table) to be copied into the VRF table as EVPN Type 2 routes. Each of the Type 2 routes associated with
locally learned MACs is tagged with the community target:1:1. These tagged routes are then advertised to all remote PEs.

The vrf-target statement always sets the target community (using hidden VRF import and export policies) of Type 1
routes. By default, the vrf-target statement also sets the target community of Type 2 and Type 3 routes as well.


VRF Import Policy


■ VRF import policies match on target communities to determine which routes are
accepted for use
Applying a vrf-import statement to an EVI causes a chain reaction. (Example shows Type 2 route handling):

1. A received route is compared to the vrf-import policy. If the route's target community matches the import policy, the route
is copied into the EVPN RIB-IN and also into the EVI's VRF table.
2. The newly learned MAC (from the update) is copied into the EVI's MAC table, and a recursive lookup is used to determine the
outgoing VXLAN tunnel.
[Figure: Leaf2 (4.4.4.4/32) receives the MAC/IP Advertisement (01:01:01:01:01:01, next hop Leaf1's lo0, target:1:1) from Leaf1 (2.2.2.2/32). 1. Because the target community matches the import policy, the route is placed in the global RIB-IN (bgp.evpn.0) and in the EVI's VRF table (default-switch.evpn.0). 2. The MAC is copied into the green EVPN MAC table with the VXLAN tunnel to Leaf1 as the next hop.]

VRF Import Policy

VRF import policy can be applied using the vrf-target statement, or it can be enabled by manually writing a policy and
then applying it with the vrf-import statement. The vrf-target statement is used to enable an export policy that
advertises EVPN routes tagged with the target community. The statement also enables the associated import policy, which
will accept routes that are tagged with that target community. At a minimum, you must configure the vrf-target
statement to enable export policy. To override the import policy instantiated by that statement, you can apply the
vrf-import statement.

In the example, vrf-target target:1:1 is applied to Leaf2's EVI. When Leaf2 receives the MAC Advertisement
from Leaf1, it runs the route through the configured import policy, which accepts routes tagged with target:1:1. Once
accepted, the route is copied into Leaf2's global RIB-IN table and then copied into the appropriate VRF table (the one
configured with the vrf-target target:1:1 statement). Finally, the route is converted into a MAC entry and stored in
Leaf2's MAC table for the Green EVI. The outgoing interface associated with the MAC is the VXLAN tunnel that terminates on
Leaf1.
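A sketch of what an explicit import policy might look like follows; the community and policy names are illustrative assumptions, and the vrf-import statement is only needed when you want to override the import behavior that vrf-target creates automatically:

    set policy-options community green-rt members target:1:1
    set policy-options policy-statement green-import term match-rt from community green-rt
    set policy-options policy-statement green-import term match-rt then accept
    set policy-options policy-statement green-import then reject
    set switch-options vrf-import green-import

Routes whose route target matches the green-rt community are accepted into the EVI; everything else is dropped by the final reject action.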


BUM Traffic

• Forwarding of BUM traffic can be done in two different ways


• Underlay replication: Enable multicast using some form of PIM in underlay
network to handle BUM replication
• Ingress replication: Ingress leaf copies and forwards each BUM packet it
receives on its server-facing interfaces to each remote leaf that belongs to the
same EVPN
• When EVPN signaling is used, Juniper Networks devices only support ingress replication


BUM Traffic
When EVPN signaling is used with VXLAN encapsulation, Juniper Networks devices only support ingress replication of BUM
traffic. That is, when BUM traffic arrives on a VTEP, the VTEP will unicast copies of the BUM packets to each of the individual
VTEPs that belong to the same EVPN. This is the default behavior and setting on a Junos OS device.
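Because ingress replication is the default, no extra configuration is strictly required; the statement below simply makes the choice explicit and is shown for illustration only:

    set protocols evpn multicast-mode ingress-replication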


EVPN Unicast and Broadcast Routes


■ EVPN is signaled using MP-BGP and has eight route types
Route Type | Description | Usage | Standard
Type 1 | Auto-Discovery (AD) route | Multipath and Mass Withdraw | RFC 7432
Type 2 | MAC/IP route | MAC/IP Advertisement | RFC 7432
Type 3 | Multicast route | BUM Flooding | RFC 7432
Type 4 | Ethernet segment route | ES Discovery and Designated Forwarder (DF) Election | RFC 7432
Type 5 | IP Prefix route | IP Route Advertisement | draft-rabadan-l2vpn-evpn-prefix-advertisement
Type 6 | Selective Multicast Ethernet Tag route | Enables efficient core MCAST forwarding | draft-ietf-bess-evpn-igmp-mld-proxy
Type 7 | IGMP-Join synch | Synchronizes multihomed peers when an IGMP-Join is received | draft-ietf-bess-evpn-igmp-mld-proxy
Type 8 | IGMP-Leave synch | Synchronizes multihomed peers when an IGMP-Leave is received | draft-ietf-bess-evpn-igmp-mld-proxy

EVPN Routes
EVPN routes are classified according to route types. Each route type performs a specific task in an EVPN environment. We
will discuss the following route types in more detail:

• Type 1: Auto-Discovery (AD) Route;

• Type 2: MAC/IP Route;

• Type 3: Multicast Route;

• Type 4: Ethernet Segment Route;

• Type 5: IP Prefix Route

The following route types are discussed in a different module:

• Type 6: Selective Multicast Ethernet Route;

• Type 7: IGMP-Join synch;

• Type 8: IGMP-Leave synch


BGP Route Validation

• BGP validates the protocol next hop of all received routes


• Protocol next hop is the remote gateway used to reach an advertised
destination
• Recursive lookup of advertised protocol next hop
• Unicast route: Protocol next hop is validated through the inet.0 table
• MPLS VPN route: Protocol next hop is validated through the inet.3 table
• EVPN route: Protocol next hop is validated through the :vxlan.inet.0 table
- An EVPN route is discarded if the :vxlan.inet.0 table does not contain a route that
points to the advertised remote gateway
- The :vxlan.inet.0 table is populated by local interfaces (including local VTEP interfaces)
- A route to the remote VTEP loopback is automatically placed in the :vxlan.inet.0 table as a
static route

C> 2019 Juniper Networks, Inc All Rights Reserved

Route Validation
A BGP route advertisement consists of a destination prefix. The prefix that is advertised includes a set of properties
associated with that prefix. These properties are used for a variety of tasks. One of the properties is called the protocol next
hop. The protocol next-hop property lists the remote gateway to use to reach the advertised prefix. The remote gateway, or
protocol next hop, is not always set to the device that advertises the route, since an advertising router may simply be relaying
the route from another peer.

If a route prefix is received by a BGP router, the BGP router must validate whether or not the remote gateway to the prefix is
reachable in the local routing table. After all, a device can't forward an IP packet toward a remote gateway if the device does
not have a route toward that gateway.

The process of validating the protocol next hop of a received route is called a recursive lookup. A recursive lookup describes
the process of looking within a local routing table to find a physical next hop that points toward a received remote next hop,
or gateway address. In the case of an IP prefix, the local router examines the entries in the local inet.0 routing table to
determine the next physical hop toward the advertised protocol next hop. The physical next hop to the remote gateway is
then associated with the advertised prefix and installed in the local routing table as the physical next hop for the advertised
prefix.

In the case of an MPLS VPN route, an MPLS destination must be reachable through an MPLS tunnel. MPLS tunnels are
installed in the inet.3 routing table, and therefore all MPLS VPN advertised routes must have a route to the protocol next hop
present in the inet.3 routing table.

In the case of an EVPN advertised destination, the EVPN destination must be reachable through a VXLAN tunnel before it
can be placed in the local bgp.evpn.0 routing table. The VXLAN tunnel routes are installed in the :vxlan.inet.0 routing table.
Therefore, when an EVPN route is received by a router through MP-BGP, the route to the protocol next hop of that prefix,
which is the loopback address of the remote VTEP device, must exist in the :vxlan.inet.0 route table. When a local subnet is
configured as a VNI, the local interface is added to the :vxlan.inet.0 table. When a remote VTEP is discovered, a route to the
remote VTEP loopback is placed in the :vxlan.inet.0 table as an automatically generated static route. The VTEP logical tunnel
interfaces are created when a remote VTEP advertises connectivity to one of the VNIs that is configured locally. If a remote
BGP peer advertises an EVPN route for a VNI that does not correspond to a locally configured VNI, then the local VTEP will
not have a VTEP tunnel to the remote gateway, and therefore will not have a route to the protocol next hop. This causes the
local VTEP to drop the advertised EVPN prefix, because the prefix is for a VNI that is not associated with any locally
configured VNIs. This saves resources by not storing routes to remote VNIs that are not configured locally in the bgp.evpn.0
routing table.
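The tables involved in this recursive validation can be inspected with standard operational commands (sample output is not shown because it varies by platform and release):

    show route table :vxlan.inet.0
    show route table bgp.evpn.0
    show ethernet-switching vxlan-tunnel-end-point remote

The first command lists the local interface routes and the automatically generated static routes toward remote VTEP loopbacks, the second lists the received EVPN routes, and the third lists the remote VTEPs that have been discovered.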


EVPN Type 1: Ethernet Auto-Discovery (1 of 3)


• Each leaf node attached to a multihomed site should advertise an attachment to
an Ethernet segment
Route Type: Ethernet Auto-Discovery Route (Type 1)
Route Distinguisher (RD): RD of EVI on Leaf1 (or update source leaf)
NLRI: Ethernet Segment ID (ESI) = 0x0:1:1:1:1:1:1:1:1:1; Ethernet Tag ID = MAX-ET; MPLS Label = 0
Next hop: 2.2.2.2
Extended Community: ESI Label Extended Community (Single-Active flag)
Other attributes (Origin, AS-Path, Local-Pref, etc.)

Reserved ESIs:
• Single-homed site: 0x00:00:00:00:00:00:00:00:00:00
• MAX-ESI: 0xFF:FF:FF:FF:FF:FF:FF:FF:FF:FF

[Figure: Leaf1 and Leaf2 connect multihomed Site 1 (host1) on ESI 0x0:1:1:1:1:1:1:1:1:1 and each advertise a Type 1 route for the segment to the other leaf nodes; Leaf3 connects single-homed Site 2 (host2) with ESI 0x0.]

Type 1 - Ethernet Autodiscovery Route

The Type 1 route, or Ethernet Autodiscovery route, is used to propagate ESI information to remote VTEPs when a
non-default ESI is configured for a site. This is used when multihomed sites are attached to the EVPN VXLAN.

Once you have configured a non-reserved ESI value on a site-facing interface, the PE will advertise an Ethernet Autodiscovery
route to all remote VTEPs. The route carries the ESI value as well as the ESI Label Extended Community. The community
contains the Single-Active flag. This flag lets the remote VTEPs know whether or not they can load share traffic over the
multiple links attached to the site. If the Single-Active flag is set to 1, that means only one link associated with the Ethernet
segment can be used for forwarding. If the Single-Active flag is set to 0, that means that all links associated with the Ethernet
segment can be used for forwarding data (we call this active/active forwarding). Juniper Networks devices only support
active/active forwarding (we always set the flag to 0).


EVPN Type 1: Ethernet Auto-Discovery (2 of 3)


• Leaf 3 can reach the remote ESI and its associated MACs using 2 different
VXLAN tunnels

host1's MAC might have only been advertised from Leaf1, but Leaf3 assumes the MAC is reachable by any leaf attached to
the Ethernet segment

[Figure: Leaf3's MAC-VRF table lists two next hops for host1's MAC (vtep.32678 and vtep.32679), the VXLAN tunnels toward Leaf1 and Leaf2, because both leaves are attached to ESI 0x0:1:1:1:1:1:1:1:1:1; Site 2 (host2) is attached to Leaf3 with ESI 0x0.]

Remote VTEP Behavior

When a remote VTEP, Leaf3 in the example, receives the Ethernet Autodiscovery routes from Leaf1 and Leaf2, it knows
that it can use either of the two VXLAN tunnels to forward data to MACs learned from Site 1. Based on the forwarding choice
made by host1, it may be that Leaf1 was the only VTEP attached to Site 1 that learned host1's MAC address, which means
that Leaf3 may have only received a MAC Advertisement for host1's MAC from Leaf1. However, because Leaf1 and Leaf2 are
attached to the same Ethernet segment (as advertised in their Type 1 routes), Leaf3 knows it can get to host1's MAC
through either Leaf1 or Leaf2. You can see in Leaf3's MAC table that both VXLAN tunnels have been installed as next hops
for host1's MAC address.


EVPN Type 1: Ethernet Auto-Discovery (3 of 3)


■ Fast convergence
• If link A fails, Leaf1 can withdraw the Ethernet Autodiscovery route for the ESI, which notifies Leaf3 to
update its next hops

[Figure: Leaf1 withdraws the Ethernet Auto-Discovery route for ESI 0x0:1:1:1:1:1:1:1:1:1 after link A fails; Leaf3 removes vtep.32678 from its MAC-VRF table and keeps vtep.32679 (the tunnel toward Leaf2) as the next hop for host1's MAC.]

Network Convergence

Another benefit of the Ethernet Autodiscovery route is that it helps to enable faster convergence times when a link fails.
Normally, when a site-facing link fails, a VTEP will withdraw each of its individual MAC Advertisements. Consider the case
where there are thousands of MACs associated with that link. The VTEP would have to send thousands of withdrawals. When
the Ethernet Autodiscovery route is being advertised (because the esi statement is configured on the interface), a VTEP
(like Leaf1 on the slide) can send a single withdrawal of its Ethernet Autodiscovery route, and Leaf3 can immediately update
the MAC table for all of the thousands of MACs it had learned from Leaf1. This allows convergence times to greatly improve.


EVPN Type 2: MAC/IP Advertisement


Route Type: MAC/IP Advertisement Route (Type 2)
Route Distinguisher (RD): RD of EVI on Leaf1
NLRI: Ethernet Segment ID (ESI) = 0x00:00:00:00:00:00:00:00:00:00 (single-homed site); Ethernet Tag ID = VXLAN VNID;
MAC Address = 01:01:01:01:01:01; IP Address = 10.10.10.11 (optional); MPLS Label = VXLAN VNID
Next hop: 2.2.2.2
Extended Community: Route target for EVI on Leaf1
Other attributes (Origin, AS-Path, Local-Pref, etc.)

EVPN Type 2 Route Advertisement

[Figure: host1 (10.10.10.11/24, MAC 01:01:01:01:01:01) at Site 1 attaches to Leaf1 (2.2.2.2/32); host2 (10.10.10.22/24, MAC 05:05:05:05:05:01) at Site 2 attaches to Leaf2 (4.4.4.4/32); Leaf1 advertises the Type 2 route to Leaf2 over the MP-IBGP (EVPN) session through the spines.]

EVPN Type 2 Route - MAC/IP Advertisement Route

The Type 2 route has a very simple purpose, which is to advertise MAC addresses. Optionally, this route can be used to
advertise a MAC address and an IP address that is bound to that MAC address. Leaf1, an EVPN VTEP, will learn MAC
addresses in the data plane from Ethernet frames received from attached hosts, host1 in the example. Once Leaf1 learns
host1's MAC address, it will automatically advertise it to remote PEs and attach a route target community. Leaf2, another
EVPN VTEP, upon receiving the route must decide whether it should keep the route. It makes this decision based on the
received route target community. In order to accept and use this advertisement, Leaf2 must be configured with an import
policy that accepts routes tagged with the route target community associated with the EVI. Without a configured policy that
matches on the route target, Leaf2 would discard the advertisement. So, at a minimum, each EVI on each participating VTEP
for a given EVPN must be configured with an export policy that attaches a unique target community to MAC advertisements.
It also must be configured with an import policy that matches and accepts advertisements based on that unique target
community.


EVPN Type 3:
Inclusive Multicast Ethernet Tag Route
Route Type: Inclusive Multicast Ethernet Tag Route (Type 3)
Route Distinguisher (RD): RD of EVI on Leaf1
NLRI: Ethernet Tag ID = VXLAN VNID; Originator IP Address = 2.2.2.2
Provider Multicast Service Interface (PMSI) Tunnel attribute: Flags = 0 (no leaf info required); Tunnel Type = Ingress
Replication or PIM; MPLS Label = N/A; Tunnel ID = multicast group or sender IP (2.2.2.2)
Extended Community: Route target for EVI on Leaf1
Other attributes (Origin, AS-Path, Local-Pref, etc.)

EVPN Type 3 Route Advertisement

[Figure: Leaf1 (2.2.2.2/32) advertises the Type 3 route to Leaf2 (4.4.4.4/32) over the MP-IBGP (EVPN) session through the spines; host1 (MAC 01:01:01:01:01:01) sits behind Leaf1 at Site 1 and host2 (MAC 05:05:05:05:05:01) behind Leaf2 at Site 2.]

Type 3 - Inclusive Multicast Ethernet Tag Route

This EVPN route is very simple. The route informs remote VTEPs of how BUM traffic should be handled. This information is
carried in the Provider Multicast Service Interface (PMSI) Tunnel attribute. It specifies whether PIM or ingress replication will
be used and the addressing that should be used to send the BUM traffic. In the diagram, Leaf1 advertises that it is expecting
and using ingress replication and that Leaf2 should use 2.2.2.2 as the destination address of the VXLAN packets that are
carrying BUM traffic.


EVPN Split Horizon (1 of 2)


■ EVPN has some default split horizon rules
• If a leaf receives a BUM packet from a local host (e.g. host3 attached to Leaf3)
• Flood to local hosts in same VLAN
• Flood to remote leaf nodes in same VLAN
• No flooding to original host device
[Figure: host3 (the source) sends BUM traffic to Leaf3; Leaf3 floods it to its local host in the same VLAN (host4) and to the remote leaf nodes over the VXLAN tunnels, but not back to host3.]

Split Horizon Rules, Part 1


EVPN has default split horizon rules to avoid unnecessary traffic replication and to prevent forwarding loops. If a VTEP
receives a BUM packet from a local host (e.g., host3 attached to Leaf3):

• Flood the packet to local hosts in the same VLAN;

• Flood the packet to remote VTEPs in the same VLAN;

• Do not flood the packet to the original host device.


EVPN Split Horizon (2 of 2)


■ EVPN has some default split horizon rules (contd.)
• If a leaf receives a BUM packet from a remote leaf (e.g., Leaf3 to Leaf1)
• Flood to local hosts in same VLAN
• No flooding to remote leaf devices

[Figure: Leaf3 receives BUM traffic from its local source host (host3) and floods it to the remote leaf nodes; Leaf1 floods the copy it receives only to its local hosts in the same VLAN and does not forward it to other remote leaf devices.]

Split Horizon Rules, Part 2


Split horizon rules are also in place for traffic received from remote PE devices (leaf devices). If a leaf/VTEP/PE receives a BUM
packet from a remote leaf/VTEP/PE (e.g., Leaf3):

• Flood the packet to local hosts in the same VLAN;

• Do not flood the packet to remote leaf/VTEP/PE devices.


Active/Active Ethernet Segment Problems


■ EVPN's default split horizon behavior does not stop a multihomed host from receiving multiple copies of BUM traffic in an Active/Active scenario
• PEs attached to a multihomed host must select a designated forwarder

[Figure: two Active/Active scenarios. Left: Leaf1 (source leaf) replicates BUM traffic to Leaf2 and Leaf3, so the multihomed host2 receives duplicate copies. Right: Leaf3 (source leaf) replicates BUM traffic to the remote VTEPs, including Leaf2, which forwards it back toward the originating segment and creates a loop.]

Active/Active Breaks Split Horizon


The Type 1 Ethernet Autodiscovery route can enable multipath forwarding when a site is multihomed to two or more VTEPs. That advertisement works well for known unicast traffic. However, the example shows what happens when Leaf1 must send BUM traffic to remote VTEPs.

In the diagram on the left, Leaf1 makes copies of the BUM packets and unicasts them to each remote PE that belongs to the same EVPN. This will cause host2 to receive multiple copies of the same packets.

In the diagram on the right, Leaf3 receives BUM traffic from the attached host. It makes copies and unicasts them to the remote VTEPs, which includes Leaf2. Because of the default split horizon rules, Leaf2 forwards BUM traffic back to the source, which creates a loop.

Electing a designated forwarder for an ESI will solve these problems.


Designated Forwarder
■ A designated forwarder is a device that is elected to forward traffic toward a
multihomed site
• Once a designated forwarder is elected, BUM traffic will not be sent toward the non-
designated forwarder
• Ethernet segment route (Type 4) is used to elect the designated forwarder

[Figure: Leaf2 is elected designated forwarder (DF) for the multihomed Ethernet segment. Only Leaf2 forwards BUM traffic received over the VXLAN tunnels toward the multihomed host; the non-DF leaf drops it.]

Designated Forwarder
To fix the problems described in the previous example, all of the VTEPs attached to the same Ethernet segment (two or more VTEPs advertising the same ESI) will elect a designated forwarder for that segment. A designated forwarder is a device that is elected to forward traffic to an Ethernet segment, and one is elected for each broadcast domain. Remember that an EVI can contain one or more broadcast domains or VLANs. The Ethernet Segment Route (Type 4) is used to help with the election of the designated forwarder.


EVPN Type 4: Ethernet Segment


Route Type: Ethernet Segment Route (Type 4)
Route Distinguisher (RD): RD of EVI on Leaf2
NLRI:
    Ethernet Segment ID: 0x0:1:1:1:1:1:1:1:1:1
    Originator IP Address: 4.4.4.4
Extended Community: ES-Import Route-target for EVI on Leaf2
Other attributes (Origin, AS-Path, Local-Pref, etc.)

[Figure: Leaf2 (4.4.4.4/32) and Leaf3 (3.3.3.3/32) share the multihomed Ethernet segment toward the source host (host2) and advertise Type 4 routes over the overlay to the other leaf devices, including Leaf1.]

All leaf devices use the same round-robin algorithm to assign a DF per VLAN for the ESI:
    Leaf2 - VLAN 200
    Leaf3 - VLAN 201
    Leaf2 - VLAN 202
    Leaf3 - VLAN 203

Designated Forwarder Election


Once an ESI is configured on an interface, the VTEP will advertise the Ethernet Autodiscovery Route (Type 1) and also an Ethernet Segment Route (Type 4). The Type 4 route solves two problems: it helps in the designated forwarder election process and it helps add a new split horizon rule.

In the example, Leaf2 and Leaf3 will advertise a Type 4 route to every VTEP that belongs to the EVPN. However, the route is not tagged with a standard route target community. Instead, it is tagged with an ES-import target community. The ES-import target community is automatically generated by the advertising VTEP and is based on the ESI value. Since Leaf1 does not have an import policy that matches on the ES-import target, it will drop the Type 4 routes. However, since Leaf2 and Leaf3 are configured with the same ESI, the routes are accepted by a hidden policy that matches on the ES-import target community that is only known by the VTEPs attached to the same Ethernet segment. Leaf2 and Leaf3 use the Originator IP address in the Type 4 route to build a table that associates an Originator IP address (i.e., the elected designated forwarder) with a VLAN in a round-robin fashion. After the election, if a non-designated forwarder for a VLAN receives BUM traffic from a remote VTEP, it will drop those packets.


Distributed Layer 3 Gateways

• Layer 3 gateways advertise their virtual MACs using the MAC/IP advertisement route
• Remote leaf devices load-share to the virtual default gateway

[Figure: Spine1 and Spine2 each enable the VXLAN Layer 3 gateway function in inet.0 and participate in the default switch (VXLAN L2 gateway). Both configure irb.0 with the same virtual IP (10.10.10.254/24) and virtual MAC (00:00:5e:00:01:01), and maintain VXLAN tunnels to the remote leaf devices. host1 (10.10.10.11/24) attaches to Leaf1 and host2 (10.10.10.22/24) to Leaf2, and the leaf devices load-share traffic toward the distributed virtual gateway.]

Distributed Default Gateways


It is possible to have multiple default gateways that share the same IP address for a subnet. Shown is an example
configuration on an MX Series router.

[edit interfaces irb]
lab@spine1# show
unit 0 {
    family inet {
        address 10.1.1.10/24 {                    <<<<< this address must be unique on each GW
            virtual-gateway-address 10.1.1.254;   << this should be the same on each GW
        }
    }
}

If both Spine1 and Spine2 are configured in this manner, and use the same virtual gateway address, both devices will share the same virtual IP address and the same virtual MAC address of 00:00:5e:00:01:01. The spine nodes will each advertise that MAC address to the other VTEPs. The remote VTEPs will be able to load-share traffic over the multiple paths to the same virtual MAC address. In the event that one of the gateway devices fails, the remaining gateway continues to service traffic that is forwarded to the virtual IP and MAC addresses.


Summary

■ In this content, we:


• Described EVPN functionality
• Described EVPN control in a VXLAN deployment


We Discussed:
• Described EVPN functionality;

• Described EVPN control in a VXLAN deployment.


Review Questions

1. What are some of the benefits of EVPN controlled VXLAN compared to a multicast signaled VXLAN?
2. What type of community tag is used with an Ethernet segment route?
3. In what scenario is a designated forwarder used?


Review Questions
1.

2.

3.


Answers to Review Questions


1.
EVPN allows CE devices to multihome to more than one leaf node, such that all interfaces are actively forwarding data.
EVPN signaling minimizes unknown unicast flooding because PE routers advertise locally learned MACs to all remote PEs.

2.
An Ethernet Segment route is tagged with an ES-Import Route Target Community.

3.
A designated forwarder is used in a multihomed site environment.


Chapter 6: Configuring VXLAN


Objectives

• After successfully completing this content, you will be able to:


• Configure EVPN controlled VXLAN


We Will Discuss:
• Configuring EVPN controlled VXLAN.


Agenda: Configuring VXLAN

➔ Configure EVPN controlled VXLAN


Configuring EVPN Controlled VXLAN


The slide lists the topic we will discuss. We will discuss the highlighted topic first.


EVPN VXLAN Example Topology (1 of 3)


■ Underlay topology
• Underlay is an IP Fabric based on EBGP routing
• Goal: Ensure all loopbacks are reachable and when equivalent paths exist, traffic should be load shared

Loopback Addresses:
    spine1: 192.168.100.1
    spine2: 192.168.100.2
    leaf1: 192.168.100.11
    leaf2: 192.168.100.12
    leaf3: 192.168.100.13

Fabric Link Addresses: 172.16.1.x/30

Host Addresses:
    host1 vlan 10: 10.1.1.1/24
    host2 vlan 10: 10.1.1.2/24

[Figure: spine1 and spine2 (route reflectors) connect over the fabric links to leaf1 (AS 65003), leaf2 (AS 65004), and leaf3 (AS 65005). host1 attaches to leaf1 (xe-0/0/0) and host2 attaches to leaf3 (host interface ens4); EBGP sessions run on each fabric link.]

Example Topology
This slide shows the example topology that will be used for the EVPN-VXLAN example.

In the example topology for this configuration section, traffic from host1 and host2 will be tunneled across an IP fabric network. The IP fabric network consists of five Layer 3 capable switches, which act as routers.

Each switch in the IP fabric is assigned a unique, private autonomous system ID (AS ID). EBGP sessions will be established between each switch device. An alternative to EBGP peering between each switch would be to configure an IGP, such as OSPF or IS-IS, between the switch devices. This BGP peering configuration provides the connectivity of the underlay network, which provides reachability information among all underlay devices.

The leaf nodes are VTEP devices, or Layer 2 gateways, and are QFX Series devices. The spine nodes are configured as VXLAN Layer 3 gateways.

The goal is to ensure that host1 and host2 can communicate with each other.

EVPN VXLAN Example Topology (2 of 3)


■ Overlay topology
• All TOR switches establish IBGP sessions to spine1 and spine2 (RRs)
• EVPN signaling (no unicast route advertisement - only EVPN routes)

Loopback Addresses:
    spine1: 192.168.100.1
    spine2: 192.168.100.2
    leaf1: 192.168.100.11
    leaf2: 192.168.100.12
    leaf3: 192.168.100.13

Overlay AS: 65100

Fabric Link Addresses: 172.16.1.x/30

Host Addresses:
    host1 vlan 10: 10.1.1.1/24
    host2 vlan 10: 10.1.1.2/24

[Figure: leaf1, leaf2 (AS 65004), and leaf3 (AS 65005) each establish an IBGP session to the route reflectors spine1 and spine2; host1 attaches to leaf1 (xe-0/0/0) and host2 attaches to leaf3 (ens4).]

Logical View
To help you understand the behavior of the example, the diagram shows a logical view of the overlay network. Using the help of VXLAN, it will appear that host1, host2, and the IRBs of the routers in AS 65001 and AS 65002 are in the same broadcast domain and IP subnet. Also, the IRBs of the routers in AS 65001 and AS 65002 will share the same virtual gateway address and virtual gateway MAC address to represent a distributed gateway.


EVPN VXLAN Example Topology (3 of 3)


■ VXLAN Tunnels
• All switches act as VXLAN Layer 2 Gateways (VTEPs)
• TORs are VTEP endpoints due to locally configured Ethernet segments in the VXLAN
• Spines are VTEP endpoints due to locally configured IRB interfaces in the VXLAN
• VXLAN tunnels established automatically to devices that advertise EVPN route destinations

Loopback Addresses:
    spine1: 192.168.100.1
    spine2: 192.168.100.2
    leaf1: 192.168.100.11
    leaf2: 192.168.100.12
    leaf3: 192.168.100.13

Overlay AS: 65100

Fabric Link Addresses: 172.16.1.x/30

Host Addresses:
    host1 vlan 10: 10.1.1.1/24
    host2 vlan 10: 10.1.1.2/24

[Figure: VXLAN tunnels form among spine1 (AS 65001), spine2 (AS 65002), leaf1 (AS 65003), and leaf3 (AS 65005); host1 attaches to leaf1 (xe-0/0/0) and host2 attaches to leaf3 (ens4).]

VXLAN Tunnels
You must ensure that all VTEP addresses are reachable by all of the routers in the IP fabric. Generally, the loopback interface will be used on Juniper Networks routers as the VTEP interface. Therefore, you must make sure that the loopback addresses of the routers are reachable. The loopback interface for each router in the IP fabric was configured in the 192.168.100.0/24 range.

The diagram shows the tunnel overlay between the devices. These tunnels will be automatically generated by the EVPN control plane. Each leaf device will be a VTEP tunnel endpoint when it advertises reachability to a local EVPN-VXLAN VNI. In the diagram, you can see that leaf2 is not a VTEP in this example. This is because there are no locally connected hosts, and therefore leaf2 does not advertise connectivity to any VNIs. However, leaf2 will have a similar configuration as leaf1 and leaf3. If a host is connected to, or migrated to, an interface connected to leaf2, the newly activated VNI will be advertised to remote BGP peers, and VTEP tunnels to all other VTEPs will be automatically generated.


BGP Configuration (1 of 3)

■ Common configuration (all routers)

{master:0}[edit]
lab@spine1# show routing-options
router-id 192.168.100.1;
autonomous-system 65000;            << Autonomous system should be set to the overlay topology AS
                                       (useful for automatic route target generation discussed later)
forwarding-table {
    export load-balance;            << Applies load balance policy to forwarding table
    chained-composite-next-hop {
        ingress {
            evpn;                   << Allows groups of routes to share a common next hop, rather
        }                              than an individual next hop for each route
    }
}

{master:0}[edit]
lab@leaf3# show policy-options
policy-statement export-directs {
    term loopback {
        from {
            protocol direct;        << Advertise the local loopback address so that the overlay
            interface lo0.0;           will have reachability to loopback addresses
        }
        then accept;
    }
}
policy-statement load-balance {
    term load-balance {
        then {
            load-balance per-packet;    << Load balancing policy to install multiple next hops in
            accept;                        the forwarding table - applied to the forwarding table
        }
    }
}

Common BGP Configuration


Some configuration parameters will be common among all devices. For the overlay network, the autonomous system number is configured under the [edit routing-options] hierarchy. Since the loopback addresses and directly connected networks on all devices should be advertised, a policy that matches the loopback interface and direct protocol is configured. A common policy that allows load balancing is also created on all devices and applied to the forwarding table.

The chained-composite-next-hop statement allows the device to create an indirect next hop in the routing table, and associate many routes with the same next hop to a single indirect next hop locally. This can provide significant benefits when a remote leaf fails or withdraws reachability to a large number of remotely connected destinations, or when a fabric link or node fails and the physical next hop to remote destinations must be changed for a large number of destinations. On the local device, only the indirect next hop must be changed or re-mapped to the new physical next hop to adjust the forwarding table for all prefixes that are mapped to that next hop, and each individual next hop does not need to be updated one at a time.


BGP Configuration (2 of 3)

■ Route Reflector/Spine Nodes

spine1 Configuration:

{master:0}[edit protocols bgp]
lab@spine1# show
group fabric {
    type external;
    export export-directs;
    local-as 65001;                 << Underlay AS for this device
    multipath {
        multiple-as;
    }
    neighbor 172.16.1.6 {           << The fabric neighbors exchange IPv4 BGP routes
        peer-as 65003;                 (underlay reachability)
    }
    neighbor 172.16.1.10 {
        peer-as 65004;
    }
    neighbor 172.16.1.14 {
        peer-as 65005;
    }
}
group overlay {
    type internal;
    local-address 192.168.100.1;    << Source address of the BGP peering session and protocol
    family evpn {                      next hop of routes that originate on this device
        signaling;
    }
    cluster 1.1.1.1;
    local-as 65000;                 << Overrides AS configured in [edit routing-options
    multipath;                         autonomous-system] if desired
    neighbor 192.168.100.2;         << The overlay neighbors exchange MP-BGP EVPN routes
    neighbor 192.168.100.11;           (overlay reachability)
    neighbor 192.168.100.12;
    neighbor 192.168.100.13;
}

spine2 Configuration: identical except local-as 65002 and fabric neighbors 172.16.1.18, 172.16.1.22, and 172.16.1.26, and, in group overlay, local-address 192.168.100.2, cluster 2.2.2.2, and neighbor 192.168.100.1 in place of 192.168.100.2.

Spine Node Configuration


With the example shown, the spine devices are configured as route reflectors to forward BGP routes between leaf devices.

There are two peer groups configured. One peer group is for the fabric underlay. The other peer group is for the overlay network, which is used to advertise the EVPN routes.

In the fabric group, the local-as is configured as the unique AS number that is used for the EBGP peering sessions. Neighbors are configured that use the connected interface IP address, and the peer-as of each neighbor is configured. The export policy export-directs is applied to advertise the loopback address of the local node to all fabric peers. This will allow reachability for the overlay network peering sessions. The multipath multiple-as parameter is configured to allow the BGP route selection process to include all equal cost forwarding paths in the forwarding table.

The overlay network is configured as an IBGP peering group, or type internal. An internal peering session requires that the peers belong to the same autonomous system. This can be performed in one of two ways: the local-as can be defined within the group or neighbor statements, or the autonomous system ID defined under [edit routing-options autonomous-system] can be used. If the local-as parameter is not defined in the peering group, the global AS number is inherited by the BGP protocol.

Because all IBGP peers belong to the same autonomous system, the multipath statement is adequate to allow ECMP among the IBGP peers, and the multiple-as parameter is not needed.

The cluster parameter identifies a group of peers to which routes will be distributed, or reflected. This configuration statement is what identifies the device as a route reflector and changes the default BGP route advertising parameters for internally learned routes. An in-depth explanation of route reflectors is not covered in this course. Just keep in mind that the route reflector will "reflect", or advertise, a route received from any member of the cluster to all other members of the cluster.

The family evpn signaling statement is configured on the overlay. This indicates that the BGP session will only be used to advertise EVPN routes. No standard unicast routes will be advertised between overlay peers.


BGP Configuration (3 of 3)

■ Leaf Nodes

Leaf1 Configuration:

{master:0}[edit protocols bgp]
lab@leaf1# show
group overlay {
    type internal;                  << Overlay AS inherited from [edit routing-options
    local-address 192.168.100.11;      autonomous-system 65000] when the group is type internal
    family evpn {
        signaling;
    }
    neighbor 192.168.100.1;         << Leaf nodes only need to peer with the route reflectors
    neighbor 192.168.100.2;            (overlay reachability)
}
group fabric {
    type external;
    export export-directs;
    local-as 65003;                 << Underlay AS
    multipath {
        multiple-as;
    }
    neighbor 172.16.1.5 {           << The fabric neighbors exchange IPv4 BGP routes
        peer-as 65001;                 (underlay reachability)
    }
    neighbor 172.16.1.17 {
        peer-as 65002;
    }
}

Note: leaf2 and leaf3 will have similar configurations

Leaf Node Configuration


The example shows the configuration necessary to enable VXLAN Layer 2 gateway functionality on a QFX Series device acting as a leaf. Each leaf device peers to directly connected neighbors in the underlay network. Each leaf device also peers with the routers from which it will be receiving EVPN routes. In this example, the route reflectors, spine1 and spine2, are the only peers necessary, since they will relay routing information to all IBGP peered devices. Once again, the overlay is only going to advertise and receive EVPN routes, so family evpn signaling is configured for the overlay peers.

Verify Underlay Network Routing


■ VTEP addresses (loO) must be reachable by all routers and VTEPs in the data
center
{master:0}
lab@leaf1> show route 192.168.100/24

inet.0: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.100.1/32   *[BGP/170] 00:18:48, localpref 100                      << spine1
                      AS path: 65001 I, validation-state: unverified
                    > to 172.16.1.5 via xe-0/0/1.0
192.168.100.2/32   *[BGP/170] 00:25:22, localpref 100                      << spine2
                      AS path: 65002 I, validation-state: unverified
                    > to 172.16.1.17 via xe-0/0/2.0
192.168.100.11/32  *[Direct/0] 05:15:11                                    << leaf1
                    > via lo0.0
192.168.100.12/32  *[BGP/170] 00:01:04, localpref 100                      << leaf2
                      AS path: 65001 65004 I, validation-state: unverified
                    > to 172.16.1.5 via xe-0/0/1.0
                      to 172.16.1.17 via xe-0/0/2.0
                    [BGP/170] 00:01:04, localpref 100
                      AS path: 65002 65004 I, validation-state: unverified
                    > to 172.16.1.17 via xe-0/0/2.0
192.168.100.13/32  *[BGP/170] 00:01:04, localpref 100                      << leaf3
                      AS path: 65001 65005 I, validation-state: unverified
                    > to 172.16.1.5 via xe-0/0/1.0
                      to 172.16.1.17 via xe-0/0/2.0
                    [BGP/170] 00:01:04, localpref 100
                      AS path: 65002 65005 I, validation-state: unverified
                    > to 172.16.1.17 via xe-0/0/2.0

Underlay Network Routing


The key role of the underlay network is to provide routing paths between all VTEPs in the network. The VTEP addresses are normally the loopback interfaces of each device, so make sure that the loopback addresses of all devices are active in the routing table. If these addresses are not present, the BGP sessions for the overlay cannot be established, since there would not be a route to the IBGP peers.

Verify EVPN Signaling


■ Verify MP-BGP EVPN sessions are Established

{master : O}
lab@leafl> show bgp summary
Threading mode : BGP I/0
Groups : 2 Peers : 4 Down peers : 0
Tabl e Tot Paths Act Paths Suppressed History Damp State Pending
bgp . evpn . 0
6 3 0 0 0 0
inet . O
6 6 0 0 0 0
Peer AS InPkt Out Pkt OutQ Flaps Last Up/Dwn Statel#Active/Received/Accepted/Damped ...
172 . 16 . 1 . 5 65001 72 69 0 0 30 : 02 Establ
inet . O: 3/3/3/0
172 . 16 . 1 . 17 65002 86 86 0 0 36 : 37 Establ
inet . O: 3/3/3/0
192 . 168 . 100 . 1 65000 47 43 0 0 16 : 11 Establ
_ default_ evpn_ . evpn . O: 0/0/0/0
bgp . evpn . 0 : 3/3/3/0
default - switch . evpn . O: 3/3/3/0
192 . 168 . 100 . 2 65000 45 43 0 0 16 : 07 Esta bl
_ default_evpn_ . evpn . O: 0/0/0/0
bgp . evpn . O: 0/3/3/0
default-switch . evpn . 0 : 0/3/3/0
(EVPN routes have been received from the RRs; the MP-BGP sessions to the RRs are Established)

Verify EVPN Signaling


The show bgp summary command displays the BGP neighbor status. The underlay and overlay BGP peering sessions are displayed. The key difference between the two types of sessions is the types of routes that are advertised between peers. The fabric peering sessions are exchanging IPv4, or family inet, routes. The overlay peering sessions are only exchanging EVPN routes, as indicated by the bgp.evpn.0 routing table information related to those peering sessions.

QFX - VXLAN Layer 2 Gateway


■ Configure the server facing interface, VLAN , and VXLAN encapsulation
{master:0}[edit]
lab@leaf1# show vlans
default {
    vlan-id 1;
}
v10 {
    vlan-id 10;          << Specify the same VLAN tagging (or no tagging) that is being used by
    vxlan {                 the attached server
        vni 5010;        << VNI associated with the local VLAN
    }
}

{master:0}[edit]
lab@leaf1# show interfaces xe-0/0/0
unit 0 {
    family ethernet-switching {
        vlan {
            members v10;    << Server (edge device) VLAN tag information
        }
    }
}

Layer 2 Gateway Configuration


The VLAN to VXLAN VNI mapping is configured under the [edit vlans] hierarchy. Once a VLAN-to-VNI mapping is performed, and a local interface is configured with the corresponding VLAN, the device creates a local VTEP interface in the local :vxlan.inet.0 routing table. The local VNI is then advertised to the remote peers, and the local device is advertised as a Layer 2 gateway to reach the VNI.

QFX - Apply VRF Import/Export Policy


■ Minimum configuration necessary to advertise and receive EVPN routes

{master:0}[edit]
lab@leaf1# show protocols evpn
vni-options {
    vni 5010 {                              << List each VNI you wish to have participate in EVPN signaling
        vrf-target target:65000:5010;
    }
}
encapsulation vxlan;                        << Default encapsulation is MPLS
extended-vni-list 5010;

{master:0}[edit]
lab@leaf1# show switch-options
vtep-source-interface lo0.0;                << Used as the source address of VXLAN encapsulated data
route-distinguisher 192.168.100.11:1;
vrf-target target:65000:1;

Notes on the vrf-target statement:
1. It always specifies the route target attached to locally generated Type 1 routes.
2. If no other vrf-target or vrf-export statement is specified, this statement tags all locally
   generated EVPN routes with target:65000:1 (except Type 4s).

Note: More than one target community can be added to a route

VRF Import/Export Policies


To advertise an EVPN route, a device must have all of the required parameters for that route. The parameters that must be included in an EVPN route advertisement include both EVPN and VXLAN information. The EVPN information that is required includes the encapsulation type, the target communities associated with locally configured VNIs, and to which VNIs the device is connected. The [edit protocols evpn] hierarchy defines the BGP target communities associated with each VNI, the encapsulation type, and the list of VNIs that are supported locally.

The [edit switch-options] hierarchy identifies the source address of the VTEP tunnel, the route distinguisher that must be added to advertised route prefixes, the target community that will be added to routes leaving the local switching table, and the target community that will identify which routes received from remote BGP peers are to be imported into the local VXLAN switching table.

The vrf-export statement adds the associated vrf-target community value to the BGP updates that will be sent. The vrf-import policy evaluates received BGP routes, and if the vrf-target community in the vrf-import policy matches a community value in the route, the information from the route is imported into the local EVPN route table, and subsequently the VXLAN switching table.

The vrf-target statement under [edit switch-options] automatically creates hidden vrf-import and vrf-export policies for the specified community value. If a vrf-target statement is included for a VNI in the vni-options hierarchy, the vni-options vrf-target statement overrides the switch-options vrf-target value for Type 2 and Type 3 EVPN routes. The switch-options vrf-target statement is still applied automatically to Type 1 EVPN routes.
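To visualize what those hidden policies do, the following is a minimal sketch of a conceptually equivalent explicit import policy. The policy and community names (EVPN-IMPORT, vni5010-target) are hypothetical; the device builds the real policies automatically from the vrf-target statements.

[edit policy-options]
community vni5010-target members target:65000:5010;    << same value as the vni-options vrf-target
policy-statement EVPN-IMPORT {
    term vni5010 {
        from community vni5010-target;                 << accept EVPN routes carrying the VNI 5010 target
        then accept;
    }
    term reject-other {
        then reject;                                   << routes without a matching target are not imported
    }
}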


Default Behavior of vrf-target Statement


■ Configuring only the vrf-target statement causes VTEPs to advertise and accept
all EVPN routes with that target
• Leaf2 receives MAC advertisements for VNI 5020, which it doesn't need (imagine 1000s of MACs)
• Leaf2 installs the unneeded routes in its RIB-IN and VRF table, wasting valuable memory

[Figure: Leaf3 hosts CEs in both VLAN 10 (VNI 5010) and VLAN 20 (VNI 5020) and sends MAC advertisement routes for both VNIs; Leaf1 also participates in VNI 5010. Leaf2, which only hosts a CE in VLAN 10 (VNI 5010), still receives and accepts the VNI 5020 MAC advertisement routes along with the VNI 5010 routes.]

VRF-Target Default Behavior


When only the vrf-target statement is configured, all VNIs advertised by the VTEP are tagged with the same vrf-target community. Remote VTEPs that are configured with the same vrf-target community accept all received routes, even if a local VNI for the advertised routes is not present. In the example, if vrf-target target:65000:1 is configured on all leaf devices, and that is the only vrf-target statement configured, then target:65000:1 will be applied to all advertised routes, and all received routes with that target will be accepted, regardless of which VNIs are active on the local device. When leaf2 receives routes for VNI 5020, the routes will be accepted and stored in the local routing table, even though VNI 5020 is not active on leaf2. Best practice is to assign a unique vrf-target community to each VNI within the domain, even if auto-generated values are used.


Take Control of EVPN Routes (1 of 2)


■ Set target communities on a per-VNI basis
• Use a vrf-target statement to automatically create both import and export policies for each
VNI
{master:0}[edit]
lab@leaf1# show vlans
default {
    vlan-id 1;
}
v10 {
    vlan-id 10;
    vxlan {
        vni 5010;
    }
}
v20 {
    vlan-id 20;
    vxlan {
        vni 5020;
    }
}

{master:0}[edit]
lab@leaf1# show protocols evpn
vni-options {
    vni 5010 {
        vrf-target target:65000:5010;
    }
    vni 5020 {
        vrf-target target:65000:5020;
    }
}
encapsulation vxlan;
extended-vni-list [ 5010 5020 ];

Per-VNI Target Communities


The vrf-target community can be configured for each VNI. This ensures that only received routes that are associated with locally active VNIs are accepted and stored in the local routing table. Because VNI to vrf-target mappings should be consistent across the network, it is a simple matter of copying the same configuration to all devices as a standard, common configuration parameter. The VNI to vrf-target mapping is configured under the [edit protocols evpn vni-options] hierarchy.


Take Control of EVPN Routes (2 of 2)


■ Automatic generation of target communities and policies using auto statement

{master:0}[edit vlans]
user@leaf1# show
v100 {
    vlan-id 10;
    vxlan {
        vni 5010;
    }
}
v200 {
    vlan-id 20;
    vxlan {
        vni 5020;
    }
}

{master:0}[edit protocols evpn]
user@leaf1# show
encapsulation vxlan;
extended-vni-list all;                << Not required, but it does ease configuration tasks

lab@leaf1# show switch-options
vtep-source-interface lo0.0;
route-distinguisher 192.168.100.11:1;
vrf-target {
    target:65000:1;
    auto;
}

<< Auto-generated vrf-import and vrf-export policies are created and applied that match the
   auto-generated per-VNI vrf-target communities and the vrf-target value configured under
   switch-options

Note: EVPN route targets and policies can be configured for multiple tenants, and can be applied on a per-tenant basis

Automatic VRF-Target
Manually configuring all VNI to vrf-target communities in a large network can become cumbersome. The [edit switch-options vrf-target auto] parameter enables the device to automatically generate a vrf-target community for each VNI that becomes active on the device. When this function is used, an automatic vrf-target community is derived for each active VNI on the VTEP. The community value is created using the global AS (overlay AS) and the VLAN ID associated with the VNI. In this manner, the auto-generated vrf-target community is synchronized across all VTEPs that belong to the same BGP overlay network and share the same VLANs/VNIs across the domain.


Original VLAN Tag Options


■ Change the VTEP's default processing behavior of the original VLAN tag
• Encapsulation
• Configure VTEP to NOT remove the original VLAN tag
• Decapsulation
• Configure receiving VTEP to NOT discard frame with an inner VLAN tag

QFX5100 Configuration:

{master:0}[edit vlans]
user@leaf1# show
v100 {
    vlan-id 10;
    vxlan {
        vni 5010;
        encapsulate-inner-vlan;
        decapsulate-accept-inner-vlan;
    }
}

MX Configuration:

[edit bridge-domains]
user@spine1# show
v100 {
    vlan-id 10;
    routing-interface irb.0;
    vxlan {
        vni 5010;
        encapsulate-inner-vlan;
        decapsulate-accept-inner-vlan;
    }
}

VLAN Tag Behavior


By default, when a local VTEP encapsulates a locally received Ethernet frame, it does not include the VLAN tag in the encapsulated frame that is sent to the remote VTEP. Additionally, when a VTEP receives a VXLAN encapsulated frame from a remote VTEP that contains a VLAN tag within the original packet, the local VTEP discards the packet. This behavior can be changed through the configuration, as shown in the example.


Determine BGP Neighbor Status


■ View status of BGP neighbors
{master : O}
lab@leafl> show bgp summary
Threading mode : BGP I/0
Groups : 2 Peers : 4 Down peers : 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
bgp . evpn . 0
6 3 0 0 0 0
i net . 0
6 6 0 0 0 0
Peer               AS      InPkt   OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172 . 16 . 1 . 5 65001 121 119 0 0 52 : 35 Establ
inet . O: 3/3/3/0
172 . 16 . 1 . 17 65002 135 135 0 0 59 : 10 Establ
inet . 0 : 3/3/3/0
192 . 168 . 100 . 1 65000 112 99 0 0 38 : 44 Establ
_ default_ evpn_ . evpn . O: 0/0/0/0
bgp . evpn . 0 : 3/3/3/0
default- switch . evpn . O: 3/3/3/0
192 . 168 . 100 . 2 65000 111 100 0 0 38 : 40 Establ
  _default_evpn_.evpn.0: 0/0/0/0
  bgp.evpn.0: 0/3/3/0                 << EVPN RIB-IN
  default-switch.evpn.0: 0/3/3/0      << EVPN VRF

BGP Neighbor Status


Verify the status of BGP neighbors with the show bgp summary command, as shown on the slide.


View the EVPN RIB-IN


■ View all EVPN routes for all VNIs that have been accepted by import policy
• Configure keep all to override import policy and accept all EVPN routes (good for troubleshooting)
{master : 0}
lab@leafl> show route table bgp . evpn . O

bgp . evpn . O: 3 destinations , 6 routes (3 active , O holddown, O hidden)


+=Active Route , - = Last Active , *=Both

2 : 192 . 168 . 100 . 13 : l : : 5010 :: 52 : 54 : 00 : 2c: 4b : a2/304 MAC/IP


* [ BGP/170] 00 : 13 : 26 , l ocalpref 100, from 192 . 168 . 100 . 1
AS path : I , validation - state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
to 172 . 16 . 1 . 17 via xe - 0/0/2 . 0
[ BGP/170] 00 : 13 : 26 , l ocalpref 100, from 192 . 168 . 100 . 2
AS path : I , va l idation-state : unverified
> to 172 . 16 . 1 . 5 via xe - 0/0/1 . 0
to 172 . 16 . 1 . 17 via xe-0/0/2 . 0
2 : 192 . 168 . 100 . 13 : 1 : : 5010 :: fe : 05 : 86 : 71 : 13 : 03/304 MAC/IP
* [ BGP/170] 00 : 13 : 26 , l ocalpref 100, f r om 192 . 168 . 100 . 1
AS path : I , validation - state : unverified
to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
> to 172 . 16 . 1 . 17 via xe-0/0/2 . 0
[BGP/170] 00 : 13 : 26 , l ocalpref 100, f rom 192 . 168 . 100 . 2
AS path : I , validation-state : unverified
to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
> to 172 . 16 . 1 . 17 via xe - 0/0/2 . 0


EVPN RIB-IN
The EVPN RIB-IN table is the BGP routing table that stores all received BGP routes that are of the family EVPN type. If an EVPN route is received that does not have a vrf-target community that is accepted by a local vrf-import policy, the route is discarded prior to being placed in the bgp.evpn.0 table.
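A minimal sketch of the keep all option mentioned on the slide, applied to the overlay peering group used in this chapter; with it configured, routes rejected by the import policy are retained and can be displayed as hidden routes:

[edit protocols bgp group overlay]
lab@leaf1# set keep all

lab@leaf1> show route table bgp.evpn.0 hidden    << view EVPN routes that were rejected by the import policy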


EVPN Route Identification


■ EVPN Route Format
• Consists of route type and information associated with that route type

2:192.168.100.13:1::5010::52:54:00:2c:4b:a2/304 MAC/IP
    Type = 2, RD = 192.168.100.13:1, VNI = 5010, MAC Address = 52:54:00:2c:4b:a2

2:192.168.100.11:1::5010::52:54:00:5e:88:6a::10.1.1.1/304 MAC/IP
    Type = 2, RD = 192.168.100.11:1, VNI = 5010, MAC Address = 52:54:00:5e:88:6a, IP Address = 10.1.1.1

1:192.168.100.11:1::010101010101010101::0/192 AD/EVI
    Type = 1, RD = 192.168.100.11:1, ESI = 010101010101010101

EVPN Route Format


An EVPN route is formatted according to the parameters and values associated with the route. The slide shows examples of Type 2 and Type 1 EVPN routes.


View the EVPN VRF Table


■ View all EVPN routes
• Contains:
• All locally generated EVPN routes
• All received EVPN routes that have been accepted by policy
{master : O}
l ab@ l eafl > show r ou te t abl e defaul t-s witch . evpn . 0

default - switch . evpn . 0 : 6 dest inat i o n s , 9 r ou tes (6 active , 0 hol ddown , 0 h idden)
+=Act i ve Route , - = Last Active , *=Both

2 : 192 . 168 . 100 . 11 : 1 : : 5010 :: 52 : 5 4 : 00 : 5e : 88 : 6a/30 4 MAC/IP


*[EVPN/170] 02 : 51 : 12
Indi r ec t
2 : 192 . 168 . 100 . 11 : 1 : : 5010 :: fe : 05 : 86 : 71 : cb : 03 /3 0 4 MAC/IP
*[EVPN/170 ] 02 : 33 : 24
Indirect
2 : 192 . 168 . 100 . 13 : 1 : : 5010 : : 52 : 54 : 00 : 2c : 4b : a2/304 MAC/IP
*(BGP/170] 00 : 14 : 05 , localpref 100, from 192 . 168 . 100 . 1
AS path : I , validation- s tate : unverified
> to 172 . 16 . 1 . 5 via xe - 0/0/1 . 0
to 172 . 16 . 1 . 17 via xe - 0/0/2 . 0
[ BGP/170] 00 : 1 4 : 05 , localpref 100, from 192 . 168 . 100 . 2
AS path : I , validation-state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
to 172 . 16 . 1 . 17 via xe-0/0/2 . 0


EVPN Switch Table


The EVPN routes that are accepted into the bgp.evpn.0 route table are also placed in the default-switch.evpn.0 table, which additionally contains all locally generated EVPN routes.


BGP Troubleshooting Commands

■ Some other BGP troubleshooting commands include


•show bgp neighbor
•show route advertising-protocol bgp neighbor-IP-address
•show route receive-protocol bgp neighbor-IP-address
•show bgp group


BGP Troubleshooting
This course does not cover extensive BGP configuration and troubleshooting. However, many of the common BGP issues can be resolved by analyzing the output from a few troubleshooting commands, which are shown in the slide.


VTEP Interface
■ Verify status of VTEP interfaces
user@leafl> show interfaces vtep
Physical interface: vtep, Enabled, Physical link is Up
  Interface index: 641, SNMP ifIndex: 518
  Type: Software-Pseudo, Link-level type: VxLAN-Tunnel-Endpoint, MTU: Unlimited, Speed: Unlimited
  Device flags   : Present Running
  Link type      : Full-Duplex
  Link flags     : None
  Last flapped   : Never
    Input packets : 0
    Output packets: 0

  << One VTEP interface will automatically get instantiated for the locally attached network
  Logical interface vtep.32768 (Index 578) (SNMP ifIndex 557)
    Flags: Up SNMP-Traps Encapsulation: ENET2
    Ethernet segment value: 00:00:00:00:00:00:00:00:00:00, Mode: single-homed, Multi-homed status: Forwarding
    VXLAN Endpoint Type: Source, VXLAN Endpoint Address: 192.168.100.11, L2 Routing Instance: default-switch, L3 Routing Instance: default
      Input packets : 0
      Output packets: 0

  << One VTEP interface will automatically get instantiated for each remote VTEP that is discovered
     through received EVPN routes. The interface is used to tunnel data to and from a remote VTEP
  Logical interface vtep.32771 (Index 559) (SNMP ifIndex 560)
    Flags: Up SNMP-Traps Encapsulation: ENET2
    VXLAN Endpoint Type: Remote, VXLAN Endpoint Address: 192.168.100.13, L2 Routing Instance: default-switch, L3 Routing Instance: default
      Input packets : 30
      Output packets: 104
    Protocol eth-switch, MTU: Unlimited
      Flags: Trunk-Mode

VTEP Interfaces
The status of VTEP interfaces can be verified through the CLI. The show interfaces vtep command shows the local VTEP interfaces. One VTEP logical interface will be created for each locally attached LAN segment in the VXLAN. One VTEP logical interface will be created for each remote VTEP, which is discovered through the EVPN control plane. Local VTEP endpoint addresses are always the local loopback interface address, and remote VTEP endpoints should always be the remote loopback address.


VTEP Source and Remote

■ Verify status of local and remote VTEPs


{master:0}
user@leaf1> show ethernet-switching vxlan-tunnel-end-point source      << Verify that the local VTEP is configured
Logical System Name       Id  SVTEP-IP         IFL    L3-Idx  SVTEP Mode  with the correct source IP and VNI
<default>                 0   192.168.100.11   lo0.0     0
    L2-RTT                   Bridge Domain      VNID     MC-Group-IP
    default-switch           v10+10             5010     0.0.0.0

{master:0}
user@leaf1> show ethernet-switching vxlan-tunnel-end-point remote      << Verify which remote VTEPs are reachable
Logical System Name       Id  SVTEP-IP         IFL    L3-Idx  SVTEP Mode
<default>                 0   192.168.100.11   lo0.0     0
 RVTEP-IP         IFL-Idx   NH-Id   RVTEP Mode
 192.168.100.13   559       1764    RNVE
    VNID        MC-Group-IP
    5010        0.0.0.0

user@spine1> show l2-learning vxlan-tunnel-end-point source            << Equivalent MX commands
user@spine1> show l2-learning vxlan-tunnel-end-point remote

Verify Source and Remote VTEPs


The show ethernet-switching vxlan-tunnel-end-point source and show ethernet-switching vxlan-tunnel-end-point remote commands are used to verify the status and properties of VXLAN tunnel interfaces.


MAC Table

■ View MAC addresses that have been learned by the VTEP

user@leaf1> show ethernet-switching table                << Shows locally and remotely learned MACs
MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, P - Persistent static
           SE - statistics enabled, NM - non configured MAC, R - remote PE MAC, O - ovsdb MAC)

Ethernet switching table : 2 entries, 2 learned
Routing instance : default-switch
   Vlan        MAC                  MAC       Logical          Active
   name        address              flags     interface        source
   v10         52:54:00:2c:4b:a2    D         vtep.32771       192.168.100.13    << Remotely learned MAC of host2
   v10         52:54:00:5e:88:6a    D         xe-0/0/0.0                         << Locally learned MAC of host1

{master:0}
user@leaf1> show ethernet-switching vxlan-tunnel-end-point remote mac-table      << Shows only remotely learned MACs

MAC flags (S - static MAC, D - dynamic MAC, L - locally learned, C - Control MAC
           SE - Statistics enabled, NM - Non configured MAC, R - Remote PE MAC, P - Pinned MAC)

Logical system : <default>
Routing instance : default-switch
 Bridging domain : v10+10, VLAN : 10, VNID : 5010
   MAC                  MAC       Logical          Remote VTEP
   address              flags     interface        IP address
   52:54:00:2c:4b:a2    D                          192.168.100.13

user@spine1> show l2-learning vxlan-tunnel-end-point source             << Equivalent MX commands
user@spine1> show l2-learning vxlan-tunnel-end-point remote

MAC Table
The show ethernet-switching table command displays the MAC addresses that have been learned by the VTEP. To view the MAC addresses that have been learned from a remote VTEP, the show ethernet-switching vxlan-tunnel-end-point remote mac-table command can be used.


Spine - VXLAN Layer 3 Gateway


■ To configure the bridge domain, VXLAN encapsulation, IRB interface, virtual
switch, and virtual router (QFX example shown)
spine1 and spine2 will have the same virtual gateway configuration. Each router will automatically
advertise the same virtual IP (10.1.1.254) and MAC address (00:00:5e:01:01:01) to remote PEs.

{master:0}[edit]
lab@spine1# show interfaces irb
unit 10 {
    proxy-macip-advertisement;
    virtual-gateway-accept-data;
    family inet {
        address 10.1.1.100/24 {
            primary;
            virtual-gateway-address 10.1.1.254;
        }
    }
}
unit 20 {
    proxy-macip-advertisement;
    virtual-gateway-accept-data;
    family inet {
        address 10.1.2.100/24 {
            primary;
            virtual-gateway-address 10.1.2.254;
        }
    }
}

{master:0}[edit]
lab@spine1# show vlans                  << Configure the VLANs that participate in the routing domain,
default {                                  and assign the IRB interface as the Layer 3 interface for
    vlan-id 1;                             the broadcast domain
}
v10 {
    vlan-id 10;
    l3-interface irb.10;
    vxlan {
        vni 5010;
    }
}
v20 {
    vlan-id 20;
    l3-interface irb.20;
    vxlan {
        vni 5020;
    }
}

Layer 3 Gateway Configuration


To configure a Layer 3 gateway, an IRB interface must be configured on a device that supports L3 gateway functionality. The IRB interface is added to the VLAN as the L3-interface, and the VLAN is associated with the VNI, which places the IRB within the VNI as well. In this manner, the IRB interface becomes an interface that is reachable by all other devices within the VNI, and the MAC address of the IRB interface is advertised to remote VTEPs. Because of the multiple lookups required to perform the chain of functions required as a Layer 3 gateway, not all devices support this functionality.


Spine - VXLAN Layer 3 Gateway Tunnels


• Verify that the Spine device is a VTEP endpoint (participates in the VXLAN broadcast domain)

{master:0}
lab@spine1> show ethernet-switching vxlan-tunnel-end-point source
Logical System Name       Id  SVTEP-IP        IFL    L3-Idx  SVTEP Mode   << Spine is the source VTEP of
<default>                 0   192.168.100.1   lo0.0     0                    VNID 5010 and 5020
    L2-RTT                   Bridge Domain      VNID     MC-Group-IP
    default-switch           v10+10             5010     0.0.0.0
    default-switch           v20+20             5020     0.0.0.0

{master:0}
lab@spine1> show ethernet-switching vxlan-tunnel-end-point remote         << Spine has VXLAN tunnels to remote
Logical System Name       Id  SVTEP-IP        IFL    L3-Idx  SVTEP Mode      leaf devices (VTEP endpoints)
<default>                 0   192.168.100.1   lo0.0     0
 RVTEP-IP         IFL-Idx   NH-Id   RVTEP Mode
 192.168.100.2    558       1740    RNVE             << spine2
    VNID        MC-Group-IP
    5020        0.0.0.0
    5010        0.0.0.0
 RVTEP-IP         IFL-Idx   NH-Id   RVTEP Mode
 192.168.100.11   557       1726    RNVE
    VNID        MC-Group-IP
    5010        0.0.0.0
 RVTEP-IP         IFL-Idx   NH-Id   RVTEP Mode
 192.168.100.13   556       1723    RNVE
    VNID        MC-Group-IP
    5020        0.0.0.0

Note: Notice that no VXLAN tunnel has been created to leaf2. This is because leaf2 has not advertised any locally connected VNI.

Layer 3 Gateway Tunnels


The example shows the VXLAN tunnels on a Layer 3 gateway. The device has a local VTEP interface that terminates in both VNIs, as well as a remote VTEP interface for each VNI. Note that with a route reflector topology, the spine devices create VXLAN tunnels between each other, and therefore there is a VXLAN tunnel for each VNI between the route reflectors.


Host - Verify Reachability


■ Verify reachability learned on hosts
• Verify IP connectivity
• Verify MAC address (ARP)

user@host1:~$ arp -n
Address          HWtype  HWaddress            Flags Mask   Iface
172.25.11.254    ether   fe:0e:0f:66:1b:39    C            ens3
10.1.1.254       ether   00:00:5e:01:01:01    C            ens4    << Virtual MAC address of L3 Gateway (spine1 and spine2)
10.1.1.2         ether   52:54:00:2c:4b:a2    C            ens4    << MAC address of host2

user@host1:~$ ping 10.1.1.2
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.
64 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=199 ms
64 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=118 ms
^C
--- 10.1.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 117.736/158.196/198.657/40.462 ms

user@host2:~$ arp -n
Address          HWtype  HWaddress            Flags Mask   Iface
172.25.11.254    ether   fe:0e:0f:66:1b:39    C            ens3
10.1.1.1         ether   52:54:00:5e:88:6a    C            ens4    << MAC address of host1
10.1.1.254       ether   00:00:5e:01:01:01    C            ens4    << Virtual MAC address of L3 Gateway (spine1 and spine2)

Verify Host Reachability


From the host, the arp command shows that host1, in the example, has learned the MAC address of the remote host that is in the same subnet. It has also learned the MAC address of the default gateway. The MAC address of the default gateway is the virtual MAC address assigned to the IRB interfaces on the spine devices.

Note that the MAC address shown is the default virtual MAC address that is associated with the virtual gateway address. This is the same virtual MAC address that is used for a virtual gateway when using VRRP as well. Because the virtual MAC address is a fixed value, the virtual MAC address for all configured subnets will be the same if the default value is used. Juniper Networks recommends manually assigning a virtual MAC address for the virtual gateway instead of using the default address.
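As a sketch of that recommendation, a virtual MAC can be assigned per IRB unit on platforms that support the EVPN virtual gateway feature; the MAC value below is only an example drawn from the documentation address range:

[edit interfaces irb unit 10]
lab@spine1# set virtual-gateway-v4-mac 00:00:5e:00:53:01    << manually assigned virtual gateway MAC for this subnet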


Layer 3 Gateway Behavior


■ Layer 3 Gateway behavior
• L3 Gateway has a subinterface that is part of the VN I (VXLAN Broadcast Domain)
• The IRB interface acts as a device (gateway device) in the default routing instance of the spine devices
• IRB interfaces can be placed in VRFs for separation of routing domains (multi-tenant environment)
• When irb interfaces are placed in separate VRFs, interface routes of connected VRFs must be shared using routing policy
• Traffic that must be routed is forwarded from host1 to the IRB interface through VXLAN tunnel
• L3 Gateway decapsulates original VXLAN header, performs a route lookup, re-encapsulates with a new VXLAN
header, and forwards traffic to new VN ls

[Figure: the Layer 3 gateway holds an IRB interface in inet.0 that is a member of the VXLAN broadcast domain. Traffic from host1 is tunneled from its VTEP to the gateway, routed between VXLAN domains at the IRB, and re-encapsulated toward the VTEP serving host2.]

Layer 3 Gateway Behavior


The primary goal of a VXLAN Layer 3 gateway is to allow traffic to be forwarded from one VNI to another VNI. When a device within a VNI is required to send a packet to a host on a different subnet, it must forward the frame to the default gateway. The frame is sent to the virtual MAC address of the IRB interface that has been configured within the VNI as the gateway address for the subnet. The Layer 3 gateway processes the packet, removes the original VXLAN header, determines the next hop of the original packet, re-encapsulates the packet in a new VXLAN header sourced from the Layer 3 gateway, and forwards the packet through the VXLAN tunnel to the end host. From a routing and forwarding perspective, the original IP packet remains unchanged, but the original source MAC address is changed to the MAC address of the Layer 3 gateway.

If the VTEP device is not capable of performing both the VTEP and Layer 3 gateway functions, a router can be configured as a host, which is connected to a VTEP. This requires one more forwarding step in the process, since the VTEP decapsulates the original frame and forwards it to the default gateway, which performs its normal Layer 3 routing functions, and then forwards the frame back to the VTEP in a new VLAN, which corresponds to the next routing next hop.
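The slide also notes that IRB interfaces can be placed in VRFs for multi-tenant separation. The following is a minimal sketch of such a tenant routing instance; the instance name TENANT-A and the route distinguisher and target values are illustrative choices, not part of the example configuration:

[edit routing-instances TENANT-A]
lab@spine1# show
instance-type vrf;
interface irb.10;                        << tenant-facing Layer 3 gateway interfaces
interface irb.20;
route-distinguisher 192.168.100.1:100;
vrf-target target:65000:100;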


Multihomed Sites - No LAG


■ Multihome with no LAG
• Assign an ESI to the shared Ethernet segment
• Both VTEPs advertise connectivity to same Ethernet segment

[Figure: host1 is multihomed to VTEPA (interface .1) and VTEPB (interface .3) on the shared Ethernet segment ESI 00:01:01:01:01:01:01:01:01:01. host2 (.2) attaches to the remote VTEPC, which reaches the segment over VXLAN tunnels to both VTEPA and VTEPB.]

Multihoming Without LAG


A device can be multihomed to the switched network without using link aggregation. This requires that each link on the host be configured with a unique IP address, to which remote devices can forward traffic.

Whenever the same physical device is connected to the network with multiple links, the VXLAN network must be assigned an Ethernet Segment ID, or ESI. By default, a single-homed device receives the reserved ESI value of 00:00:00:00:00:00:00:00:00:00. When a site is multihomed, a non-default ESI value must be used. This process is used to reduce BUM traffic flooding and loops.

In the example, BUM traffic from host1 leaves interface A and arrives at VTEPA, and VTEPA forwards the BUM traffic to all other VTEPs that participate in the same VNI. Unless VTEPA and VTEPB are aware that they are connected to the same physical host, VTEPB would forward the BUM traffic received from VTEPA back to host1. To avoid this behavior, both VTEPA and VTEPB are configured with the same ESI for the LAN segments that connect to host1. In this manner, VTEPB can identify that the BUM traffic received from VTEPA originated on the same ESI, and VTEPB will not forward the traffic back to host1.


Multihomed Sites Configuration


{master:0}[edit]
user@leaf1# show interfaces xe-0/0/0
esi {
    00:01:01:01:01:01:01:01:01:01;
    all-active;
}
unit 0 {
    family ethernet-switching {
        interface-mode trunk;
        vlan {
            members v10;
        }
    }
}

{master:0}[edit]
user@leaf2# show interfaces xe-0/0/0
esi {
    00:01:01:01:01:01:01:01:01:01;
    all-active;
}
unit 0 {
    family ethernet-switching {
        interface-mode trunk;
        vlan {
            members v10;
        }
    }
}

[Figure: host1 (.1/.3) is multihomed to leaf1 and leaf2, which share ESI 00:01:01:01:01:01:01:01:01:01; host2 (.2) attaches to leaf3, which reaches the segment over VXLAN tunnels to leaf1 and leaf2.]

Multi homed Configuration


The ESI associated with a physical link is configured under the [edit interfaces interface-name esi] hierarchy. As shown in the example, the physical interfaces on leaf1 and leaf2 that connect to the same physical host, which are also in the same broadcast domain, are configured with the same ESI value. The all-active flag indicates to remote VTEPs that the Ethernet segment can be reached through all VTEPs that connect to the segment. The all-active mode is the only forwarding mode supported on Juniper Networks devices at the time this course was developed.

www.juniper.net Configuring VXLAN • Chapter 6 - 31


Data Center Fabric with EVPN and VXLAN

Multihomed Sites Remote Route Table


{master:0}
user@leaf3> show route table bgp.evpn

bgp.evpn.0: 20 destinations, 40 routes (20 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1:192.168.100.11:1::010101010101010101::0/192 AD/EVI
               *[BGP/170] 00:02:14, localpref 100, from 192.168.100.1
                  AS path: I, validation-state: unverified
                  to 172.16.1.13 via xe-0/0/2.0
                > to 172.16.1.25 via xe-0/0/3.0
                [BGP/170] 00:02:14, localpref 100, from 192.168.100.2
                  AS path: I, validation-state: unverified
                  to 172.16.1.13 via xe-0/0/2.0
                > to 172.16.1.25 via xe-0/0/3.0
1:192.168.100.12:0::010101010101010101::FFFF:FFFF/192 AD/ESI
               *[BGP/170] 00:02:17, localpref 100, from 192.168.100.1
                  AS path: I, validation-state: unverified
                  to 172.16.1.13 via xe-0/0/2.0
                > to 172.16.1.25 via xe-0/0/3.0
                [BGP/170] 00:02:16, localpref 100, from 192.168.100.2
                  AS path: I, validation-state: unverified
                  to 172.16.1.13 via xe-0/0/2.0
                > to 172.16.1.25 via xe-0/0/3.0

(Figure: leaf1 and leaf2 advertise ESI 00:01:01:01:01:01:01:01:01:01 for host1; leaf3, which connects to host2, receives the EVPN Type 1 routes over the VXLAN overlay.)
iQ 2019 Juniper Networks, Inc All Rights Reserved

Remote Route Table


From VTEP leaf3, two entries for the remote ESI are present in the routing table. The EVPN Type 1 route is used to advertise
connectivity to a non-default tagged ESI. The device leaf3 is permitted to forward traffic to the end destination through either
of the remote VTEPs.

Chapter 6 - 32 • Conf iguring VXLAN www.j uniper.net


Data Center Fabric with EVPN and VXLAN

Multihomed Sites - LAG


■ Multihome
• Assign an ESI to the shared Ethernet segment
• Both VTEPs advertise connectivity to the same Ethernet segment
• Configure the same LACP system ID on each leaf device
(Figure: host1 connects to leaf1 and leaf2 through a LAG on ESI 00:01:01:01:01:01:01:01:01:01 with LACP system ID 01:02:03:04:05:06; a VXLAN tunnel connects leaf1 and leaf2 to leaf3, which connects to host2.)
C> 2019 Juniper Networks, Inc All Rights Reserved

Multihoming Using LAG


The process of configuring a multihomed site using LAG is similar to the process used without LAG. The key difference is that
with LAG, all links that connect VTEPs to a host are placed in a LAG bundle on both the host and the VTEPs.

Link aggregation groups are normally configured on a single device. The Link Aggregation Control Protocol (LACP) manages
the links in the bundle. As you can see in the example, the LAG terminates on two different devices within the network; one
link terminates on leaf1, and the other on leaf2.

An EVPN Type 4 route, or Ethernet segment route, is advertised by leaf1 and leaf2 to indicate that they are connected
to the same Ethernet segment. The Ethernet segment route is used in the election of a designated forwarder. The
designated forwarder is the device connected to the ESI that is responsible for forwarding BUM traffic for the segment. A
non-designated forwarder may forward unicast traffic to the Ethernet segment, but blocks BUM traffic to avoid packet
duplication. In addition, an auto-discovery route, or EVPN Type 1 route, is generated by each connected VTEP and advertised
to remote VTEPs. The auto-discovery route indicates which forwarding mode is configured on the device, which in the case of
Juniper Networks devices, is always all-active.

From the perspective of host1, the LAG must terminate on the same remote device, or must at least appear to terminate on the
same remote device. To permit this functionality, the LACP system ID on leaf1 and leaf2 is configured to the same value.
LACP control packets sourced from leaf1 and leaf2 arrive on host1 and appear to originate from the same remote device.

To enable leaf1 and leaf2 to manage the shared link properly, once again MP-BGP is used. Leaf1 and leaf2 each configure
LACP with a single physical link toward host1, and the same ESI is assigned to the aggregated Ethernet interface on both
devices. As long as the devices are connected to the VXLAN network and the underlay fabric, they maintain the LACP status
toward host1 as active. In the event that a leaf device loses BGP connectivity to the fabric, and is therefore disconnected
from the fabric, the isolated VTEP sets the LACP status to standby, thereby blocking traffic between the host and that VTEP.

To limit traffic loss and delay in the fabric network, when a VTEP that advertises connectivity to an ESI is
removed from the network because of a failure or some other cause, a mass withdrawal of the routes associated with that
VTEP occurs. Forwarding next hops toward the withdrawn device are re-mapped to the remaining VTEPs
that retain connectivity to the ESI.

www.j uniper.net Configuring VXLAN • Chapter 6-33


Data Center Fabric with EVPN and VXLAN

Multihomed Sites - LAG Configuration


{master:0}[edit]
user@leaf1# show interfaces ae0
esi {
    00:01:01:01:01:01:01:01:01:01;
    all-active;
}
aggregated-ether-options {
    lacp {
        system-id 01:02:03:04:05:06;
    }
}
unit 0 {
    family ethernet-switching {
        interface-mode trunk;
        vlan {
            members v10;
        }
    }
}

{master:0}[edit]
user@leaf1# show interfaces xe-0/0/0
gigether-options {
    802.3ad ae0;
}

{master:0}[edit]
user@leaf2# show interfaces ae0
esi {
    00:01:01:01:01:01:01:01:01:01;
    all-active;
}
aggregated-ether-options {
    lacp {
        system-id 01:02:03:04:05:06;
    }
}
unit 0 {
    family ethernet-switching {
        interface-mode trunk;
        vlan {
            members v10;
        }
    }
}

{master:0}[edit]
user@leaf2# show interfaces xe-0/0/0
gigether-options {
    802.3ad ae0;
}

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN-LAG Configuration
The example shows the configuration of an EVPN-LAG on two leaf devices, each of which has a single physical link to the
Ethernet segment. Note that both the ESI and the LACP system ID are synchronized on the two devices, as are the interface
parameters. The EVPN-LAG functions as a single logical link toward the connected host.

Chapter 6-34 • Configuring VXLAN www.juniper.net


Data Center Fabric with EVPN and VXLAN

Verify LACP Leaf1


{master : 0}
user@leafl> show lacp statistics interfaces
Aggregated interface : aeO
LACP Statistics : LACP Rx LACP Tx Unknown Rx Illegal Rx
xe-0/0/0 2517 93 0 0

{master : 0}
user@leafl> show interfaces aeO detail
Physical interface : aeO , Enabled, Physi cal link is Up
Interface index : 670 , SNMP ifindex : 562 , Generation : 161
  Link-level type: Ethernet, MTU: 1514, Speed: 10Gbps, BPDU Error: None, Ethernet-Switching Error: None, MAC-REWRITE Error: None,
  Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled, Minimum links needed: 1, Minimum bandwidth needed: 1bps

Agg regate member links : 1

    LACP info:            Role     System      System             Port     Port    Port
                                   priority    identifier         priority number  key
      xe-0/0/0.0          Actor    127         01:02:03:04:05:06  127      3       1
      xe-0/0/0.0          Partner  65535       8e:56:7f:c8:be:9b  255      2       9
LACP Stati stics : LACP Rx LACP Tx Unknown Rx Illegal Rx
xe-0/0/0 . 0 0 0 0 0
Marker Statistics : Marker Rx Resp Tx Unknown Rx Illegal Rx
xe-0/ 0/0 . 0 0 0 0 0
  Protocol eth-switch, MTU: 1514, Generation: 194, Route table: 5, Mesh Group: all_ces,
    EVPN multi-homed status: Blocking BUM Traffic to ESI, EVPN multi-homed ESI Split Horizon Label: 0, Next-hop: 1766,
    vpls-status: up
    Flags: Is-Primary

© 2019 Juniper Networks, Inc. All Rights Reserved.

Verify LACP
The show lacp statistics interfaces command is used to verify the LACP status of the links in an EVPN-LAG. Note
that on the device leaf1, the EVPN multihomed status is set to Blocking BUM Traffic to ESI. This indicates that this
device was not elected to be the designated forwarder for this Ethernet segment.

www .j uniper.net Conf iguring VXLAN • Chapter 6 - 35


Data Center Fabric with EVPN and VXLAN

Verify LACP Leaf2


{master : 0}
user@leaf2> show lacp s tatistics interfaces
Aggregated interface : aeO
LACP Stati sti cs : LACP Rx LACP Tx Unknown Rx I l legal Rx
xe-0/0/0 3202 11 5 0 0

{master : 0}
user@l eaf2> show interfaces aeO detail
Physical i nte r face : aeO , Enabl ed, Ph ysi cal link is Up
Inte rface index : 670 , SNMP i findex : 563 , Generation : 161
  Link-level type: Ethernet, MTU: 1514, Speed: 10Gbps, BPDU Error: None, Ethernet-Switching Error: None, MAC-REWRITE Error: None,
  Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled, Minimum links needed: 1, Minimum bandwidth needed: 1bps

Agg regate member links : 1

    LACP info:            Role     System      System             Port     Port    Port
                                   priority    identifier         priority number  key
      xe-0/0/0.0          Actor    127         01:02:03:04:05:06  127      3       1
      xe-0/0/0.0          Partner  65535       8e:56:7f:c8:be:9b  255      1       9
    LACP Statistics:                   LACP Rx     LACP Tx   Unknown Rx   Illegal Rx
      xe-0/0/0.0                           295          10            0            0
Ma rker Statistics : Marker Rx Resp Tx Unknown Rx Illegal Rx
xe-0/0/0 . 0 0 0 0 0
  Protocol eth-switch, MTU: 1514, Generation: 192, Route table: 5, Mesh Group: all_ces,
    EVPN multi-homed status: Forwarding, EVPN multi-homed ESI Split Horizon Label: 0, Next-hop: 1764, vpls-status: up

© 2019 Juniper Networks, Inc. All Rights Reserved.

EVPN-LAG Status on Designated Forwarder


On device leaf2, the EVPN multi-homed status is set to Forwarding. This indicates that leaf2 is the designated forwarder,
and that it will forward both unicast and BUM traffic to the devices connected to the Ethernet segment.

Chapter 6-36 • Configuring VXLAN www.juniper.net


Data Center Fabric with EVPN and VXLAN

Summary

■ In this content, we:


• Configured EVPN controlled VXLAN

© 2019 Juniper Networks, Inc. All Rights Reserved.

We Discussed:
• Configuring EVPN controlled VXLAN.

www .juniper.net Configuring VXLAN • Chapter 6 - 3 7


Data Center Fabric with EVPN and VXLAN

Review Questions

1. What is the purpose of the Route Distinguisher in an EVPN


controlled VXLAN deployment?
2. What must be configured in an EVPN controlled VXLAN
deployment to support a multi-homed site?
3. What is the purpose of the vrf-target community in an EVPN
controlled VXLAN deployment?

© 2019 Juniper Networks, Inc. All Rights Reserved.

Review Questions
1.

2.

3.

Chapter 6-38 • Configuring VXLAN www.juniper.net


Data Center Fabric with EVPN and VXLAN

Lab: VXLAN

• Configure EVPN-VXLAN.

C> 2019 Juniper Networks, Inc All Rights Reserved

Lab: VXLAN
The slide provides the objective for this lab.

www .juniper.net Configuring VXLAN • Chapter 6 - 39


Data Center Fabric with EVPN and VXLAN

Answers to Review Questions


1.
The route distinguisher is added to an advertised route prefix to ensure that the prefix is unique within a shared routing
domain.

2.
An administrator-defined Ethernet Segment ID (ESI) is required when configuring a multihomed site.

3.
The vrf-target community is used by MP-BGP to tag EVPN routes before they are advertised to remote VTEPs. It is also used to
identify which EVPN routes received from remote VTEPs should be accepted and imported into the local routing tables.

Chapter 6-40 • Configuring VXLAN www.juniper.net


un1Pe[
NETWORKS
Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 7: Basic Data Center Architectures

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Describe a basic data center deployment scenario

Q 2019 Juniper Networks, Inc All Rights Reserved

We Will Discuss:
• A basic data center deployment scenario.

Chapter 7 -2 • Basic Data Center Arch itectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Basic Data Center Architectures

➔ Requirements Overview
■ Base Design
■ Design Options and Modifications

C> 2019 Juniper Networks, Inc All Rights Reserved

Requirements Overview
The slide lists the topics we will discuss. We will discuss the highlighted topic first.

www .juniper.net Basic Data Center Arch itectures • Chapter 7-3


Dat a Center Fabric with EVPN and VXLAN

Organization Requirements

■ Data Center Design Requirements


• VLANs
• Application flow between hosts within the same broadcast domain
• Application flow between hosts within different broadcast domains
• Reachability
• Applications must be able to access external networking resources (Internet, corporate
WAN, etc.)
• Security
• Routed traffic to external destinations must pass through a security appliance
• Scalability
• The data center must be able to scale in a modular fashion

C> 2019 Juniper Networks, Inc All Rights Reserved

Data Center Design Requirements


Planning is key to implementing successful data center environments. The initial design should take into account several
factors, including the following:

• VLANs - How many VLANs will be required within the domain? How will traffic flow within the same VLAN? How
will traffic flow between hosts in different VLANs?

• Reachability - Do applications require Layer 2 communication? Do applications require Layer 3
communication? With which external networks (Internet, corporate WAN, etc.) will applications be required to
communicate?

• Security - What traffic will be required to pass through a security domain? How will that security domain be
implemented? Is an edge firewall sufficient and scalable? Will a security domain that contains several security
devices be required?

• Scalability - How will the initial design be impacted when the data center scales?

Chapter 7 -4 • Basic Data Center Arch itectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Proposed Solution

■ Solution outline
• Spine-Leaf topology
• VXLAN with EVPN control plane for Layer 2 domains
• Layer 2 gateways at the leaf nodes
• Layer 3 gateways at the spine nodes
• Security domain for external destinations

C> 2019 Juniper Networks, Inc All Rights Reserved

Proposed Solution
In our example design, the following parameters wi ll be used:

• Spine-leaf topology;

• VXLAN with EVPN control plane for Layer 2 domains;

• Layer 2 gateways implemented at the leaf nodes;

• Layer 3 gateways implemented at the spine nodes (Centrally Routed design); and

• A security domain through wh ich traffic to external destinations must pass.

www .juniper.net Basic Data Center Arch itectures • Chapter 7 - 5


Data Center Fabric with EVPN and VXLAN

Basic Data Center Architectures

■ Requirements Overview
➔ Base Design
■ Design Options and Modifications

© 2019 Juniper Networks, Inc. All Rights Reserved.

Base Design
The slide highlights the topic we discuss next.

Chapter 7 -6 • Basic Data Center Arch itectures www.j uniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - Physical Layout

■ Physical Topology
• Spine-Leaf Topology
• Dual-homed Servers
• Single Internet Gateway

(Figure: spine-and-leaf physical topology with dual-homed servers and a single Internet gateway.)
C> 2019 Juniper Networks, Inc All Rights Reserved

Physical Topology
The topology for this example is a simple spine-leaf topology. The number of spine and leaf devices can be increased as
needed without impacting the design.

Servers will be dual-homed to leaf devices for redundancy, which requires two leaf devices per rack of servers.

A single Internet gateway is implemented. Alternatively, dual gateways may be deployed. If a dual Internet gateway is
deployed, a chassis cluster is recommended, with a LAG to the spine devices. EVPN-LAG can be implemented on the spine
devices, with the security chassis cluster assigned to the same ESI.

www .juniper.net Basic Data Center Arch itectures • Chapter 7-7


Data Center Fabric with EVPN and VXLAN

Sample Topology - Underlay - IGP Option

■ IGP Underlay
• OSPF (Area 0)
• Advertise loopbacks
• Enables IBGP EVPN connections

(Figure: spine and leaf devices running OSPF area 0 in the underlay; servers are dual-homed to the leaf devices.)
C> 2019 Juniper Networks, Inc All Rights Reserved

IGP Underlay
Multiple options are available for the underlay network. A simple IGP underlay is usually sufficient for a basic IP fabric data
center design. The goal of the IGP is to advertise loopback addresses to all other fabric devices, which enables the use of
loopback addresses for the overlay BGP peering sessions.
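For illustration, a minimal OSPF underlay along these lines could be configured on each fabric device. The interface names are placeholders and are not taken from the course lab topology; the key points are that the fabric links run OSPF as point-to-point interfaces and that the loopback is advertised passively:

protocols {
    ospf {
        area 0.0.0.0 {
            /* fabric point-to-point links toward the other tier */
            interface xe-0/0/0.0 {
                interface-type p2p;
            }
            interface xe-0/0/1.0 {
                interface-type p2p;
            }
            /* advertise the loopback used for overlay BGP peering and as the VTEP source */
            interface lo0.0 {
                passive;
            }
        }
    }
}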

Chapter 7 -8 • Basic Data Center Arch itectures www.j uniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - Overlay - MP-BGP EVPN

• BGP Overlay
• IBGP sessions to loopback addresses
• Full mesh or route reflectors Internet

• Used for EVPN route advertisement


BGP
• family evpn signaling
• Same AS for all devices
Spine ---- --
Leaf -=-----=-----=-----=-
~

- - -
---+

-
---+ ---+

=
= =
=
=
= =
=
Servers =
= =
=
=
= =
=
Q 2019 Juniper Networks, Inc All Rights Reserved

EVPN Overlay
The overlay for the data center is based on MP-BGP. A full mesh of IBGP peers or route reflectors may be used to distribute
EVPN route information. The IBGP sessions will support the family evpn signaling option. All fabric devices will be
configured with the same autonomous system ID.
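As a sketch of that overlay, a BGP group on a leaf device might look similar to the following. The loopback addresses and AS number are placeholders; 192.168.100.1 and 192.168.100.2 are assumed here to be the spine (or route reflector) loopbacks:

routing-options {
    autonomous-system 65000;
}
protocols {
    bgp {
        group overlay-evpn {
            type internal;
            /* peer from the local loopback address */
            local-address 192.168.100.11;
            family evpn {
                signaling;
            }
            neighbor 192.168.100.1;
            neighbor 192.168.100.2;
        }
    }
}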

www .juniper.net Basic Data Center Arch itectures • Chapter 7-9


Data Center Fabric with EVPN and VXLAN

Sample Topology - EVPN VXLAN


■ EVPN VXLAN
• L2 Gateways on leaf devices
• L3 Gateways on spine devices
  • Includes L2 gateway on spine devices as well
  • Centrally Routed model
  • Use Anycast address for redundant gateway address
• VXLAN tunnels connect leaf devices
• VXLAN tunnels connect leaf-to-spine devices for L3 gateway access
  • Creates full mesh of VXLAN tunnels through BGP signaling

(Figure: spine devices act as L2 and L3 VXLAN gateways, leaf devices act as L2 gateways, with a full mesh of VXLAN tunnels between them.)
C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN VXLAN Topology


The EVPN VXLAN topology will be designed as a centrally routed bridging design, or CRB design. Leaf devices operate
as Layer 2 gateways only.

The Layer 3 gateways will be configured as distributed gateways on the spine nodes. The spine nodes will be configured as
Layer 2 gateways (VTEPs) and as distributed Layer 3 gateways (IRB interfaces within the Layer 2 domains).

The resulting topology creates a full mesh of VXLAN tunnels within the data center.
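One common way to provide the redundant anycast gateway is to pair an IRB interface with a virtual gateway address and map it to the VXLAN-enabled VLAN on every spine. The following is only a sketch; the addresses, VLAN ID, and VNI are placeholders:

interfaces {
    irb {
        unit 100 {
            family inet {
                /* unique per-spine address, shared virtual gateway address */
                address 10.1.100.2/24 {
                    virtual-gateway-address 10.1.100.1;
                }
            }
        }
    }
}
vlans {
    v100 {
        vlan-id 100;
        l3-interface irb.100;
        vxlan {
            vni 5100;
        }
    }
}

Hosts in VLAN 100 would then use 10.1.100.1 as their default gateway, and any spine can route for that subnet.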

Chapter 7-10 • Basic Data Center Architectures www.ju n iper .net


Data Center Fabric with EVPN and VXLAN

Sample Topology - Layer 2 Traffic Flow


■ Hosts within same VLAN
• L2 Gateway-to-L2 Gateway
• L2 traffic encapsulated to remote VTEP gateway

Note: Not all possible traffic flows are shown

(Figure: Layer 2 traffic between hosts in the same VLAN is encapsulated at the ingress leaf L2 gateway and carried over a VXLAN tunnel to the remote leaf L2 gateway.)

C> 2019 J uniper Networks , Inc All Rights Reserved

Layer 2 Traffic Flow


Layer 2 traffic between hosts that are connected to different VTEPs will traverse VXLAN tunnels between the Layer 2
gateways.

www .j uniper.net Basic Dat a Center Arch itectures • Chapter 7 - 11


Data Center Fabric with EVPN and VXLAN

Sample Topology - Layer 3 Inter-VLAN Traffic

■ Hosts in different VLANs
• L2 Gateway-to-L2 Gateway that hosts L3 Gateway
• Routing instance with IRB interface performs route lookup and forwards to VXLAN tunnel to remote L2 Gateway VTEP (switches VNI)
• Remote VTEP decapsulates traffic and forwards to remote host

(Figure: inter-VLAN traffic travels from the ingress leaf to the spine L3 gateway, is routed onto the destination VNI, and is then tunneled to the remote leaf and delivered to the destination host.)

C> 2019 J uniper Networks , Inc All Rights Reserved

Layer 3 Inter-VLAN Traffic


Traffic that must be forwarded between VLANs will be forwarded to the spine devices. The spine devices route the traffic to
the new VNI and forward it to the destination VTEP in the destination VNI. The remote VTEPs decapsulate the traffic and
forward it to the end host.

Chapter 7 -12 • Basic Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - External Destinations


• Traffic to external destinations
• L2 Gateways-to-L2 Gateway that hosts L3
Gateway (IRB interface)
• Traffic exits VXLAN and sent to External Internet
••

Security Gateway (normal forwarding) •



External Security Gateway
• External gateway device performs NAT and

L3 Gateway
other services prior to passing traffic L3G~ eway

.,. .,. .- - L2 Gateway L2 Gateway


• Return traffic is forwarded to L3 internal . . .,, ........ -~ ·=--------. ... -
,jll>r- .•· -

gateway, where it is forwarded back to the ,'~-------- ',, VXLAN Domain


~ .· '
VXLAN (L2 Gateway function)
L2 Gateway
'\-~_.:.J--l-1---'-~-4----~-
--
-
,-1--,

L2 Gateway --
=
= =
=
-----· VXLAN Tunnel =
= =
=
- - - - - Native Layer 2 Traffic
=
= =
=
=
= =
=
•············· Traffic Flow

C> 2019 Juniper Networks , Inc All Rights Reserved

External Destinations
Traffic destined to external destinations will be forwarded to the spine devices. The IRB interface on the spine device will be
configured as the default gateway for all VNIs within the VXLAN. Once the traffic arrives on the spine device, it is routed to
the destination VNI, which is the VLAN associated with the link that connects to the external security device. The traffic is
then forwarded to the external security device, which performs NAT and other services prior to passing traffic to the external
destination.

External traffic arrives on the External Security Gateway, is processed, and then is forwarded to the VXLAN Layer 3 gateway.
The VXLAN Layer 3 gateway forwards the traffic to the VXLAN VNI that corresponds to the destination host.

www .juniper.net Basic Data Center Arch itectures • Chapter 7 - 13


Data Center Fabric with EVPN and VXLAN

Sample Topology - Scaling


■ Scalability
• L2 Gateways on leaf devices
  • Requires IBGP peering to all other leafs, or IBGP peering to route reflectors only
• L3 Gateways on spine devices
  • Does not change as data center grows (exception - when adding spine devices)
  • Requires hair-pinning when bridging VNIs within the data center
• VXLAN tunnels auto-signaled from new leaf to all remote leafs
  • Automatic full mesh of VXLAN tunnels to existing VTEPs

(Figure: a newly added leaf automatically builds VXLAN tunnels to the existing VTEPs once its overlay BGP sessions are established.)
C> 2019 Juniper Networks , Inc All Rights Reserved

Scalability Considerations
The Layer 2 gateway functions are performed on each leaf device. The VXLAN environment requires IBGP peering to all other
leaf devices, or to a centralized set of route reflectors. The centralized route reflector topology is more scalable because, as
new leaf nodes are added, only peer sessions from the new leaf to the route reflectors are required. The alternative, a full
mesh topology, requires that every leaf node in the network create an IBGP session to the newly added leaf.

Layer 3 gateways are on the spine devices, and therefore the Layer 3 routing functions do not change as more leaf nodes are
added. The only exception is when new spine devices are added in the future, if necessary.

One downside to this topology is that it requires all inter-VLAN traffic within the data center to pass through the spine nodes.

As new leaf nodes are added, the IBGP signaling automatically advertises the new leaf and new VTEP to the devices within
the VXLAN, and new VXLAN tunnels are automatically created.
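A hedged sketch of what the overlay group might look like on a spine acting as a route reflector follows; the cluster ID and leaf loopbacks are placeholders. Only this neighbor list has to grow as leaf devices are added:

protocols {
    bgp {
        group overlay-evpn-rr {
            type internal;
            local-address 192.168.100.1;
            family evpn {
                signaling;
            }
            /* reflect EVPN routes between the leaf clients */
            cluster 192.168.100.1;
            neighbor 192.168.100.11;
            neighbor 192.168.100.12;
            neighbor 192.168.100.13;
        }
    }
}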

Chapter 7-14 • Basic Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Hair-Pinning
■ Hosts in different VLANs connected to the same VTEP
• Traffic must be forwarded to the L3 gateway before returning to the original VTEP
• Uses uplink/downlink bandwidth
• Adds unnecessary hops

(Figure: inter-VLAN traffic between two hosts on the same leaf is hair-pinned up to the spine L3 gateway and back down to the same leaf.)

C> 2019 Juniper Networks, Inc All Rights Reserved

Hair-Pinning
With this design, all inter-VLAN traffic must pass through the Layer 3 gateway at the spine nodes. This causes what is known
as hair-pinning in the network. A downside to hair-pinning is that it utilizes uplink and downlink bandwidth and adds
unnecessary hops to traffic that is forwarded between devices that are connected to the same VTEP.

www .juniper.net Basic Data Center Arch itectu res • Chapter 7- 15


Data Center Fabric with EVPN and VXLAN

Failure of a Gateway

■ Question: What happens when a centralized gateway fails?
• Backup gateway takes over
• L3 GW functions on the remaining gateway double

(Figure: when one spine L3 gateway fails, all routed traffic shifts to the remaining spine gateway.)

C> 2019 Juniper Networks, Inc All Rights Reserved

Gateway Failure
Another downside to this topology is that the failure of a gateway device may have a more severe impact on traffic within the
data center. Not only does all Layer 2 traffic exiting the domain have to pass through a single spine device, but all Layer
3 traffic processing for inter-VLAN routing must also be handled by the remaining spine device and its fabric links.

Chapter 7-16 • Basic Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Basic Data Center Architectures

■ Requirements Overview
■ Base Design

➔ Design Options and Modifications

© 2019 Juniper Networks, Inc. All Rights Reserved.

Design Options and Modifications


This slide highlights the topic we discuss next.

www .j uniper.net Basic Dat a Center Arch itectures • Chapter 7 - 17


Data Center Fabric with EVPN and VXLAN

Sample Topology - Edge Routed Design


■ EVPN VXLAN
• L2 Gateways on leaf devices
• L3 Gateways on leaf devices
  • Requires leaf devices that support L3 gateway functions
  • Eliminates hair-pinning when switching VNIs within the same VTEP
  • Distributes L3 gateway functions
• Externally destined traffic is forwarded through a VXLAN tunnel to a spine device
  • Spine device decapsulates and forwards the original IP packet toward the external network
  • The "Internet" or external network routing device is considered a "site" or host

(Figure: edge routed design with both L2 and L3 gateways on the leaf devices; the spines host L3 gateways for externally destined traffic.)
C> 2019 Juniper Networks, Inc All Rights Reserved

Edge Routed Design


One modification that could be implemented in this design is to move the Layer 3 gateway functions to the leaf nodes. This
design is called an edge routed bridging design, or ERB design. With this design, Layer 2 gateway functions are
maintained on the leaf devices, and Layer 3 gateway functions are moved to the leaf devices as well. This distributes the Layer 3
gateway functions throughout the network.

Inter-VLAN traffic between devices that are connected to the same leaf node never leaves the leaf node. Inter-VLAN traffic
between devices in different VLANs that are connected to remote leaf nodes is routed on the source leaf and forwarded
across the VXLAN Layer 2 tunnel to the remote leaf.

With this design, traffic moves between Layer 3 domains at the leaf device. For traffic destined to external destinations, the
leaf device bridges the original traffic to the VNI that connects to the security device connected to the spine node, or to the
VNI that connects to the security domain.

Chapter 7-18 • Basic Data Center Architectu res www.juniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - No VXLAN to Spine

■ EVPN VXLAN
• L2 Gateways on leaf devices
• L3 Gateways on leaf devices
  • Requires leaf devices that support L3 gateway functions
  • Eliminates hair-pinning when switching VNIs within the same VTEP
• No VXLAN capabilities on spine devices
  • Spine devices run fabric protocols, and can be used as route reflectors for BGP route propagation

(Figure: lean spine design; VXLAN tunnels run only between the leaf devices, and the spines forward IP traffic and reflect overlay BGP routes.)

C> 2019 Juniper Networks, Inc All Rights Reserved

No VXLAN to Spine
Another option is to eliminate the VXLAN tunnels to the spine nodes. All Layer 2 forwarding still transits the spine devices
over the IP fabric, but the spine nodes are not required to support any VXLAN capabilities. The spine nodes in this
environment forward Layer 3 traffic between the leaf nodes.

The spine nodes still participate in the underlay routing protocols. They can be configured to relay the overlay routing
information as well, and serve as BGP peers or route reflectors, without running the EVPN or VXLAN components.

www .j uniper.net Basic Dat a Center Arch it ectures • Chapt er 7 - 19


Data Center Fabric with EVPN and VXLAN

Sample Topology - Edge Routed Design - Virtual


■ EVPN VXLAN
• L2 Gateways on servers (vRouter support for L2 gateway required)
• L3 Gateways on servers (vRouter support for L3 gateway required)
• Traffic moves between Layer 2 domains on the servers
• Virtual routers on servers become the edge of the VXLAN domain
• Leaf configured as L2 gateway for BMS that are deployed without virtual routers

(Figure: virtual routers on the servers act as the L2/L3 VXLAN gateways; the leaf devices provide gateway functions only for bare-metal servers.)
C> 2019 Juniper Networks, Inc All Rights Reserved

Edge Routed Design Using Virtual VTEPs


The VTEP functionality can also be moved to the individual hosts running in the network. This requires a software
implementation and virtual machines that support VXLAN Layer 2 and Layer 3 gateway functionality. In situations where
bare-metal servers are deployed in the network, the Layer 2 or Layer 3 gateway functions can be retained on the Layer 3
switch or router to which the bare-metal servers are connected. This functionality can be deployed in a mixed environment
as well, where some devices that are connected to the switch are bare-metal servers and some devices support virtual
Layer 2 and Layer 3 gateway capabilities.

Chapter 7-20 • Basic Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - BGP Underlay


■ Underlay with BGP
• Use EBGP peering between devices in the underlay
• Peer to the physical interface of directly connected devices
• Exports (advertises) loopback addresses to the rest of the domain (VTEP endpoint addresses)
• Provides BGP-level scaling, path selection, load sharing, and policy management in the underlay
• Each device uses a unique private AS number for the underlay

(Figure: EBGP sessions run over the point-to-point fabric links; each leaf and spine is its own private AS, for example AS65003 through AS65006 on the leaf devices.)

C> 2019 Juniper Networks, Inc All Rights Reserved

BGP Underlay
An EBGP underlay can also be configured in place of the IGP underlay. With the EBGP underlay, all fabric devices within the
network peer with all physically connected fabric devices. Each device exports its loopback address into the IP fabric domain to
provide reachability for the overlay BGP peering sessions. In larger environments, a BGP underlay can provide
additional scalability and policy management functions that may not be available with an IGP. In this environment, each
device in the fabric can be configured as an individual autonomous system, or devices can be grouped into autonomous
system groups, such as configuring the spine devices with one AS number and the leaf devices with another AS number. If
the latter method is chosen, additional parameters may be needed to change the BGP protocol's default route selection and loop
detection behavior.
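A minimal sketch of an EBGP underlay on one leaf follows; the AS numbers, neighbor addresses, and loopback range are placeholders. The export policy advertises the loopback, and multipath multiple-as enables ECMP across the spines:

policy-options {
    policy-statement export-loopback {
        term lo0 {
            from {
                protocol direct;
                route-filter 192.168.100.0/24 orlonger;
            }
            then accept;
        }
    }
}
protocols {
    bgp {
        group underlay {
            type external;
            export export-loopback;
            local-as 65003;
            multipath {
                multiple-as;
            }
            /* directly connected spine interface addresses */
            neighbor 172.16.1.1 {
                peer-as 65001;
            }
            neighbor 172.16.1.5 {
                peer-as 65002;
            }
        }
    }
}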

www .juniper.net Basic Data Center Arch itect ures • Chapter 7 - 21


Data Center Fabric with EVPN and VXLAN

Sample Topology - Dedicated Security Domain


■ Traffic to external destinations
• Forwarded to a dedicated security domain
• Dedicated security domain can be physical or logical (virtual) devices
• Scalable security domain

(Figure: traffic leaving the VXLAN domain is bridged by the L3 gateway to the VNI of the dedicated security domain, processed there, and then forwarded toward the External Security Gateway and the Internet; the legend shows VXLAN-encapsulated traffic, native Layer 2 traffic, and the traffic flow.)

C> 2019 J uniper Networks , Inc All Rights Reserved

Dedicated Security Domain


Another option is to move the security device from the edge of the network to a dedicated security domain within the data
center itself. This can allow greater scalability because the security domain can consist of multiple devices, each of which
has a different function. With this design, all traffic destined to external destinations is forwarded to the Layer 3 gateway. The
Layer 3 gateway bridges the traffic from the source VNI to the VNI associated with the security domain edge device. The
destination VTEP decapsulates the data packets and forwards them to the security domain's inside interface. Once the
security domain has processed the traffic, it is returned to the VXLAN network on a different VLAN/VNI. The new VNI contains
a Layer 3 gateway, which is capable of routing to the external networks - in this example, on the spine nodes. Return traffic
from external destinations is passed directly to the external VNI, which connects to the security domain's outside interface,
and is forwarded to the security domain to be processed and passed back into the VXLAN. With this deployment, it is
common to use VRFs on the spine devices to maintain traffic separation and manipulate the forwarding path of traffic as it
passes through the spine devices.

Chapter 7-22 • Basic Data Center Architectu res www.juniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - Per-VLAN on Dedicated Security Domain

■ Traffic to external destinations
• Forwarded to an IP address within the same VLAN in a dedicated security domain
• Requires a per-VLAN IP address on the security gateway within each broadcast domain
• Security domain performs routing between VLANs/domains
• Dedicated security domain can be physical or logical (virtual) devices
• Scalable security domain

(Figure: each VLAN has Layer 2 reachability to a per-VLAN interface on the security domain, which routes between VLANs and hands external traffic to the External Security Gateway; the legend shows VXLAN-encapsulated traffic, internal native Layer 2 traffic, and the public-facing traffic flow.)

C> 2019 Juniper Networks , Inc All Rights Reserved

Per VLAN Security Domain


An alternative to the previous approach is to configure an IP address in the security domain for each VLAN that is serviced in
the network. With this design, every VLAN within the network has Layer 2 connectivity to the logical interface on the
security domain that belongs to that VLAN. The security domain can be configured as the default gateway for all traffic that
will leave the VLAN. The security domain processes the traffic and forwards it back to the VXLAN network on a logical
interface that is placed in the destination VLAN/VNI. Traffic destined to external networks is placed in a VLAN/VNI that
connects to the gateway device at the domain edge.

www .j uniper.net Basic Data Center Arch itectures • Chapt er 7 - 23


Data Center Fabric with EVPN and VXLAN

Summary

■ In this content, we:


• Described a basic data center deployment scenario

© 2019 Juniper Networks, Inc. All Rights Reserved.

We Discussed:
• A basic data center deployment scenario.

Chapter 7-24 • Basic Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Review Questions

1. Where are Layer 3 Gateways placed in a centrally routed EVPN-


VXLAN environment?
2. Where are Layer 3 Gateways placed in an edge routed EVPN-
VXLAN environment?
3. What is the primary role of an IGP in an EVPN-VXLAN underlay
design?
4. What is the primary role of EBGP in an EVPN-VXLAN underlay
design?

© 2019 Juniper Networks, Inc. All Rights Reserved.

Review Questions
1.

2.

3.

4.

www .juniper.net Basic Data Center Arch itectures • Chapter 7 - 25


Data Center Fabric with EVPN and VXLAN

Lab: EVPN-VXLAN L3-GW


• Configure an EVPN-VXLAN distributed Layer 3 Gateway to bridge VXLAN
traffic between two VN Is and verify Layer 3 Gateway functions.
• Configure an EVPN-VXLAN distributed Layer 3 Gateway within a customer
VRF and verify Layer 3 Gateway functions within a customer VRF.

© 2019 Juniper Networks, Inc. All Rights Reserved.

Lab: EVPN-VXLAN L3-GW


The slide provides the objectives for th is lab.

Chapter 7-26 • Basic Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Answer to Review Questions


1.
Layer 3 gateways are normally placed in the spine devices in a centrally routed EVPN-VXLAN environment.

2.
Layer 3 gateways are normally placed in the leaf devices, or in hosts connected to leaf devices, in an edge routed
EVPN-VXLAN environment.

3.
The primary role of an IGP in an EVPN-VXLAN environment is to advertise paths to fabric devices, and to advertise loopback
interfaces in the underlay network for use in an overlay network configuration. It also provides ECMP across all forwarding
paths in the underlay.

4.
The primary role of EBGP in an EVPN-VXLAN underlay environment is the same as that of an IGP: to advertise loopback
interfaces for use in the overlay network, and to load-balance transit traffic across all available forwarding paths
between devices.

www .j uniper.net Basic Data Center Arch itectures • Chapter 7- 27


un1Pe[
NETWORKS
Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 8: Data Center Interconnect

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Define the term Data Center Interconnect
• Describe the DCI options when using an EVPN-VXLAN
• Configure Data Center Interconnect using EVPN-VXLAN

C> 2019 Juniper Networks, Inc All Rights Reserved

We Will Discuss:
• The term Data Center Interconnect;

• The DCI options when using EVPN-VXLAN; and

• Configuring Data Center Interconnect using EVPN-VXLAN.

Chapter 8 - 2 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Agenda: Data Center Interconnect

➔ DCI Overview
■ DCI Options for a VXLAN Overlay
■ EVPN Type-5 Routes
■ DCI Example

iQ 2019 Juniper Networks, Inc All Rights Reserved

DCI Overview
This slide lists the topics we will cover. We will discuss the highlighted topic first.

www.j uniper.net Data Center Interco nnect • Chapter 8-3


Data Center Fabric with EVPN and VXLAN

Data Center Interconnect

• What is a data center interconnect?


• Connection between two or more data centers
• DCI communication methods
• Interconnects can operate at Layer 2 and Layer 3
• Many transport options are available

(Figure: two Layer 3 fabric data centers, each with DCI routers, connected across a DCI transport network.)

C> 2019 Juniper Networks, Inc All Rights Reserved

Data Center Interconnect


Data centers can exist and be deployed in a variety of areas. They can be deployed within a corporation, in the cloud, or in
various sites around the globe. Organizations can choose to deploy their own data center, or have it deployed in an
organization that specializes in data center operations.

Whenever two or more data centers are deployed with in an organization that must communicate with each other, a method
of connecting them must exist. A connection between two or more data centers is called a data center interconnect (DCI).

A DCI can function at Layer 2 or Layer 3. A Layer 2 DCI bridges Layer 2 traffic across the transport network. A Layer 3 DCI
connects data centers with Layer 3, or IP routing. Many different transport options are available to interconnect sites.

Chapter 8-4 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Interconnect Physical Network Options

• There are several types of networks that can provide DCI


• Point-to-Point - Private Line, Dark Fiber
• IP - Customer owned , Service Provider owned
• MPLS - Customer owned, Service Provider owned


Q 2019 Juniper Networks, Inc All Rights Reserved

Physical Network Options


Before we talk about the protocols and bridging methods that are used to relay traffic from one data center to another, we
need to discuss the transport network over which data will pass. There are several types of networks that can provide DCI
functionality.
• Point-to-point links are private lines or dark fiber that interconnect sites. These types of transport networks are
dedicated to the organization, and are not shared with other organizations or customers.
• An IP transport network can be a customer-owned or a service provider-owned IP network.
• An MPLS interconnect uses Multiprotocol Label Switching to bridge two or more service domains, and can be
customer owned or service provider owned.

www.juniper.net Data Center Interconnect • Chapter 8 - 5


Data Center Fabric with EVPN and VXLAN

Dark Fiber

■ Dark Fiber
• Enterprise is responsible for DCI routing and end to end optical function
• Dedicated optical fiber. Can be lit with multiple WL, Highest quality and
bandwidth solution
• Most expensive option

(Figure: dark fiber DCI; ROADMs connect the DCI routers of the two Layer 3 fabric data centers over dedicated fiber.)

iQ 2019 Juniper Networks, Inc All Rights Reserved

Dark Fiber
Dark fiber is a fiber-optic network link that connects two or more sites. With a dark fiber interconnect, the enterprise is
responsible for DCI routing and the end-to-end optical function. The fiber is dedicated, and it can be lit with
multiple wavelengths. A dark fiber connection is the highest quality and highest bandwidth solution. It is also the most
expensive option.

Chapter 8-6 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Wavelength Service

■ Wavelength Service
• Enterprise is responsible for DCI routing and end to end optical function
• Fiber shared with multiple enterprises
• Less expensive option, or used when dark fiber is not available

(Figure: wavelength service DCI; the provider-owned fiber carries multiple wavelengths between the ROADMs that connect the DCI routers, with one wavelength leased to the enterprise and others leased to other customers.)

C> 2019 Juniper Networks, Inc All Rights Reserved

Wavelength Service
Wavelength services are similar to dark fiber, except the fiber is not dedicated to a single enterprise. A service provider owns
the fiber and uses wavelength-division multiplexing to separate data streams between locations. Each wavelength can be dedicated to an
individual customer, so an enterprise leases a wavelength or set of wavelengths on a shared fiber. A wavelength service is a
less expensive option than dark fiber, and a good option when dark fiber is not available.

www.j uniper.net Data Center Interconnect • Chapter 8-7


Data Center Fabric with EVPN and VXLAN

Managed Service with Ethernet/IP Handoff

• Managed Service with Ethernet/IP handoff


• Enterprise is responsible for DCI routing
• Optical connectivity is provider owned

(Figure: managed service DCI; the provider-owned optical network hands off Ethernet/IP to the DCI routers at each data center, with other wavelengths leased to other customers.)

C> 2019 Juniper Networks, Inc All Rights Reserved

Managed Service with Ethernet/IP Handoff


With a managed service with Ethernet/IP handoff, the enterprise is responsible for DCI routing but the optical connectivity is
provided by a service provider. From the enterprise or customer perspective, the DCI routers are connected to an Ethernet
segment.

Chapter 8-8 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

DCI Options - Large Enterprise

• Large Enterprise DCI


• Multiple sites (DC, COLO)
• Connectivity at different sites based on economics and availability of DCI options
• Enterprise owns the routing; owns transport depending on the site
• Private WAN based on MPLS

(Figure: multiple data centers and colocation sites interconnected by DCI routers over a private MPLS-based WAN.)
C> 2019 Juniper Networks , Inc All Rights Reserved

Large Enterprise Options


Large enterprises often have multiple data centers in different locations. When this is the case, the
connectivity at each site is based on the economics and the DCI options available at that location. Enterprise customers
own the routing, and can sometimes own the transport, depending on the site. A common solution is a WAN based on MPLS.
With an MPLS-based WAN, the service provider network can be transparent to the end customer (as is the case with a
Layer 2 VPN), or it can appear as a routed IP network.

www.juniper.net Data Center Interconnect • Chapter 8-9


Data Center Fabric with EVPN and VXLAN

DCI Routing Options Summary

                  IP Backbone                                   MPLS Backbone
                  DC Fabric - EVPN/VXLAN                        DC Fabric - EVPN/VXLAN

L3 DCI Only       DCI Option 1 - EVPN Type-5/VXLAN              DCI Option 4 - IPVPN/MPLS
                  (DC fabric and DCI collapsed on QFX10K)       (DC fabric and DCI collapsed on QFX10K)

L2 + L3 DCI       DCI Option 2 - EVPN Type-2+5/VXLAN - OTT      DCI Option 6 - VLAN handoff from DC fabric to MX;
                  with fully meshed VTEPs                       VPLS + IPVPN/MPLS or EVPN/MPLS on MX
                  (DC fabric and DCI collapsed on QFX10K)
C> 2019 Juniper Networks, Inc All Rights Reserved

DCI Routing Options


There are several options for DCI routing. When evaluating a DCI option, it is important to be aware of the abilities and
limitations of each option. It is also important to be aware of the requirements of the applications that will be running across
the DCI.

For Layer 3 DCI only, the data center fabric can be EVPN/VXLAN, and the DCI Option 1 example shown on the slide runs EVPN
with Type-5 routes across the DCI. With this option, no Layer 2 connectivity is established across the DCI link. Only Layer 3
routing is performed across the DCI link.

If Layer 2 and Layer 3 DCI are required, a hybrid of EVPN Type-2 routes and EVPN Type-5 routes can be used. EVPN
Type-2 routes are used for a Layer 2 stretch across the DCI. EVPN Type-5 routes are used when Layer 3 DCI is required.

When using an MPLS backbone, and a data center is running EVPN/VXLAN, an IP VPN/MPLS DCI can be configured.
If Layer 2 and Layer 3 connectivity is desired across an MPLS backbone, a VLAN handoff from the data center fabric to an
edge device running VPLS, IP VPN/MPLS, or EVPN/MPLS can be used. A VLAN or VXLAN can be stitched to an
MPLS label-switched path at each end of the DCI.

Chapter 8-10 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Agenda: Data Center Interconnect

➔ DCI Options for a VXLAN Overlay

© 2019 Juniper Networks, Inc. All Rights Reserved.

DCI Options for a VXLAN Overlay


The slide highlights the topic we discuss next.

www.j uniper.net Data Center Interconnect • Chapter 8 - 11


Data Center Fabric with EVPN and VXLAN

DCI Options for VXLAN Using EVPN Signaling

Option 1 - L3VPN-MPLS backbone: existing MPLS; easy implementation; OTT DCI (L3VPN)
Option 2 - EVPN-MPLS backbone: EVPN stitching; requires planning
Option 3 - EVPN-VXLAN over an existing WAN: OTT DCI (Internet); no MPLS
Option 4 - EVPN-VXLAN direct connect: easy implementation; OTT DCI (dark fiber)

(Figure: the four DCI options for a VXLAN overlay, each connecting the EVPN-VXLAN fabrics in DC 1 and DC 2 across a different backbone.)
Q 2019 Juniper Networks, Inc All Rights Reserved

DCI with EVPN/VXLAN


There are several methods to deploy EVPN/VXLAN across a Data Center Interconnect. These methods include:

• L3VPN over MPLS;

• EVPN stitching;

• EVPN/VXLAN over an existing WAN; and

• Direct Connect.

Chapter 8-12 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

L3VPN-MPLS Option 1

• EBGP peering to the service provider edge devices to exchange loopback interface routes
• IBGP peering (EVPN signaling) between the DCI routers
• VXLAN tunnel created between the DCI routers
• Provider network is transparent to VXLAN
• Layer 2 stretch, Layer 3 DCI (Type-5), or a mix

(Figure: the EVPN-VXLAN fabrics in DC 1 and DC 2 are joined over an L3VPN-MPLS backbone; the VXLAN tunnel runs between the DCI routers over the top of the L3VPN.)

C> 2019 Juniper Networks, Inc All Rights Reserved

L3VPN-MPLS
The L3VPN-MPLS option is easy to implement. The VXLAN tunnel runs over the top of a Layer 3 VPN. The VTEP endpoints
terminate on the devices within the data centers, and the data center sites appear to be directly connected across the VXLAN
tunnel.

www.juniper.net Data Center Interconnect • Chapter 8 - 13


Data Center Fabric with EVPN and VXLAN

EVPN-MPLS Option 2

• Service provider edge devices run EVPN-MPLS
• VXLAN tunnel to the edge device; the VXLAN tunnel is stitched to an MPLS LSP

(Figure: the EVPN-VXLAN fabrics in DC 1 and DC 2 connect to edge devices that stitch the VXLAN tunnels onto an EVPN-MPLS backbone.)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN-MPLS
EVPN stitching requires some planning, because it requires coordination between the enterprise and the service provider.
The enterprise VXLAN VNIs must be mapped to MPLS label-switched paths in the provider network. This mapping must
match on both ends of the DCI.

Chapter 8-14 • Data Center Interconnect www.ju n iper .net


Advanced Data Center Switching

EVPN-VXLAN Option 3

• VXLAN tunnel between the DCI edge devices (IBGP family evpn signaling)
• VXLAN tunnel from each leaf to its local edge device (IBGP family evpn signaling)

(Figure: the EVPN-VXLAN fabrics in DC 1 and DC 2 are joined over an IP WAN; local VXLAN tunnels terminate on the edge devices, and a separate VXLAN tunnel runs between the edge devices across the WAN.)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN-VXLAN Option 3
EVPN/VXLAN over an existing WAN connection is another over-the-top DCI option. This can be run over the Internet. With this
option, the WAN is an IP-routed network and VXLAN tunnels are configured from the DCI edge devices. One thing to note
about this scenario is that the VXLAN tunnels within a site do not cross the edge device. The local VXLAN tunnels terminate
on the edge device, and a separate VXLAN tunnel is used across the interconnect. Traffic is routed or bridged on the edge
device between the two VXLANs.

www.juniper.net Data Center Interconnect • Chapter 8 - 15


Data Center Fabric with EVPN and VXLAN

EVPN-VXLAN Option 4

• Leased lines or dark fiber (direct connect)
• VXLAN tunnel between the DC edge devices

(Figure: the EVPN-VXLAN fabrics in DC 1 and DC 2 are directly connected over leased lines or dark fiber, with the VXLAN tunnel running between the DC edge devices.)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN-VXLAN Option 4
With a Direct Connect DCI, the DCI connection is treated as a Layer 2 link. The DCI devices at each site appear to be directly
connected to each other, and VXLAN tunnels run directly between them.

Chapter 8-16 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

DCI with EVPN (L3 DCI) - IP Backbone


(Figure: Layer 3 DCI over an IP backbone. Each data center runs an EVPN-VXLAN fabric for Tenant 1 with green, red, and yellow VLANs; the spine devices act as L3 VXLAN gateways in a border role, the leaf devices act as L2/L3 VXLAN gateways, and EVPN peering across the inter-POD or inter-DCI IP fabric occurs between the border gateways only.)
C> 2019 Juniper Networks, Inc All Rights Reserved

DCI with EVPN - IP Backbone


Data centers with different IP address blocks can be interconnected across an IP backbone. The IP backbone data transport
is based on IP routing. Because each data center has its own unique IP address range, no Layer 2 bridging is needed.

Because no Layer 2 bridging or Layer 2 stretch is needed, it is not necessary to advertise MAC addresses across the DCI.
Instead, route prefixes can be advertised between the sites. A route prefix can represent a group of hosts within a site, or all
host addresses within a site. To use this option, EVPN Type-5 routes must be supported.

EVPN Type-5 routes differ from EVPN Type-2 routes in that EVPN Type-5 routes do not need a VXLAN tunnel to the protocol
next hop received in the route advertisement. The EVPN peering between the border gateways advertises prefixes that
exist within the data center domain, not MAC addresses.
www.juniper.net Data Center Interconnect • Chapter 8 - 17


Data Center Fabric with EVPN and VXLAN

DCI with EVPN (L2 + L3 DCI) - IP Backbone


(Figure: Layer 2 plus Layer 3 DCI over an IP backbone. EVPN peering across the inter-POD or inter-DCI IP fabric occurs between the border gateways only, with route reflectors on the spine L3 VXLAN gateways; the leaf L2/L3 VXLAN gateways maintain full-mesh VTEP peering so that stretched VLANs can be bridged end to end for Tenant 1.)

Q 2019 Juniper Networks, Inc All Rights Reserved

Layer 2 and Layer 3 DCI


When both Layer 2 and Layer 3 connectivity is needed, EVPN Type-2 and EVPN Type-5 routes can be advertised across the
DCI. Whenever a Layer 2 stretch is required, a VXLAN tunnel is created from a leaf device in one data center to a leaf
device in the other data center. VXLAN VNI bridging can take place on a Layer 3 VXLAN gateway at either location. For Layer
3 prefix advertisement, Type-5 routes can be used.

Chapter 8-18 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

DCI with IPVPN (L3 DCI) - MPLS Backbone


(Figure: Layer 3 DCI over an MPLS backbone. The spine L3 VXLAN gateways act as border gateways and exchange IPVPN routes with the inter-DCI MPLS fabric; IPVPN peering occurs between the border gateways only, and the leaf devices remain L2/L3 VXLAN gateways for Tenant 1.)

Q 2019 Juniper Networks , Inc All Rights Reserved

MPLS Backbone
With an MPLS backbone in the provider network, the border gateway peers to the service provider to exchange route
information. Routes from one data center are advertised to the service provider, transported across the service provider
network, and re-advertised back to the remote data center.

With this scenario, the IP prefixes from one data center are advertised to the remote data center and traffic is routed
between them.

www.juniper.net Data Center Interconnect • Chapter 8 - 19


Data Center Fabric with EVPN and VXLAN

L3 DCI To Public Cloud


(Figure: Layer 3 DCI to a public cloud. The border gateways exchange IPVPN routes across the inter-DCI MPLS fabric; Tenant 1 and Tenant 2 each keep their own VLANs and VRFs, with the leaf devices acting as L2/L3 VXLAN gateways.)

C> 2019 Juniper Networks, Inc All Rights Reserved

Public Cloud
A public cloud can be used to interconnect data centers. The public cloud option functions in a similar manner to an MPLS
backbone DCI. IP VPN routes are advertised across the public cloud from the edge routers in each data center.

Chapter 8-20 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Agenda: Data Center Interconnect

➔ EVPN Type-5 Routes

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type 5 Routes


The slide highlights the topic we discuss next.

www.j uniper.net Data Center Interconnect • Chapter 8 - 21


Data Center Fabric with EVPN and VXLAN

Layer 2 Stretch Between Two DCs


■ Stretching subnets between data centers requires the exchange of
EVPN Type-2 routes between DCs
• Type-2 route protocol next-hops must be validated by and forwarded to VTEP
tunnel next-hop
• Could be 1000s of routes (MACs)
[Slide diagram: host1 and host2, both in 10.1.1.0/24, sit in two data centers joined by DCI routers across a DCI transport network; the route tables show 10.1.1.0/24 > irb.0 and 10.1.1.254 on irb.0; MAC/IP advertisements for host2 are relayed over MP-IBGP (EVPN) sessions from L2 through S2, S1, and L1, and each device installs host2's MAC with a next hop pointing to its VXLAN tunnel toward L2 (L2 itself points to xe-0/0/0)]
Q 2019 Juniper Networks, Inc All Rights Reserved

Layer 2 Stretch
When a Layer 2 stretch is required between data centers, EVPN Type-2 routes must be advertised between the data centers. Since each EVPN Type-2 route represents a single MAC address within a data center, the number of routes advertised across the DCI link could be in the thousands. In addition, the BGP protocol next hop of Type-2 routes must be validated against the VXLAN tunnel next hop. This means that VXLAN tunnels must exist end to end across the data centers and the DCI transport network.

Chapter 8-22 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Unique Subnets Between DCs


■ Using unique subnets between data centers allows Layer 3 GWs to
exchange only Type-5 routes
• No MAC advertisements necessary between data centers - Type-5 NH
validated using the inet.0 route table
[Slide diagram: host1 (10.1.1.0/24) and host2 (10.1.2.0/24) sit in separate data centers joined by DCI routers acting as L3 gateways running EVPN; a Type-5 prefix advertisement for 10.1.2/24 with protocol next hop S2's lo0 is exchanged between S1 and S2 over the MP-IBGP (EVPN) session, while MAC/IP advertisements stay within each data center; each spine route table holds the local subnet via irb.0 and the remote subnet via the S1-S2 VXLAN tunnel]

C> 2019 J uniper Networks , Inc All Rights Reserved.

Layer 3 DCI
If the IP address range in each data center is unique to that data center, it's not necessary to advertise MAC addresses between data centers. In this scenario, IP prefixes can be advertised by using EVPN Type-5 routes. The EVPN Type-5 routes do not contain the MAC addresses of the hosts at each data center.

With a Layer 3 DCI, the destinations in the remote data center are on different subnets than the hosts in the original data center. This means that IP routing must take place. In the example, host1 is on network 10.1.1.0/24. To communicate with hosts in the 10.1.2.0/24 network, host1 must first send its data packets to its default gateway. In the example, the default gateway is the IRB interface on the DCI edge device, which in this case is the spine device.

The spine device does not operate as a traditional VXLAN Layer 3 gateway. With a traditional VXLAN Layer 3 gateway, the VNIs that are bridged must all be configured on the gateway device. When using Type-5 routes, standard IP routing is used to route between Layer 3 domains. In the example, the data center on the right contains subnet 10.1.2.0/24. The MAC address of host2 is advertised to the edge device spine2. There is a VXLAN tunnel between spine2 and leaf2. Spine2 installs host2's MAC address in its switching table with a next hop pointing to the tunnel that terminates on leaf2. With Type-5 routing enabled, the prefix associated with the IRB interface within that broadcast domain is installed in the route table. The spine2 device advertises the prefix 10.1.2/24 to spine1 in an EVPN Type-5 route.

Spine1 receives the EVPN Type-5 route, which contains the IP prefix 10.1.2.0/24, and even though it is an EVPN route, spine1 can validate the protocol next hop of that route using the inet.0 routing table. The MAC address of the IRB interface on spine1, which serves as the default gateway, is advertised to leaf1.

When host1 sends traffic to host2, host1 forwards the data to the MAC address of the default gateway on spine1's IRB interface. Spine1 decapsulates the VXLAN packet, performs a route lookup on the inner IP header, and looks up the next hop to host2's IP address. The next hop to the 10.1.2.0/24 network is the VXLAN tunnel between spine1 and spine2. The data packet is re-encapsulated in a new VXLAN header and forwarded to spine2. Spine2 decapsulates the VXLAN packet that is destined to its loopback address, analyzes the inner IP header, and finds host2's MAC/IP information in the local Layer 2 switching table. The local Layer 2 switching table indicates that the MAC address of host2 is reachable through the tunnel from spine2 to leaf2. Spine2 encapsulates the packet and forwards it through the spine2-to-leaf2 VXLAN tunnel, and the packet arrives on leaf2. Leaf2 decapsulates the VXLAN header and forwards the original IP packet to host2.

www.juniper.net Data Center Inte rconnect • Chapter 8 - 23


Data Center Fabric with EVPN and VXLAN

Virtual Machine Traffic Optimization - VMTO (1 of 3)
■ What happens when a VM moves to a new VLAN?
• Hosts are unaware that they have been migrated and do not flush their ARP
table
• Subsequent inter-VLAN traffic is sent to the old default gateway

[Slide diagram: two pods (POD1 and POD2) connected across a DCI transport network, with VTEP tunnels, VXLAN tunnels, and traffic flows shown; host2 resides in POD2]
C> 2019 Juniper Networks, Inc All Rights Reserved

VMTO (1 of 3)
The MAC addresses of hosts are learned and advertised using BGP route advertisements. When a BGP peer receives an advertisement for a MAC address, it stores the MAC address in the EVPN switching table. In a data center that uses virtual machines, virtual machines can frequently move from one VLAN to another during a migration. When a host is migrated to a new VLAN, the host may not be aware that it has been moved. It's common for the host to not flush its ARP table, and to retain the ARP table that existed prior to the migration. Because the host has stored the MAC address information for the default gateway it was using prior to the migration, subsequent traffic that must be forwarded to another VLAN can be sent to the old gateway MAC address. Virtual Machine Traffic Optimization is used to assist in this scenario.

Chapter 8 - 24 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Virtual Machine Traffic Optimization - VMTO (2 of 3)
■ Without VMTO
• Traffic continues to be forwarded to the old L2 gateway (VRF route table not always updated)
• Remote Data Center Edge continues sending traffic toward the original
gateway
[Slide diagram: without VMTO, traffic from POD1 continues across the DCI transport toward the original gateway even after the VM has migrated to host2 in POD2 (legend: VTEP tunnel, VXLAN tunnel, traffic flow)]
C> 2019 Juniper Networks, Inc All Rights Reserved

VM Migration Without VMTO


In the example, a virtual machine migrates from one host to another. The new physical host exists in different VLANs than the original host. However, the virtual machine retains the ARP entries for the original default gateway. Additionally, the DCI edge device in the remote data center has been forwarding traffic along the originally learned path, toward the original remote gateway. This is because the Layer 3 gateways at the edge of the data center do not store individual MAC addresses or /32 host addresses.

When the virtual machine moves to a new host in a different VLAN, the gateways at the edge of the data centers are not updated as to where the new virtual machine resides. Because of this, traffic is forwarded along the original path until a gateway is reached that has been updated with the new virtual machine's location. The original gateway must re-route the traffic destined to the virtual machine across the VXLAN tunnels in the new Layer 3 domain. This process of bouncing traffic around is referred to as tromboning.

www.juniper.net Data Center Interconnect • Chapter 8 - 25


Data Center Fabric with EVPN and VXLAN

Virtual Machine Traffic Optimization - VMTO (3 of 3)
■ With VMTO
• VNE (Virtual Network Edge) device stores host (/32) routes in VRF table
• Installs host routes in addition to subnet prefixes - Remote DC edge devices
receive updated host location
[Slide diagram: with VMTO, the VM's migration to host2 in POD2 is reflected in updated host routes, so traffic from POD1 is forwarded directly toward the new location (legend: VXLAN gateway, VXLAN tunnel, traffic flow)]
Enable VMTO in the routing instance:
[edit routing-instances instance-name protocols evpn remote-ip-host-routes]
iQ 2019 Juniper Networks, Inc All Rights Reserved

VM Migration With VMTO


With VMTO enabled, the gateways at the edge of the data centers store the /32 host MAC/IP routes, so when a MAC/IP binding changes at the edge of the data center, the gateways are updated and forwarding paths are optimized.

Chapter 8-26 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Enable VMTO
■ Enable VMTO in the routing instance
• [edit routing-instances instance-name protocols evpn remote-ip-host-routes]
[Slide diagram: VM migration between POD1 and POD2 across the DCI transport, with VXLAN gateways, VXLAN tunnels, and traffic flows shown]

C> 2019 Juniper Networks, Inc All Rights Reserved

Enabling VMTO
VMTO is enabled within a customer routing instance at the [edit routing-instances instance-name protocols evpn remote-ip-host-routes] hierarchy. Routing policy can be implemented to modify or regulate which routes are imported into the forwarding table on the edge device.
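
For reference, the following is a minimal sketch of how VMTO might be enabled in a tenant VRF. The instance name customer1 is an assumption used only for illustration; the remote-ip-host-routes statement itself comes from the hierarchy shown above:

[edit]
lab@spine1# set routing-instances customer1 protocols evpn remote-ip-host-routes

After a commit, remote /32 host routes learned through EVPN are installed in the customer1 route table, which is what allows the edge gateways to follow a migrated host.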

www.juniper.net Data Center Interconnect • Chapter 8 - 27


Data Center Fabric with EVPN and VXLAN

Agenda: Data Center Interconnect

➔ DCI Example

C> 2019 Juniper Networks, Inc All Rights Reserved Jun1Per


NElWOPKS
2s

DCI Example
The slide highlights the topic we d iscuss next.

Chapter 8-28 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Example Topology Layer 2 Stretch (1 of 2)

• Underlay Topology
• Underlay is two IP Fabrics based on EBGP routing
• EBGP routing to provider to advertise loopbacks between data centers
• DCI is an MPLS Layer 3 VPN
• Goal: Ensure that all loopbacks are reachable between sites
[Slide diagram: DC1 (host1 - leaf1 in AS65003 - spine1 in AS65001) and DC2 (spine2 in AS65002 - leaf3 in AS65005 - host2) connected by an MPLS provider WAN (AS65100)]
host1 IP: 10.1.1.1/24    host2 IP: 10.1.1.2/24
host2 MAC: 52:54:00:2c:4b:a2
Loopback addresses: spine1: 192.168.100.1, spine2: 192.168.100.2, leaf1: 192.168.100.11, leaf3: 192.168.100.13
C> 2019 Juniper Networks, Inc All Rights Reserved

Underlay
Data centers DC1 and DC2 are connected across an MPLS provider WAN . The MPLS provider WAN appears to the spine1
and spine2 devices as an IP routing handoff. Spine1 and Spine2 peer with the provider using EBGP.

Each data center is configured as a spine-leaf EVPN-VXLAN network. Host1 and host2 are in the same subnet, so a Layer 2 stretch is required across the DCI connection.
The underlay in each data center site is configured using EBGP. The goal of the underlay is to ensure that all loopback
addresses are reachable between sites.
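
The export policy applied to the EBGP sessions is not shown on this slide; the following is a minimal sketch of what the export-directs policy referenced later in the spine1 and leaf1 configurations might look like. The term name and route-filter range are illustrative assumptions:

policy-options {
    policy-statement export-directs {
        term loopbacks {
            from {
                protocol direct;
                route-filter 192.168.100.0/24 orlonger;    <-- match the loopback range used in both sites
            }
            then accept;
        }
    }
}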

www.juniper.net Data Center Interconnect • Chapter 8 - 29


Data Center Fabric with EVPN and VXLAN

Example Topology Layer 2 Stretch (2 of 2)

• Overlay Topology
• All leaf switches act as VXLAN Layer 2 gateways
• EVPN Signaling is based on MP-IBGP routing
[Slide diagram: EBGP unicast underlay sessions within each data center (AS65001/AS65003 in DC1, AS65002/AS65005 in DC2); IBGP EVPN overlay sessions in AS65000 run from leaf1 to spine1 (RR) and from leaf3 to spine2 (RR), and between spine1 and spine2 across the MPLS provider WAN (AS65100); VXLAN tunnels are drawn between the VTEPs end to end]
C> 2019 Juniper Networks, Inc All Rights Reserved

Overlay
The overlay topology consists of MP-IBGP routing. The leaf device at each data center peers with the spine device in that data center. The spine devices peer with each other across the WAN connection. Because both sites belong to the same autonomous system, the spine devices are configured as route reflectors so that routes received on spine1 from spine2 are readvertised to leaf1, and routes received on spine2 from spine1 are readvertised to leaf3.

The signaling is EVPN signaling, which means the IBGP peering sessions exchange Type-2 MAC routes. When leaf1 receives the MAC advertisements that originate from leaf3, leaf1 installs a VXLAN tunnel to leaf3, and vice versa. In this example, leaf1 and leaf3 are directly connected with a VXLAN tunnel and can forward Layer 2 traffic end to end.

Chapter 8-30 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Spine1 Configuration - VRF

{master:0}[edit]
lab@spine1# show routing-instances
customer1 {
    instance-type vrf;
    interface irb.10;                           <-- irb interface in VLAN 10
    interface lo0.10;                           <-- loopback interface for customer VRF
    route-distinguisher 192.168.100.1:5001;
    vrf-target target:65000:1;                  <-- vrf-target community associated with customer 1
    routing-options {
        auto-export {                           <-- Ensure interface routes are in VRF table for forwarding next hops
            family inet {
                unicast;
            }
        }
    }
}

C> 2019 Juniper Networks, Inc All Rights Reserved

Spine1 Configuration
The configuration on the spine is performed in a virtual routing and forwarding instance, or VRF. With this configuration type, each customer within the data center can be configured with an independent VRF, and routes from different customers can be maintained separately.

An IRB interface and a loopback interface are placed in the virtual routing and forwarding instance. The route distinguisher is unique to this device and this customer. A VRF target community is also defined that is unique to this customer. The routing-options auto-export parameter ensures that the routes for the physical interfaces connected to the spine1 device are included in the routing instance. If this configuration parameter were not present, only the IRB and loopback interfaces defined in the routing instance would be present, and there would be no physical interfaces to forward traffic.

A similar configuration exists on router spine2, with a matching VRF target community.
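
For readers who prefer set syntax, the VRF shown above corresponds roughly to the following commands (a sketch derived directly from the configuration on the slide):

set routing-instances customer1 instance-type vrf
set routing-instances customer1 interface irb.10
set routing-instances customer1 interface lo0.10
set routing-instances customer1 route-distinguisher 192.168.100.1:5001
set routing-instances customer1 vrf-target target:65000:1
set routing-instances customer1 routing-options auto-export family inet unicast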

www.juniper.net Data Center Interconnect • Chapter 8 - 31


Data Center Fabric with EVPN and VXLAN

Spine1 Configuration - Protocols and Switch-Options
{master:0}[edit]
lab@spine1# show protocols
bgp {
    group fabric {                              <-- Underlay BGP peering
        type external;
        export export-directs;
        local-as 65001;
        multipath {
            multiple-as;
        }
        neighbor 172.16.1.6 {
            peer-as 65003;
        }
    }
    group evpn {                                <-- Overlay BGP peering
        type internal;
        local-address 192.168.100.1;
        family evpn {
            signaling;
        }
        cluster 1.1.1.1;                        <-- Causes spine1 to relay routes received from spine2 to leaf1
        local-as 65000;
        neighbor 192.168.100.11;
        neighbor 192.168.100.2;
    }
    group provider {                            <-- Service provider peering
        type external;
        export export-directs;
        peer-as 65100;
        local-as 65001;
        neighbor 172.16.1.30;
    }
}
evpn {                                          <-- EVPN
    encapsulation vxlan;
    extended-vni-list all;
}

{master:0}[edit]
lab@spine1# show switch-options
vtep-source-interface lo0.0;
route-distinguisher 192.168.100.1:1;
vrf-target {
    target:65000:1;
    auto;
}

Note: Spine2 will have a similar configuration

C> 2019 Juniper Networks, Inc All Rights Reserved

Spine1 Protocols and Switch-Options


The example shows the protocols configuration for the spine1 device. There are three BGP peering groups. The first group shown is for the underlay, which connects spine1 to leaf1. There is an export policy called export-directs, which redistributes the loopback interface into the BGP routing protocol.

The BGP peering group evpn is for the overlay network. The EVPN peering group peers to the loopback address of the leaf1 device and to the loopback address of the spine2 device. The default BGP route advertising rules do not forward routes learned from internal BGP peers to other internal BGP peers. The cluster statement in the group evpn allows spine1 to re-advertise routes received from an internal BGP peer to members of the cluster, which are the neighbors configured within that group. If this statement were not present, routes received from spine2 would not be forwarded to leaf1, and routes received from leaf1 would not be forwarded to spine2.

The peering group called provider is an external peering group, which peers with the service provider. This peering group also advertises the local loopback address to the external peer, and relays the loopback address received from leaf1 to the service provider.

The protocols evpn section sets the encapsulation to VXLAN, and is configured to support all VNIs.

The switch-options hierarchy defines the VTEP source interface to be the loopback interface, defines the route distinguisher for this device, and defines the global VRF target community that is used for all Type-1 EVPN routes. For all Type-2 and Type-3 EVPN routes, the device is configured to automatically generate VRF target values. The auto-generated VRF target communities are based on the base VRF target value and the VNI associated with the VLAN ID. In this way, the auto-generated VRF target communities are synchronized across multiple devices as long as the VNI/VLAN mappings and the base VRF target are the same.
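
As a quick reference, the switch-options stanza above corresponds to the following set commands (a sketch derived from the slide):

set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.168.100.1:1
set switch-options vrf-target target:65000:1
set switch-options vrf-target auto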

Chapter 8-32 • Data Center Interconnect www.j uniper.net


Advanced Data Center Switching

Leaf1 Configuration
{master:0}[edit]
lab@leaf1# show protocols
bgp {
    group overlay {                             <-- Overlay BGP peering
        type internal;
        local-address 192.168.100.11;
        family evpn {
            signaling;
        }
        neighbor 192.168.100.1;
    }
    group fabric {                              <-- Underlay BGP peering
        type external;
        export export-directs;
        local-as 65003;
        multipath {
            multiple-as;
        }
        neighbor 172.16.1.5 {
            peer-as 65001;
        }
    }
}
evpn {                                          <-- EVPN
    encapsulation vxlan;
    extended-vni-list all;
}

{master:0}[edit]
lab@leaf1# show vlans
default {
    vlan-id 1;
}
v10 {
    vlan-id 10;
    vxlan {
        vni 5010;
    }
}

{master:0}[edit]
lab@leaf1# show switch-options
vtep-source-interface lo0.0;
route-distinguisher 192.168.100.11:1;
vrf-target {
    target:65000:1;
    auto;
}

Note: Leaf2 will have a similar configuration

C> 2019 Juniper Networks, Inc All Rights Reserved

Leaf1 Configuration
The leaf1 BGP configuration has two groups. The fabric group connects leaf1 to spine1 and advertises the leaf1 loopback address to spine1. The overlay group peers with spine1 only. Routes from other devices within the data centers are relayed to leaf1 through the spine1 peering session.

The EVPN configuration sets the encapsulation to VXLAN, and includes all VNIs.

A single VLAN is configured on leaf1. VLAN v10 has VLAN ID 10, and is assigned to VNI 5010. Although not shown in the output, the single interface that connects to host1 is an ethernet-switching interface assigned to VLAN v10, as sketched below.
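
The host-facing interface configuration is not shown on the slide; a minimal sketch of what it might look like follows. The interface name xe-0/0/5 is an assumption used purely for illustration:

{master:0}[edit]
lab@leaf1# show interfaces xe-0/0/5
unit 0 {
    family ethernet-switching {
        interface-mode access;
        vlan {
            members v10;
        }
    }
}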

The switch-options configuration hierarchy defines the VTEP source interface as lo0.0, defines the route distinguisher for leaf1, and defines the route target.

www.juniper.net Data Center Interconnect • Chapter 8 - 33


Data Center Fabric with EVPN and VXLAN

Spine1 - Verify BGP Sessions


{master:0}
lab@spine1> show bgp summary
Threading mode: BGP I/O
Groups: 3 Peers: 4 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.evpn.0            12         12          0          0          0          0
inet.0                 4          4          0          0          0          0
Peer              AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.1.6       65003      519        517       0       1     3:50:19 Establ     [leaf1 fabric]
  inet.0: 1/1/1/0
172.16.1.30      65100      531        519       0       0     3:51:24 Establ     [Provider]
  inet.0: 3/3/3/0
192.168.100.2    65000       86         62       0       0       23:54 Establ     [spine2 overlay]
  _default_evpn_.evpn.0: 0/0/0/0
  bgp.evpn.0: 8/8/8/0                        <-- Receiving EVPN routes from spine2
  default-switch.evpn.0: 8/8/8/0
192.168.100.11   65000       60         75       0       0       23:58 Establ     [leaf1 overlay]
  _default_evpn_.evpn.0: 0/0/0/0
  bgp.evpn.0: 4/4/4/0                        <-- Receiving EVPN routes from leaf1
  default-switch.evpn.0: 4/4/4/0

iQ 2019 Juniper Networks, Inc All Rights Reserved

Spine1 BGP Sessions


The show bgp summary command on spine1 can be used to verify that the BGP sessions to all of the peers are up. In the example, we can see that all of the BGP sessions are up. We can also see that EVPN routes are being received from spine2 and from leaf1.
Chapter 8-34 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Spine1 - Verify BGP Underlay Routes (Loopbacks)


{master : 0}
lab@spinel> show route 192 . 168 . 100 . 0/24

inet . O: 13 destinations, 13 routes (13 acti ve , 0 holddown, 0 h i dden)


+=Active Route, - = Last Active , *=Both

192 . 168.100 . 1/32 * [Direct/OJ 03 : 20 : 15


> via loO . O
192 .1 68 . 100 . 2/32 *[BGP/170) 03 : 19 : 58 , localpref 100
AS path : 65100 65002 I , validation-state : unverified
> to 172.16 . 1 . 30 via xe-0/0/0 . 0
192.168.100.11/32  *[BGP/170] 03:18:53, localpref 100
                      AS path: 65003 I, validation-state: unverified
                    > to 172.16.1.6 via xe-0/0/1.0
192.168.100.13/32  *[BGP/170] 03:19:00, localpref 100     <-- loopback from the remote AS (across the DCI); note the AS path of the route
                      AS path: 65100 65002 65005 I, validation-state: unverified
                    > to 172.16.1.30 via xe-0/0/0.0

: vxlan . inet . 0 : 11 destinations , 11 routes (1 1 active , 0 holddown , 0 hidden)


+ =Active Route, - = Last Active , *= Both

192 . 168.100.1/32 *[Direct/OJ 03 : 20 : 12


> via loO . O
192 . 168 . 100 . 2/32 * [Static/1] 00 : 15 : 03 , metric2 0
> to 172.16 . 1 . 30 vi a xe- 0/0/0 . 0 loopbacks used for EVPN route recursive lookups
192 . 168.100.11/32 *[Static/1) 00 : 16 : 28 , metri c2 0
> to 172 . 16 . 1 . 6 via xe- 0/0/1 . 0

C> 2019 Juniper Networks, Inc All Rights Reserved

Spine1 Underlay Routes


You can verify that the loopback addresses have been received on spine1 with the show route 192.168.100.0/24 command. As you can see in the output, the loopback addresses of all of the devices in both data centers are present in the routing table. Also note that the :vxlan.inet.0 route table contains the loopback addresses of the two devices in DC1 and the spine2 device that is connected over the interconnect.

www.j uniper.net Data Center Interconnect • Chapter 8 - 35


Data Center Fabric with EVPN and VXLAN

Leaf1 - Verify Remote Loopback Routes


{master : 0}
lab@ l eafl> show route 192 . 168 . 100 . 0/24

inet. O: 13 destinations, 13 routes (13 acti ve , 0 holddown, 0 hidden)


+=Active Route, - = Last Active , *=Both

192 . 168.100 . 1/32 * [BGP/170 ] 03 : 51 : 01 , localpref 100


AS path : 65001 I, vali dation- state : unverifi ed
> to 172 . 16 . 1 . 5 via xe- 0/0/1 . 0
192 . 168 . 100 . 2/32 *[BGP/ 170 ] 03 : 51 : 01 , localpref 100
AS path : 65001 65100 65002 I , validation- state : unverified
> to 172.16 . 1 . 5 via xe-0/0/1 . 0
192 . 168.100.11/32 *[Direct/OJ 03 : 51 : 14
> via loO . O loopbacks from remote AS (across DCI)
192 . 168 . 100 . 13/32 *(BGP/17 0 ] 03 : 51 : 01 , localpref 100 (Note the AS Path of the route)
AS path : 65001 65100 65002 65005 I , validation-state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0

: vxlan . inet . 0 : 12 destinations , 12 routes (12 active , 0 holddown , 0 hidden)


-
+ =Active Route, - = Last Active , *= Both

192 .1 68 . 100 . 1/32 *[Static/lJ 00 : 24 : 41 , metri c2 0


> to 172 . 16 . 1 . 5 via xe- 0/0/1 . 0
192 . 168 . 100 . 2/32 * [Static/1] 00 : 24 : 37 , metric2 0
> to 172.16 . 1 . 5 via xe-0/0/1 . 0
192 . 168 . 100 . 11/32 *(Direct/OJ 03 : 51 : 11
> via lo0.0          <-- loopbacks from all devices present in :vxlan.inet.0 (for EVPN route validation)
192 . 168 . 100 . 13/32 * [Static/1] 00 : 03 : 44 , metric2 0
> to 172.16 . 1 . 5 via xe-0/0/1 . 0 ...

C> 2019 Juniper Networks, Inc All Rights Reserved

Leaf1 Route Verification


From the leaf1 device, the show route command can be used to verify that the loopback addresses of all of the remote devices are present. Also note that the loopback addresses of all of the remote devices are present in the :vxlan.inet.0 routing table. When leaf1 receives an EVPN route advertisement from a remote device, the :vxlan.inet.0 routing table is used to validate the protocol next hop advertised with the EVPN route.

Chapter 8-36 • Data Center Interconnect www.j uniper.net


Advanced Data Center Switching

Leaf1 - Verify BGP Overlay Tunnels (1 of 2)


{master : O}
lab@l eafl> show interfaces vtep
[snip)
Logical interface vtep . 32768 (Index 553) (SNMP ifindex 543)
Flags : Up SNMP-Traps Ox4000 Encapsulation : ENET2
Ethernet segment val ue : 00 : 00 : 00 : 00 : 00 : 00 : 00 : 00 : 00 : 00 , Mode : single- homed , Mul ti-homed status : Forwarding
VXLAN Endpoi nt Type : Source , VXLAN Endpoint Address : 192 . 168 . 100 . 11 , 12 Routing Instance : default-switch, 13 Routing
Instance : default

Input packets · 0
Output packets : 0

Logical interface vtep . 32769 (Index 569) (SNMP ifindex 544)


Flags : Up SNMP-Traps Encapsulation : ENET2
VXLAN Endpoi nt Type : Remote , VXLAN Endpoint Address : 192 . 168 . 100 . 1 , 12 Routing Instance : default-switch , 13 Routing
Instance : default
I VTEP tunnels to spine1
Input packets : 6
Output packets : 9
Protocol eth-switch, MTU : Unlimited
Flags : Trunk-Mode
[continued on next slide ... )

iQ 2019 Juniper Networks. Inc All Rights Reserved


Jun1Per
N(lWOPKS
37

Leaf1 VTEP Tunnels, Part 1


The show interfaces vtep command displays the VTEP tunnel interfaces on the device. In the example shown, there is a VTEP tunnel from leaf1 to spine1.
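
A complementary way to list the remote VTEPs that EVPN signaling has established (not shown on the slide) is the following command; output columns vary by platform and release:

lab@leaf1> show ethernet-switching vxlan-tunnel-end-point remote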

www.juniper.net Data Center Interconnect • Chapter 8 - 37


Data Center Fabric with EVPN and VXLAN

Leaf1 - Verify BGP Overlay Tunnels (2 of 2)


[ continued from previous slide... ]

Logical interface vtep . 32770 (Index 570) (SNMP ifindex 547)


Flags : Up SNMP- Traps Encapsulation : ENET2
VXLAN Endpoint Type : Remote , VX1AN Endpoint Address : 192 . 168 . 100 . 2 , 12 Routing Instance : default-switch , 13 Routing
Instance : default I
VTEP tunnel to spine2 I
Input packets : 5
Output packets : 9
Protocol eth-switch, MTU : Unlimited
Flags : Trunk-Mode

Logical interface vtep . 32771 (Index 571) (SNMP ifindex 560)


Flags : Up SNMP-Traps Encapsulation : ENET2
VXLAN Endpoint Type : Remote , VXLAN Endpoint Address : 192 . 168 . 100.13 , 12 Routing Instance : default-switch, 13 Routing
Instance : default I
VTEP tunnel to leaf3 I
Input packets : 21
Output packets : 0
Protocol eth-switch, MTU : Unlimited
Flags : Trunk-Mode

EVPN routes w ill have VTEP tunnels to validate protocol next-hop to remote devices

C> 2019 Juniper Networks, Inc All Rights Reserved

Leaf1 VTEP Tunnels, Part 2


The example shows the rest of the show interfaces vtep command output. This output indicates that there is a VTEP tunnel to spine2 and a VTEP tunnel to leaf3. This is also an indication that EVPN MAC routes have been received from both of those devices.

Chapter 8-38 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Leaf1 - Verify VXLAN Routes


[{master : 0}
lab@l eafl> show r out e table : vxla n.inet . 0

: vxlan . inet . O: 12 destinations , 12 routes (12 active , 0 holddown , 0 hidden)


+ = Active Route, - = Last Active , * = Both

[snip]

172 . 25 . 11 . 3/32 *[ Local/O J 00 : 43 : 28


Local via emO . O
192 . 1 68 . 100 . 1/32 * [St atic/1] 00 : 30 : 32 , metri c2 0
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
192 . 168 . 100 . 2/32 * [Static/1) 00 : 30 : 28 , metric2 0
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0 Routes to remote loopbacks in :vxlan.inet.O table
192 . 1 68 . 100 . 11/32 * [Direct/O J 03 : 57 : 02
> v i a loO . O
192 . 168 . 100 . 13/32 * [Static/1) 00 : 09 : 35 , metri c2 0
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0

iQ 2019 J uniper Networks , Inc All Rights Reserved

Leaf1 VXLAN Routes


The :vxlan.inet.O route table contains the routes to remote devices that can be used for recursive lookup of EVPN routes.

www.juniper.net Data Center Interconnect • Chapter 8 - 39


Data Center Fabric with EVPN and VXLAN

Spine1 - Verify Customer VRF Routes


(master : O}
lab@spinel> show route table customerl . inet.0

customerl . inet . O: 4 destinations , 4 routes (4 active , 0 holddown , 0 hidden)


+ =Active Route, - = Last Active , *=Both

10 . 1 . 1 . 0/24 * [ Direct/ oJ oo : 34 : 04 Customer subnet present in customer routing table


> via irb . 10
10 . 1 . 1 . 100/32 *[ Local/OJ 00 : 34 : 04
Local via irb . 10
10 . 1 . 1 . 254/32 *[Local/OJ 00 : 34 : 04
Local via irb . 10
192 . 168 . 100 . 110/32 *[Direct/OJ 04 : 01 : 48
> via lo0 . 10 > to 172 . 16 . 1 . 5 via xe-0/0/1 . 0

C> 2019 Juniper Networks, Inc All Rights Reserved

Spine1 Customer VRF


The customer on spine1 was configured within a VRF. To view the route table for the customer VRF, the show route table customer1.inet.0 command can be used. Note that the subnet associated with VLAN 10, 10.1.1.0/24, is present in the routing table because the spine1 device has an IRB interface configured within that subnet. The customer1 loopback address, the unique IRB address, and a virtual gateway address are also present in the routing table.
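
The IRB addressing that produces the 10.1.1.100 and 10.1.1.254 entries above is not shown on the slide; the following is a sketch of what it might look like, with the anycast virtual-gateway-address being an assumption consistent with the addresses in the table:

{master:0}[edit]
lab@spine1# show interfaces irb
unit 10 {
    family inet {
        address 10.1.1.100/24 {
            virtual-gateway-address 10.1.1.254;
        }
    }
}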

Chapter 8-40 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Leaf1 - Verify Customer Remote MAC Routes


(BGP EVPN Table)
(master : O}
lab@l eafl> show route table bgp . evpn . O

bgp . evpn . O: 14 destinations , 14 routes (14 active , 0 holddown , 0 hidden)


+=Active Route, - = Last Active , *=Both

[snip]

2 : 192 . 168 . 100 . 13 : 1 : : 5010 : : 52: 54: 00: 2c: 4b: a2/304 .MAC/ IP host2 MAC address present in bgp.evpn.0 table
*[BGP/170] 00 : 14 : 39, localpref 100 , from 192 . 168 . 100 . 1
AS path : I, vali dation-state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0

[snip)

Q 2019 Juniper Networks, Inc All Rights Reserved

Leaf1 BGP.EVPN.0 Table


EVPN routes that are advertised to leaf1, and that are accepted on leaf1, are stored in the bgp.evpn.0 route table. In the example, we can see a MAC/IP route listed. Note that the route distinguisher of 192.168.100.13:1 indicates that the route originated on leaf3. The MAC address associated with host2 is shown as part of the route that was advertised. This indicates that the MAC address of host2 is known on leaf1.
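
To check a single MAC address rather than scanning the whole table, one option (not shown on the slide) is the EVPN database view; the command below assumes host2's MAC address from the example topology:

lab@leaf1> show evpn database mac-address 52:54:00:2c:4b:a2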

www.juniper.net Data Center Interconnect • Chapter 8 - 41


Data Center Fabric with EVPN and VXLAN

Leaf1 - Verify Customer Remote MAC Switch Table
{master : O}
lab@l eafl> show route table default-switch . evpn . O

default- switch . evpn . O: 18 destinations , 18 routes (18 active, 0 holddown, 0 hidden)


+ =Active Route, - = Last Active , *=Both

[snip)

2 : 192 . 168 . 100 . 11 : 1 :: 5010 :: 52 : 54 : 00 : 5e : 88 : 6a/304 MAC/IP


*[EVPN/170] 04 : 06 : 10
Indirect
2 : 192 .1 68 . 100 . 11 : 1: : 5010 :: fe : 05 : 86 : 71:cb : 03/304 MAC/IP
*[EVPN/170) 04 : 06 : 10
Indirect
2 : 192 . 168 . 100 . 13 : 1: : 5010 : : 52: 54: oo : 2c: 4b : a2 /304 MAC/ IP host2 MAC address present in default EVPN switch table
*[BGP/170) 00 : 18 : 35 , localpref 100 , from 192 . 168 . 100 . 1
AS path : I, validation- state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0

[snip)

C> 2019 Juniper Networks, Inc All Rights Reserved

Leaf1 Switching Table


Use the show route table default-switch.evpn.0 command to list the EVPN routes that have been moved from the bgp.evpn.0 table to the local EVPN switching table. In the example, we can see that the MAC address associated with host2 has been moved to the local EVPN switching table.
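
The same MAC address can also be confirmed in the Layer 2 forwarding table, where it should resolve to the VTEP interface toward leaf3; this verification step is not shown on the slide:

lab@leaf1> show ethernet-switching table | match 2c:4b:a2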

Chapter 8-42 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Host2 - Verify Reachability to Host1



lab@host2 : ~$ ping 10 . 1 . 1 . 1
PING 10 . 1 . 1 . 1 (1 0 . 1 . 1 . 1 ) 56 (84 ) bytes o f data .
64 bytes f r om 10 .1. 1 . 1 : i' cmp_seq=l ttl=64 t i me=2597 ms
64 bytes from 10 . 1 . 1 . 1 : icmp_ seq=2 ttl=64 time=l582 ms
64 bytes from 10 . 1 . 1 . 1 : icmp_seq=3 ttl=64 time=559 ms
64 bytes from 10 . 1 . 1 . 1 : icmp_ seq=4 ttl =64 t i me=126 ms
64 bytes from 10 . 1 . 1 . 1 : i cmp_seq=5 ttl =64 t i me=128 ms
64 bytes from 10 . 1 . 1 . 1 : icmp_seq=6 ttl=64 t i me=124 ms
^C
- -- 10 . 1 . 1 . 1 ping statistics - --
6 packets transmitted, 6 rece i ved , 0% packet l oss , time 48ms
rtt min/avg/max/mdev = 124 . 353/852 . 858/2596 . 964/934 . 934 ms , pi pe


C> 2019 Juniper Networks, Inc All Rights Reserved

Verify Reachability
From the host2 device, the ping command can be used to verify that host1 is reachable across the data center
interconnect and across both data centers.

www.juniper.net Data Center Interconnect • Chapter 8 - 43


Data Center Fabric with EVPN and VXLAN

Leaf1 - Verify Customer Remote MAC Switch Table, Part 2
{master : O}
lab@l eafl> show route table default-switch . evpn . O

default- switch . evpn . O: 20 destinations , 20 routes (20 active , 0 holddown , 0 hidden)


+=Active Route, - = Last Active , *=Both

[snip]

2 : 192 . 168 . 100 . 13 : 1 : : 5010 : : 52 : 54 : 00 : 2c : 4b : a2/304 MAC/ IP host2 MAC address present in default EVPN switch table
*[BGP/170] 00 : 22 : 28, localpref 100 , from 192 . 168 . 100 . 1
AS path : I , vali dation-state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
[snip)

2 : 192 . 168 . 100 . 13 : 1 : : 5010 : : 52 : 54 : 00 : 2c: 4b : a2 : : 10 . 1 . 1 . 2/304 MAC/IP host2 MAC/IP address present in default EVPN switch table
*[BGP/170] 00:01:51, localpref 100, from 192.168.100.1
AS path : I , validation- state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
[snip]

C> 2019 Juniper Networks, Inc All Rights Reserved

Verify MAC/IP Route


Once traffic has passed between the two devices, the leaf devices learn the IP address associated with the MAC address and a new entry is included in the local switching table. At this point, the leaf1 device can perform proxy ARP for IP address 10.1.1.2 if it is configured to perform proxy ARP services.

Chapter 8-44 • Data Center Interconnect www.juniper.net


Advanced Data Cent er Switching

Example Topology Type-5 Routes (1 of 2)

• Underlay Topology
• Underlay is two IP Fabrics based on EBGP routing
• EBGP routing to provider to advertise loopbacks between data centers
• MP-IBGP routing leafs
• DCI is an MPLS Layer 3 VPN
[Slide diagram: DC1 (host1 - leaf1 in AS65003 - spine1 in AS65001) and DC2 (spine2 in AS65002 - leaf3 in AS65005 - host2) connected by an MPLS provider WAN (AS65100)]
host1 IP: 10.1.1.1/24    host2 IP: 10.1.2.1/24
host2 MAC: 52:54:00:2c:4b:a2
Loopback addresses: spine1: 192.168.100.1, spine2: 192.168.100.2, leaf1: 192.168.100.11, leaf3: 192.168.100.13

C> 2019 Juniper Networks, Inc All Rights Reserved

Type-5 Routes Underlay


In the next example, the hosts in data center DC1 and data center DC2 are configured with different IP subnets. Host1 is in subnet 10.1.1.0/24 and host2 is in subnet 10.1.2.0/24. There are two methods to bridge the subnet domains. The first method is to configure a VXLAN Layer 3 gateway that bridges the two VNIs. This method requires that all of the VNIs that will be bridged be configured on the Layer 3 gateway. Instead of configuring a Layer 3 gateway, you can configure the use of Type-5 EVPN routes, which advertise a prefix in an EVPN route that can be used to forward traffic to a Layer 3 destination. This configuration allows a more traditional method to route between subnets in different VNIs.

The underlay topology in this example uses the same underlay as in the previous example.

www.juniper.net Data Center Interconnect • Chapter 8 - 4 5


Data Center Fabric with EVPN and VXLAN

Example Topology Type-5 Routes (2 of 2)

• Overlay Topology
• Only Type-5 route advertised between data centers (no MAC Type-2 routes)
• EVPN Signaling is based on MP-IBGP routing
[Slide diagram: same topology as the previous example; IBGP EVPN overlay sessions in AS65000 run from leaf1 to spine1 (RR) and from leaf3 to spine2 (RR), and between spine1 and spine2 across the MPLS provider WAN (AS65100); spine1 advertises a Type-5 route for 10.1.1.0/24 and spine2 advertises a Type-5 route for 10.1.2.0/24; VXLAN tunnels run leaf1-spine1, spine1-spine2, and spine2-leaf3]
C> 2019 Juniper Networks, Inc All Rights Reserved

Type-5 Route Overlay


The difference between this example and the last example is that in this example, no Type-2 routes are advertised between data centers. Additionally, a new VNI is created across the DCI. This creates a virtual broadcast domain between the DCI devices, which can be used to route traffic to an IP prefix destination using the inet.0 routing table.

A VXLAN tunnel is created between leaf1 and spine1. Since host2 is in a different broadcast domain than host1, host1 is required to send traffic destined to host2 to the default gateway, which is spine1. The VTEP tunnel between leaf1 and spine1 is used to forward that traffic. The default gateway address is configured on the IRB interface on spine1.

Once traffic reaches the IRB interface on spine1, a route lookup takes place and determines that the destination network 10.1.2.0/24 is reachable through the new VTEP tunnel that crosses the DCI. Router spine2 is listed as the next hop for the Type-5 route prefix that advertises reachability to the destination.

The traffic is forwarded across the VXLAN tunnel and arrives at spine2, at which point the VXLAN header used to cross the DCI link is removed, and a new VXLAN header is placed on the packet in order to forward it to leaf3.

Since the devices in DC1 are not interested in the VNI associated with the DC2 host network, the Type-2 MAC routes associated with the DC2 VNIs are not accepted by the DC1 routers, and are not present in the DC1 devices. Instead, a route prefix is present on spine1 to route traffic toward DC2.

Chapter 8-46 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Spine1 Configuration - VRF

{master:0}[edit]
lab@spine1# show routing-instances
customer1 {
    instance-type vrf;
    interface irb.10;                           <-- irb interface in VLAN 10
    interface lo0.10;                           <-- loopback interface for customer VRF
    route-distinguisher 192.168.100.1:5001;
    vrf-target target:65000:5100;               <-- vrf-target community associated with customer 1
    routing-options {
        auto-export {                           <-- Ensure interface routes are in VRF table for forwarding next hops
            family inet {
                unicast;
            }
        }
    }
    protocols {
        evpn {
            ip-prefix-routes {                  <-- Configure Type-5 IP prefix route advertisement
                advertise direct-nexthop;       <-- Advertise routes with the advertise direct-nexthop parameter
                encapsulation vxlan;
                vni 10010;                      <-- Independent VNI for the spine-to-spine connection; acts as an
            }                                       Ethernet segment between spine devices for routing and forwarding
        }
    }
}

C> 2019 Juniper Networks, Inc All Rights Reserved

Spine1 Configuration
The only changes that affect the behavior of the DCI are made on the spine devices. The leaf devices have no changes.

The example shows the configuration of the customer1 VRF on spine1. Note the addition of the [protocols evpn] hierarchy within the routing instance. The configuration shown enables the Type-5 IP prefix route advertisement functions. The VNI listed in the EVPN section of the configuration is the VNI associated with the VXLAN tunnel that crosses the DCI.
www.juniper.net Data Center Interconnect • Chapter 8 - 47


Data Center Fabric with EVPN and VXLAN

Leaf1 Route Verification - :vxlan.inet.O


(master : 0}
lab@leafl> show route table : vxlan.inet.O

: vxlan . inet . O: 11 destinations , 11 routes (11 active ,


0 holddown, 0 h i dden)
+=Active Route, - = Last Active , *=Both

169 . 254 . 0 . 0/24 *[Direct/OJ 00 : 15 : 59


> via eml . O
169 . 254.0 . 2/32 *[Local/OJ 00 : 15 : 59
Local via eml . O
172 . 16 . 1 . 4/30 *[Direct/OJ 00 : 15 : 59
> via xe-0/0/1 . 0
172 . 16 . 1 . 6/32 *[Local/OJ 00 : 15 : 59
Local via xe- 0/0/1 . 0
172 . 16 . 1 . 16/30 *[Direct/OJ 00 : 15 : 59
> via xe-0/0/2 . 0
172 . 16 . 1 . 18/32 *[Local/OJ 00 : 15 : 59
Local via xe- 0/0/2 . 0
172 . 25 . 11 . 0/24 *[ Direct/OJ 00 : 15 : 59
> via emO . O
172 . 25 . 11 . 3/32 *[Local/OJ 00 : 15 : 59
Local via emO . O
192 . 168 . 100 . 1/32 * [Static/1 J 00 : 15 : 18 , metric2 O-
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
192 . 168 . 100 . 2/32 * [Static/lJ 00 : 15 : 16 , metric2 0 l'-◄..,_---1 Spine1 , spine2, and leaf1 in :vxlan.inet.O route table - no leaf3 route!
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
192 . 168.100.11/32 *[Direct/OJ 00 : 15 : 59
> via lo0 . 0 -
Q 2019 Juniper Networks, Inc All Rights Reserved

Leaf1 Route Verification


On the leaf1 device, the remote leaf devices are no longer present in the :vxlan.inet.0 table.

Chapter 8-48 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Leaf1 Route Verification - default-switch.evpn.O


lab@leafl> show route table default-switch . evpn . O

default-switch . evpn . O: 11 destinations, 11 routes (11 active , 0 holddown , 0 hidden)


+=Active Route, - = Last Active, *=Both

[snip]

2 : 192 . 168 . 100 . 1 : 1 :: 5010 : : 00 : 00 : 5e : OO : Ol : 01 : : 10 . 1 . 1 . 254/304 MAC/IP


-
*[BGP/170] 00:17:47, localpref 100, from 192.168.100.1        <-- Default gateway for VLAN 10 on spine1
AS path : I , validation-state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0 -
2 : 192 . 168 . 100 . 1 : 1 :: 5010 : : 02 : 05 : 86 : 71 : bd : OO : : 10 . 1 . 1 . 100/304 MAC/IP
*[BGP/170] 00 : 17 : 47 , localpref 100 , from 192 . 168 . 100 . 1
AS path : I, validation-state : unverified
> to 172 . 16 . 1 . 5 via xe- 0/0/1 . 0
2 : 192 . 168 . 100 . ll : l :: 5010 :: 52 : 54 : 00 : 5e : 88 : 6a :: 10 . 1 . l . 1/304 MAC/IP
* [EVPN/170] 00 : 18 : 30
Indirect
3 : 192 . 168.100 . 1 : 1 : : 5010 : : 192.168 . 100 . 1/248 IM
*[BGP/170] 00 : 17 : 47 , localpref 100 , from 192 . 168 . 100 . 1
AS path : I, validation- state : unverified
> to 172 . 16 . 1 . 5 via xe-0/0/1 . 0
3 : 192 . 168 . 100 . 11 : 1 : : 5010 :: 192 . 168 . 100 . ll/248 IM
*(EVPN/170] 00 : 18 : 29
Indirect

Note: Without Type-5 routes, switching VLANs requires an L3-Gateway configuration


Q 2019 Juniper Networks. Inc All Rights Reserved

Leaf1 Route Verification - Default Gateway


On leaf1, the default gateway address configured on the IRB interface of spine1 is listed in the EVPN switching table.

www.j uniper.net Data Center Interconnect • Chapter 8 - 49


Data Center Fabric with EVPN and VXLAN

Spine1 VLAN Configuration


{master:0}[edit]
lab@spine1# show vlans
default {
    vlan-id 1;
}
v10 {
    vlan-id 10;
    l3-interface irb.10;          <-- Only one VLAN configured. It's NOT an L3 gateway!
    vxlan {
        vni 5010;
    }
}

Note: L3 gateway devices have IRB interfaces configured in the VLANs that will be bridged,
and bridge traffic between IRB interfaces. With Type-5 routes, remote VLANs, and IRBs
associated with remote VLANs, do not have to be configured on the border router.

Q 2019 Juniper Networks, Inc All Rights Reserved

Spine1 VLAN Configuration


On the spine1 device, the VLAN associated with host1 is configured, along with the VNI associated with host1's broadcast domain. The IRB interface placed in the VLAN serves as the default gateway for all hosts within the subnet. If spine1 were configured as a VXLAN Layer 3 gateway, the VLANs and VNIs associated with the remote VLANs would also have to be configured here.

Chapter 8-50 • Data Center Interconnect www.j uniper.net


Advanced Data Center Switching

Spine1 Remote Route Verification


{master : 0}
lab@spinel> show route 10 . 1/16

cust omer l . inet . O: 5 desti nati ons , 5 routes (5 active , 0 hol ddown , 0 hidden)
+=Acti ve Route , - = Last Active , * = Both
DC1 prefix on leaf1 (local IRB interface is configured as part of
10 . 1 . 1 . 0/24 * [Direct/OJ 00 : 24 : 59 . the 10.1.1.0/24 network and is the default gateway for VLAN 10)
> via irb . 10
10 . 1 . 1 . 100/32 * [Local/OJ 00 : 24 : 59
Local via irb . 10
10 . 1 . 1 . 254/32 *[Local/O J 00 : 24 : 59
Local via irb . 10
10 . 1 . 2 . 0/24 * [EVPN/170] 00 : 24 : 15 DC2 prefix on leaf3 learned through EVPN route
> to 172 . 16 . 1 . 30 via xe-0/0/0 . 0

C> 2019 Juniper Networks, Inc All Rights Reserved

Spine1 Remote Route Verification


The show route command on spine1 shows the routes in the routing table. In the example, we can see that the customer1.inet.0 routing table has a route to the local 10.1.1.0/24 subnet, which is attached to interface irb.10. We can also see a prefix of 10.1.2.0/24, which was received through a BGP EVPN connection. The next hop for the prefix is the physical interface that connects to the service provider.

Remember, traditional EVPN route next hops must be validated by a tunnel to a remote VTEP. Type-5 routes can be validated using standard IPv4 routes.

www.juniper.net Data Center Interconnect • Chapter 8 - 51


Data Center Fabric with EVPN and VXLAN

Spine1 Route Verification - bgp.evpn.O


(master : 0}
lab@spinel> show route table bgp . evpn . O

bgp . evpn . O: 21 destinations, 21 routes (21 active , 0 holddown, 0 hidden)


+=Acti ve Route, - = Last Active , *=Both

[snip)

5:192.168.100.1:5001::0::10.1.1.0::24/248
               *[EVPN/170] 00:27:31
                   Indirect
5:192.168.100.2:5001::0::10.1.2.0::24/248        <-- Type-5 routes present for both subnets (one local, one remote)
               *[BGP/170] 00:26:47, localpref 100, from 192.168.100.2
                   AS path: I, validation-state: unverified
               > to 172.16.1.30 via xe-0/0/0.0

iQ 2019 Juniper Networks, Inc All Rights Reserved

Spine1 bgp.evpn.O Table


The show route table bgp.evpn.0 command shows the Type-5 routes in the bgp.evpn.0 route table. One route is for the prefix that is reachable by the local device; the other route was received from device 192.168.100.2 and contains the prefix 10.1.2.0/24.

Chapter 8-52 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Spine1 - Type-5 Route: A Closer Look


5 : 192 .1 68 . 100 . 2 : 5001 :: 0 : : 10 . 1.2.0 : : 24/248 (1 entry, 1 announced)
*BGP Preference : 170/- 101
+---~ Advertised prefix

Route Distinguisher : 192 . 168 . 100 . 2 : 5001


Next hop type: Indirect , Next hop index : 0
Address : Oxdaf9870
Next- hop reference count : 9
Source : 192 . 168 . 100 . 2
Protocol next hop : 192 . 168. 100 . 2+----------- Protocol next-hop spine2 (DC2)
Indirect next hop : Ox2 no-forward INH Session ID: OxO
State : <Active Int Ext>
Local AS : 65000 Peer AS : 65000
Age : 30 : 11 Metric2: 0
Validati on State : unveri fied
Task: BGP 65000 . 192 .1 68 . 100 . 2
Announcement bits (1) : 0- BGP_ RT_ Background
AS path : I
Communities : target : 65000 : 5100 encapsul ation : vxlan(Ox8) router- mac : 02 : 05 : 86 : 71 : 7b : OO
Import Accepted - - - - - - - - - - - - - - - --
Route Label : 10010 +------------~ VNI configured for Type-5 route advertisements in
customer VRF
Overlay gateway address : 0 . 0 . 0 . 0
ESI 00 : 00 : 00 : 00 : 00 : 00 : 00 : 00 : 00 : 00
Localpref : 100
Router ID : 192 . 168 . 100 . 2
Secondary Tables : customerl . evpn . O

iQ 2019 Juniper Networks, Inc All Rights Reserved

Type-5 Route Details


The show route extensive command allows you to see more detail regarding the Type-5 route. The protocol next hop is the BGP peer that advertised the route. The VNI is listed in the Route Label field, and is the VNI value that is assigned from within the customer1 VRF. This value refers to the VNI, or VXLAN broadcast domain, that connects the DCI devices across the provider network.

www.juniper.net Data Center Interconnect • Chapter 8 - 53


Data Center Fabric with EVPN and VXLAN

Summary

■ In this content, we:


• Defined the term Data Center Interconnect
• Described the DCI options when using an EVPN-VXLAN
• Configured Data Center Interconnect using EVPN-VXLAN

C> 2019 Juniper Networks, Inc All Rights Reserved

We Discussed:
• The term Data Center Interconnect;

• The DCI options when using an EVPN-VXLAN; and

• Configuring Data Center Interconnect using EVPN-VXLAN.

Chapter 8-54 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Review Questions

1. What are the transport network options for a DCI?


2. When using VXLAN with EVPN signaling, what DCI options are
possible when the transport network is a public IP network?
3. When using Layer 2 stretch (Type-2 routes), which routing table is
used for protocol-next-hop validation?
4. When using Type-5 routes for DCI, which routing table is used for
the protocol-next-hop validation?

C> 2019 Juniper Networks, Inc All Rights Reserved

Review Questions
1.

2.

3.

4.

www.j uniper.net Data Center Interconnect • Chapter 8 - 55


Data Center Fabric with EVPN and VXLAN

Lab: Data Center Interconnect

■ Configure a Data Center Interconnect.

C> 2019 Juniper Networks, Inc All Rights Reserved

Lab: Data Center Interconnect


The slide provides the objective for this lab.

Chapter 8-56 • Data Center Interconnect www.juniper.net


Advanced Data Center Switching

Answers to Review Questions


1.
Transport options for a DCI include point-to-point links, such as dark fiber and private lines; IP networks, which can be customer owned or service provider owned; and MPLS networks, which can be customer owned or service provider owned.

2.
When the transport network is a public IP network, VXLAN tunnels can be configured across the DCI for Layer 2 stretch, or Type-5 routes can be used to advertise prefixes across the DCI connection.

3.
Type-2 EVPN routes must be validated using the :vxlan.inet.0 table, which contains the routes associated with the VXLAN tunnels.

4.
Type-5 EVPN routes can be validated using standard IPv4 routes, whether those routes are present in the default inet.0 route table or in a customer VRF route table.

www.juniper.net Data Center Interconnect • Chapter 8 - 57


Juniper Networks Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 9: Advanced Data Center Architectures

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Describe an advanced data center deployment scenario

Q 2019 Juniper Networks, Inc All Rights Reserved

We Will Discuss:
• An advanced data center deployment scenario.

Chapter 9-2 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Basic Data Center Architectures

➔ Requirements Overview
■ Base Design

C> 2019 Juniper Networks, Inc All Rights Reserved

Requirements Overview
This slide lists the topics that we will cover. We will discuss the highlighted topic first.

www .juniper.net Advanced Data Center Architectures • Chapter 9 - 3


Data Center Fabric with EVPN and VXLAN

Organization Requirements

■ Data center design requirements


• Multi-site data center design with DCI
• VLANs
• Application flow between hosts within same broadcast domain
• Application flow between hosts within different broadcast domains
• Reachability
• Applications must be able to access external networking resources (Internet, corporate
WAN, etc.)
• Security
• Routed traffic to remote destinations must pass through a security appliance
- This includes traffic routed between data centers over the DCI
• Scalability
• The data center must be able to scale in a modular fashion
C> 2019 Juniper Networks, Inc All Rights Reserved

Data Center Design Requirements


Planning is key to implementing successful data center environments. The initial design should take into account several factors. Some factors to consider are:

• How many data center sites will be deployed? How will the data centers be connected?

• VLANs - How many VLANs will be required within the domain? How will traffic flow within the same VLAN? How will traffic flow between hosts in different VLANs?

• Reachability - Do applications require Layer 2 communication? Do the applications require Layer 3 communication? What external networks (Internet, corporate WAN, etc.) will applications be required to communicate with?

• Security - What traffic will be required to pass through a security domain? How will that security domain be implemented? Is an edge firewall sufficient and scalable? Will a security domain that contains several security devices be required?

• Scalability - How will the initial design be impacted when the data center scales?

Chapter 9-4 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Proposed Solution
■ Solution outline
• Two data center sites: DC1 and DC2
• Spine-Leaf topology
• Multi-homed servers (EVPN-LAG)
• VXLAN with EVPN control plane for Layer 2 domains within VRFs on super
spine nodes for traffic flow control
• Layer 2 gateways at the leaf nodes
• Layer 3 gateways at the spine nodes
• Dedicated service block within DC1
• Traffic to external destinations and to remote DC must pass through service block
• Service device within DC2
• EVPN controlled VXLAN architecture with Type 5 routes
• Route reflectors for BGP overlay route distribution
C> 2019 Juniper Networks, Inc All Rights Reserved

Solution Outline
The design for this example consists of two data center sites: DC1 and DC2. The physical topology will be a spine-leaf topology, and all servers will be multihomed using EVPN-LAG. A VXLAN with an EVPN control plane will be used for the Layer 2 domains. Layer 2 gateways will be configured on the leaf nodes, and Layer 3 gateways will be configured on the spine nodes.

To maintain traffic separation and to improve control over traffic flows, the Layer 3 gateways and super-spine nodes will implement VRFs. VRFs allow traffic to pass through the device in one VRF, and then be forwarded back to the device to be forwarded along a different path at a later time, such as after the traffic has been processed through a service block, or when traffic passing through the same super-spine device must be directed over a DCI link or an externally facing link. The forwarding path that the traffic takes depends on the VRF in which the traffic arrives.

A dedicated service block within data center DC1 will service all traffic that is destined to external destinations, and all inter-VLAN traffic. A single service device will be deployed within data center DC2, through which all Layer 3 routed traffic that arrives at or leaves DC2 will have to pass.

Data Center Interconnect (DCI) traffic will be routed using Type 5 EVPN routes. Within the data center, route reflectors will be used for BGP overlay route distribution. Layer 2 DCI traffic, or Layer 2 stretch traffic, does not pass through the service devices, and is bridged to remote hosts within the same broadcast domain.

www.juniper.net Advanced Data Center Architectures • Chapter 9-5


Data Center Fabric with EVPN and VXLAN

Basic Data Center Architectures

■Requirements Overview
➔ Base Design

C> 2019 Juniper Networks, Inc All Rights Reserved

Base Design
The slide highlights the topic we will discuss next.

Chapter 9-6 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - Physical Layout

• Physical topology
• Spine-Leaf topology with Super Spine
• Dual-homed servers (EVPN LAG to each server)
• Dual Internet Gateway
• Service Block

(Diagram: DC1 built from pods of spine and leaf nodes beneath a super-spine layer, with a service device cluster in the DC1
service block and a core fabric/WAN connection toward DC2, which is a spine-leaf pod)

C> 2019 Juniper Networks, Inc All Rights Reserved

Physical Layout
The physical topology in DC1 is based on a five-tier fabric architecture. The spine and leaf nodes are grouped together in
pods. Each pod connects to a super-spine layer. The servers within each pod are dual homed to leaf devices.

The super-spine consists of dual-connected Internet gateway devices. The DCI is used to connect the super-spine devices in
DC1 with the spine devices in DC2. A service block in DC1 services all traffic destined to external destinations, all inter-VLAN
traffic, and all routed traffic to data center DC2.

The DC2 design is a standard spine leaf topology.

www .j uniper.net Advanced Data Center Architectures • Chapter 9 - 7


Data Center Fabric with EVPN and VXLAN

Sample Topology - Protocols

■ Underlay
• EBGP on all point-to-point links (underlay connectivity)
• EBGP also provides overlay connectivity within a pod
• IBGP between spine devices and RR (overlay route distribution between pods)
• BGP between Service Block routers and Service Block service devices

(Diagram: protocol placement across the topology - EBGP on the point-to-point fabric links, IBGP between the DC1 spine
layer and the super-spine route reflectors, BGP and IS-IS toward the service device cluster in the DC1 service block, and
ESIs on the multihomed server links)
C> 2019 Juniper Networks, Inc All Rights Reserved

Underlay Network
The underlay network in both data centers will be configured with EBGP sessions on all point-to-point links within the data
center. IBGP peering sessions between the spine devices and the super-spine devices, which act as route reflectors, provide
route redistribution between pods. The spine devices in each pod are configured as route reflectors for route redistribution
within each pod.

BGP runs between the service block routers and the service block devices to distribute routing information between the
VXLAN networks and the service device clusters.
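To illustrate the building blocks, a leaf node's BGP configuration in this type of design often resembles the following sketch.
The AS numbers, loopback address, neighbor addresses, and export policy name are placeholder values, not values taken
from this topology:

protocols {
    bgp {
        group underlay {                         # EBGP on the point-to-point fabric links
            type external;
            export export-loopback;              # advertise the loopback (VTEP) address into the underlay
            local-as 65101;
            neighbor 172.16.1.1 peer-as 65001;
            neighbor 172.16.1.5 peer-as 65002;
        }
        group overlay {                          # IBGP toward the route reflectors, carrying EVPN routes
            type internal;
            local-address 192.168.100.11;
            family evpn {
                signaling;
            }
            neighbor 192.168.100.1;
            neighbor 192.168.100.2;
        }
    }
}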

Chapter 9-8 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Sample Topology - Logical Overlay

• VXLAN tunnels built between VTEPs across domain
• L2 VXLAN tunnels for all L2 traffic
• Full mesh of L2 VXLAN tunnels between all Leafs and all Spines within a pod (L3 Gateway devices)
• Full mesh between all leafs within a DC
• Full mesh between all leafs across all DCs
• VXLAN tunnels dynamically built through EVPN signaling

(Diagram: intra-VLAN tunnels between data centers, between pods, and within the same pod; not all VXLAN tunnels shown)

C> 2019 Juniper Networks, Inc All Rights Reserved 9

Logical Overlay
VXLAN tunnels will be built between all VTEPs across the entire domain. A full mesh of Layer 2 VXLAN tunnels exists between
all leaf nodes and all spine nodes within a pod. A full mesh of VXLAN tunnels exists between all leaf nodes within a DC. A full
mesh of VXLAN tunnels exists between all leaf nodes across both data centers. All of these VXLAN tunnels will be
dynamically created through EVPN signaling. These VXLAN tunnels create a Layer 2 stretch from every leaf device to every
other leaf device.

Note that the diagram has been simplified by not showing all VXLAN tunnels. It shows a representation of the types of
tunnels that will be created.
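On Junos leaf devices, the tunnels and MAC/IP entries signaled through EVPN can typically be verified with commands such
as the following (output varies by platform and release):

lab@leaf1> show ethernet-switching vxlan-tunnel-end-point remote
lab@leaf1> show evpn database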

www .j uniper.net Advanced Data Center Architectures • Chapter 9 - 9


Data Center Fabric with EVPN and VXLAN

L3 VXLAN Tunnels - Distributed Gateway

• L3 VXLAN Gateway used for:
• Inter-VRF traffic within the same Pod through the Service Block
• Inter-VRF traffic between Pods through the Service Block
• Inter-VRF traffic between Data Centers through the Service Block
• EVPN Type-5 Routes
• Service Block router is used as next-hop for all Type-5 routes
• Traffic in DC2 does not use a Service Block; inter-VRF traffic in DC2 must traverse the DCI and pass through the Service Block in DC1

(Diagram: DC1 and DC2 VXLAN tunnels terminating on the DC1 service block)

C> 2019 Juniper Networks, Inc All Rights Reserved 10

Distributed Gateway
The Layer 3 VXLAN gateway exists on the service block routers. All inter-VRF traffic within the same pod must pass through
the service block. All inter-VRF traffic between pods must pass through the service block, and all inter-VRF traffic between
data centers must also pass through the service block.

EVPN Type-5 routes are used for Layer 3 destinations outside of the local broadcast domains. This includes Layer 3
destinations within each data center, and not just external destinations. The next hop for all Type-5 routes is the interface on
the service block.
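For reference, a tenant VRF that originates EVPN Type-5 routes is commonly defined along the following lines. The instance
name, route distinguisher, route target, and Layer 3 VNI shown here are placeholder values, not values from this design:

routing-instances {
    VRF-1 {
        instance-type vrf;
        interface irb.10;                        # tenant IRB placed in this VRF
        route-distinguisher 192.168.100.11:10;
        vrf-target target:65000:10;
        protocols {
            evpn {
                ip-prefix-routes {
                    advertise direct-nexthop;    # originate Type-5 routes for prefixes in this VRF
                    vni 9001;                    # Layer 3 VNI used for the Type-5 traffic
                }
            }
        }
    }
}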

Traffic in DC2 does not use a local service block, because DC2 does not have one. Instead, DC2 traffic that requires servicing
must traverse the DCI link and pass through the service block in DC1.

Chapter 9-10 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Traffic Forwarding - Intra-VRF Between Pods

• Traffic load balanced at each fork in the forwarding tree
• Intra-VRF traffic does not pass through the Service Block

(Diagram: intra-VRF traffic between DC1 Pod1 and Pod2 load shared across the spine and super-spine uplinks)

C> 2019 J uniper Networks , Inc All Rights Reserved 11

Intra-VRF Between Pods

Traffic that remains within the same broadcast domain, but must be forwarded to a different pod, is load shared across all
uplinks to the spine layer, and then all uplinks to the super-spine layer. This traffic is forwarded in Layer 2 tunnels, or VXLAN
tunnels, between VTEPs. Traffic that does not leave a broadcast domain does not transit the service block.

www .j uniper.net Advanced Data Center Architectures • Chapter 9 - 11


Data Center Fabric with EVPN and VXLAN

Traffic Forwarding - Intra-VRF Between DCs

■ Layer 2 Stretch
• Intra-VRF traffic between DCs does not pass through the Service Block

(Diagram: Layer 2 stretched traffic forwarded between DC1 and DC2 over the DCI, bypassing the DC1 service block)

C> 2019 Juniper Networks, Inc All Rights Reserved 12

Layer 2 Stretch
In the same manner, intra-VRF traffic between data centers does not pass through a service block, as it is considered trusted.

Chapter 9 - 1 2 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Traffic Forwarding - Inter-VRF Within a Pod

■ Inter-VRF within a Pod
• Inter-VRF traffic within a Pod passes through the Service Block

(Diagram: inter-VRF traffic within DC1 Pod1 forwarded up through the fabric to the DC1 service block and back)

Q 2019 Juniper Networks, Inc All Rights Reserved 13

Inter-VRF Within a Pod

Inter-VRF traffic always passes through a service block because it changes virtual routing and forwarding domains. This type
of traffic can be inter-department traffic, or inter-tenant traffic. Either way, if traffic is routed between VRFs, it is processed
through the service block.

www .juniper.net Advanced Data Center Architectures • Chapter 9 - 13


Data Center Fabric with EVPN and VXLAN

Traffic Forwarding - Inter-VRF to Remote DC

■ Inter-VRF between DCs
• Inter-VRF traffic between DCs passes through the Service Block

(Diagram: inter-VRF traffic from DC1 processed by the DC1 service block and forwarded over the DCI to DC2)

C> 2019 Juniper Networks, Inc All Rights Reserved 14

Inter-VRF Between Data Centers

Inter-VRF traffic between data centers also passes through the service block. Once the traffic from DC1 enters the service
block, it is processed and returned to the super-spine nodes within a VRF that is dedicated to inter-VRF/inter-DC traffic.

Chapter 9-1 4 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Inter-VRF to Remote DC Option 2

■ Inter-VRF traffic between DCs
• Inter-VRF traffic between DCs passes through the Service Block
• Inter-VRF traffic passes through secondary security device at remote DC
• Firewall directly connected to spine device
• Traffic forwarded to local VRF on the firewall

(Diagram: traffic arriving over the DCI in DC2 is handed to a firewall attached to the DC2 spine layer)

C> 2019 Juniper Networks, Inc All Rights Reserved 15

Inter-VRF to Remote DC Option 2

An alternative method of handling inter-VRF/inter-DC traffic is to place a security device in DC2. The spine devices in DC2
forward all traffic that comes across the DCI to the security device, where it is processed and routed to the appropriate VRF
or VLAN within DC2.

www .juniper.net Advanced Data Center Architectures • Chapter 9 - 15


Data Center Fabric with EVPN and VXLAN

Traffic Forwarding - Internet Traffic

■ Internet traffic
• Passes through the Service Block
• Uses Internet VRF (not DCI VRF) at core routers

(Diagram: Internet-bound traffic from DC1 forwarded through the service block and out of the Internet VRF on the
super-spine toward the WAN)

Q 2019 Juniper Networks, Inc All Rights Reserved 16

Internet Traffic
Traffic destined to the Internet or to outside destinations is forwarded to the Layer 3 gateway in the service block. The
Layer 3 gateway forwards the traffic to the service block, which processes the traffic and forwards it back to the Layer 3
gateways in a different VRF or VLAN. From the service block router, the traffic is forwarded to an Internet-specific VRF on the
super-spine, which has a connection to external destinations, such as the Internet. Return traffic follows the reverse path
and comes in through the Internet VRF on the super-spine, and is forwarded to the service block to be processed before it
reenters the data center domain.

Chapter 9-16 • Advanced Data Center Architectures www.juniper.net


Data Center Fabric with EVPN and VXLAN

Summary

■ In this content, we:


• Described an advanced data center deployment scenario

© 2019 Juniper Networks, Inc All Rights Reserved

We Discussed:
• An advanced data cente r deployment scenario.

www .juniper. net Advanced Data Center Architectures • Chapter 9 -17


Data Center Fabric with EVPN and VXLAN

Review Questions

1. What method is used to maintain traffic separation within the


advanced data center example?
2. What is a benefit of deploying a service block instead of a dedicated
security device?
3. What is a reason for deploying a five-stage topology in this
scenario?

© 2019 Juniper Networks, Inc All Rights Reserved

Review Questions
1.

2.

3.

Chapter 9-18 • Advanced Data Center Architectures www.j uniper.net


Data Center Fabric with EVPN and VXLAN

Answers to Review Questions


1. Using VRFs allows the separation of traffic between tenants and security zones within a data center, and allows
better management of traffic flows.

2. Deploying a service block instead of a dedicated service appliance allows the service block to scale. A
service block can be scaled by adding new devices and services beyond the gateway device, which is
transparent to the rest of the network.

3. A five-stage topology allows the data center to expand horizontally without impacting each individual pod. New
pods can be added as needed without affecting the other pods, and with minimal impact on the super-spine
uplinks and downlinks.

www.juniper.net Advanced Data Center Architectures • Chapter 9 - 19


un1Pe[
NETWORKS
Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 10: EVPN Multicast

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Describe the multicast extensions to EVPN
• Configure EVPN Multicast

© 2019 Juniper Networks, Inc All Rights Reserved

We Will Discuss:
• The multicast extensions to EVPN; and

• Configuring EVPN multicast.

Chapter 10-2 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

Agenda: EVPN Multicast

➔ EVPN Multicast Routing


■ EVPN Multicast Configuration

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Multicast Routing


The slide lists the topics we discuss. We will discuss the highlighted topic first.

www .juniper.net EVPN Multicast • Chapter 10 - 3


Data Center Fabric with EVPN and VXLAN

EVPN Route Types


■ EVPN is signaled using MP-BGP and has eight route types
• Type 1 - Auto-Discovery (AD) Route: multipath and mass withdraw (RFC 7432)
• Type 2 - MAC/IP Route: MAC/IP advertisement (RFC 7432)
• Type 3 - Multicast Route: BUM flooding (RFC 7432)
• Type 4 - Ethernet Segment Route: ES discovery and Designated Forwarder (DF) election (RFC 7432)
• Type 5 - IP Prefix Route: IP route advertisement (draft-rabadan-l2vpn-evpn-prefix-advertisement)
• Type 6 - Selective Multicast Ethernet Tag Route: enables efficient core MCAST forwarding (draft-ietf-bess-evpn-igmp-mld-proxy)
• Type 7 - IGMP-Join Synch: synchronizes multihomed peers when an IGMP Join is received (draft-ietf-bess-evpn-igmp-mld-proxy)
• Type 8 - IGMP-Leave Synch: synchronizes multihomed peers when an IGMP Leave is received (draft-ietf-bess-evpn-igmp-mld-proxy)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Route Types


EVPN is signaled through multiprotocol BGP. Several route types have been defined to perform different tasks within the
EVPN environment. The route types that correspond to multicast traffic operations are Type-6, Type-7, and Type-8 routes.
These routes have specific roles within a multicast environment that is running in an EVPN-VXLAN network.
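On Junos devices, the EVPN routes received over the overlay can be inspected in the bgp.evpn.0 table, where each prefix
begins with its route type number. For example, the following commands (output not shown) list all EVPN routes and, on
releases that support SMET routes, just the Type-6 entries:

lab@leaf1> show route table bgp.evpn.0
lab@leaf1> show route table bgp.evpn.0 | match "^6:"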

Chapter 10-4 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

Traditional Multicast
■ Traditional Multicast Process
• IGMP at edge
• Used to register multicast sources
• Used by receivers to request multicast feeds from specific or general multicast feeds (S,G
or *,G)
• Queries used by Multicast Designated Router (DR) for each broadcast segment (edge)
- Used to signal when a receiver is no longer interested in a multicast feed (Group
Leave)
- Used to verify interested receivers on a broadcast segment (Group Query)
• PIM used to select active forwarding path through core (routed) network
• Shared tree through Rendezvous Point (RP)
• Shortest Path Tree between receiver and source
• Join and Prune messages to initiate or terminate multicast feeds

C> 2019 Juniper Networks, Inc All Rights Reserved

Traditional Multicast
Traditional multicast networks involve several components: the source devices that send multicast traffic into the network,
the receiver devices that are interested in receiving the multicast traffic, and the network devices in between the
source and receiver.

At the edge of the multicast domain, the Internet Group Management Protocol (IGMP) is used to register multicast sources
and to register multicast group requests by hosts on the network.
One device connected to a LAN segment is elected as a designated router for the broadcast segment. Its role is to signal to
the multicast network when a receiver is no longer interested in a multicast feed, and to verify or register interested receivers
on the broadcast domain for which it is responsible.

The Protocol Independent Multicast protocol, or PIM, is used to select the active forwarding path through the Layer 3, or
routed, network. It can do this in one of several ways.

One of the most common methods is to forward all source traffic to a central point on a shared tree. With a shared tree,
a centralized device is selected to receive all source traffic, and to which all join messages for multicast feeds, where a
specific source address for the feed is unknown, are initially sent. This device is called a Rendezvous Point (RP). The RP is
the central point at which all multicast sources register and through which any receiver can join a multicast tree.

A Shortest Path Tree between a receiver and a source refers to a direct routed path from a source to a receiver. In order to
create a shortest path tree, a receiver must know the specific source IP address, and request traffic from that specific
source in an IGMP join. When a DR receives an IGMP join for a specific source, group (S,G) combination, a multicast
forwarding tree is established along the shortest path between the source DR and the receiver DR.

To manage multicast flows throughout the network, join and prune messages are used to initiate or terminate multicast
feeds.

www .juniper.net EVPN Multicast • Chapter 10- 5


Data Center Fabric with EVPN and VXLAN

EVPN Multicast
■ EVPN with VXLAN tunnels traffic
• IGMP could be tunneled across IP overlay
• Inefficient (places DR at remote locations from source or receiver)
• Increases BUM traffic in core
• Inefficient multicast stream replication
• Multicast functions are separate from EVPN Control Plane
■ EVPN route types for Multicast
• Type-6: Selective Multicast Route to signal IGMP Joins to remote VTEPS
• Type-7: IGMP join sync for multi-homed sites (Designated Forwarders)
• Type-8: IGMP leave sync for multi-homed sites (Designated Forwarders)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Multicast
Multicast traffic is broadcast within a broadcast domain. Normally, a broadcast domain is contained within a single switching
domain, and multicast traffic is only forwarded to remote receivers across a routed network. With this design, the multicast
broadcast packets are contained within the broadcast domain that terminates at the designated router.

With an EVPN-VXLAN, a broadcast domain is not limited to a single location. The EVPN-VXLAN domain can extend across
multiple leaf devices, multiple spine devices, and multiple data centers. The members of the VNIs in the remote locations
are part of a single broadcast domain. Because of this, when a multicast source sends traffic, it is forwarded throughout the
entire broadcast domain.

There are a couple of ways to manage the multicast traffic in an EVPN-VXLAN. The first would be to tunnel IGMP across the IP
overlay. This is inefficient because it places a designated router at remote locations that are not directly connected to the
source or receiver. This creates inefficient multicast stream replication, and multicast functions are separate from the EVPN
control plane.

To help address some of these inefficiencies, three new EVPN route types for multicast were developed. The Type-6 route, or
selective multicast Ethernet tag route, is used to signal IGMP joins to remote VTEPs.

The Type-7 route, or IGMP join sync route, is used in multihomed sites where a source or receiver is connected to a broadcast
domain that has multiple routers as exit points, and the join messages must be synchronized across all potential edge devices.

The Type-8 route, or IGMP leave sync route, is used in multihomed sites where sources or receivers are connected to a
broadcast domain that has multiple routers as exit points, and the leave messages must be synchronized across all potential
edge devices.

Chapter 10-6 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

EVPN Multicast Issues

■ Broadcast domain spans multiple devices
• Not all devices in the EVPN domain are interested in multicast traffic
• Multicast (BUM) traffic is sent to all leafs that service a VNI/VLAN
• Unwanted multicast traffic is flooded to all nodes that host the same broadcast domain

(Diagram: a multicast source (S,G) in DC1 Pod1 and a receiver in Pod2; the multicast traffic is flooded to every leaf in the
VRF/bridge domain across DC1, the DCI, and DC2)
C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Multicast Issues

Shown is an example of multicast traffic without EVPN multicast traffic optimization functions. The example shows two data
centers connected with a DCI. Data center DC1 has two pods, and data center DC2 has one pod. All three pods have devices
in the same VXLAN VNI, and therefore they are in the same broadcast domain. However, only one device in pod 2 is
interested in receiving multicast traffic from the source in pod 1.

With traditional multicast, traffic is sent to all leaf devices that service the VNI/VLAN because they all reside in the same
broadcast domain. Traffic from the source in pod 1 arrives at the receiver in pod 2, but the traffic is also forwarded to
all leaf devices that service the VNI throughout the entire VXLAN.
www .juniper.net EVPN Multicast • Chapter 10 - 7


Data Center Fabric with EVPN and VXLAN

EVPN Multicast Type-6 Solution

■ EVPN Type-6 Route (Selective Multicast Ethernet Tag)
• Leaf devices snoop IGMP joins
• Notify remote VTEPs which groups have local interest using MP-BGP route advertisement

(Diagram: the leaf with an interested (S,G) receiver in pod 2 advertises a Type-6 route toward the leaf connected to the
source in pod 1)

C> 2019 J uniper Networks , Inc All Rights Reserved

Type-6 Routes
The EVPN Type-6 route, or selective multicast Ethernet tag route, allows a VTEP device to advertise whether locally
connected receivers are interested in receiving multicast traffic. The route can indicate a specific source for the multicast
group, or just a multicast group address. This allows each VTEP to register which remote VTEPs are interested in
multicast feeds within the same broadcast domain.

Chapter 10- 8 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

EVPN Multicast Type-6 Result

■ Source leaf only forwards to interested leafs
• Reduced multicast flooding within horizontal broadcast domain
• Notify remote VTEPs which groups have local interest using MP-BGP route
• Leafs that don't support SMET routes still receive multicast feeds

(Diagram: multicast traffic from the source is forwarded only to the leaf with the interested (S,G) receiver and to leafs that
do not support SMET routes)

Cl 2019 Juniper Networks, Inc All Rights Reserved

Type-6 Route Result

The result of this process is that the VTEP connected to the source is aware of which remote VTEPs require the multicast
feeds. The source VTEP then regulates the multicast broadcast packets and sends them only to the interested remote
VTEPs. This eliminates unnecessary traffic replication and forwarding across the network.

Not all leaf devices will support the Type-6 route. If a leaf device does not support Type-6 routes, it cannot signal its
multicast group interest to remote VTEPs. Therefore, it receives all multicast feeds for the locally connected broadcast
domains.

www .juniper.net EVPN Multicast • Chapter 10- 9


Data Center Fabric with EVPN and VXLAN

EVPN Type-6 Route

■ Selective Multicast Ethernet Tag Route
• Generated in response to IGMP joins on the local broadcast segment
• Requires IGMP snooping
• Acts as an IGMP proxy
• Ingress leaf (VTEP) translates IGMP message to a Type-6 route
• Egress leaf (VTEP) translates Type-6 message back to an IGMP message

(Diagram: a receiver in Site 1 behind CE1 and Leaf1 (2.2.2.2/32) sends a group join for 239.1.1.1; Leaf1 advertises an
EVPN Type-6 route over the MP-IBGP (EVPN) session to Leaf2 (4.4.4.4/32), which connects to the source for 239.1.1.1 in
Site 2)
Q 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type-6 Route Advertising Process

In the example, a receiver in Site 1 sends a group join message for group 239.1.1.1. The Leaf1 device advertises to Leaf2
that it is interested in receiving traffic destined to group 239.1.1.1. When traffic arrives on Leaf2 from the source in Site 2,
the Leaf2 device forwards the traffic to remote VTEP Leaf1. The remote VTEP Leaf1 then forwards the traffic to the
locally configured broadcast domain, which contains the interested receiver.

This process requires the capability to perform IGMP snooping on the leaf devices. The leaf devices listen to IGMP
messages that enter the customer-facing interfaces and translate them into EVPN Type-6 routes.

Chapter 10-10 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

EVPN Type-6 Route Format


• Group Join Message
• Source Register includes MCAST source address

NLRI:
  Route Type: Selective Multicast Ethernet Tag (Type-6)
  Route Distinguisher (RD): RD of EVI on Leaf1
  Multicast Source Address: MCAST source address (if a source is specified, e.g., SSM)
  Multicast Group Address: group address (239.1.1.1)
  Originator Router ID: 2.2.2.2
  Flags (Optional): (Optional)
Next-hop: BGP next hop (2.2.2.2)
Extended Community: route target for EVI on Leaf1
Other attributes: Origin, AS-Path, Local-Pref, etc.

(Diagram: Leaf1 (2.2.2.2/32) receives the group join for 239.1.1.1 from the receiver in Site 1 and advertises the Type-6
route over the MP-IBGP session toward Leaf2 (4.4.4.4/32), which connects to the source in Site 2)

© 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type-6 Route Format

Similar to other EVPN route types, the Type-6 route contains information about the protocol next hop for traffic, the route
target information that is associated with the receiving device's broadcast domain, a route distinguisher to ensure that the
route remains unique within the domain, and information about the multicast source address and multicast group address.

www .juniper.net EVPN Multicast • Chapter 10- 11


Data Center Fabric with EVPN and VXLAN

EVPN Multicast Problem 1 - Multihomed Sites

■ IGMP Join messages go to the non-DR device
• In Active/Active environment, DR may not receive the IGMP Join
• Join process is never completed

(Diagram: CE1 in Site 1 is multihomed to Leaf1 and Leaf2, one of which is the DR; the join is hashed to the non-DR leaf,
and the source is in Site 2 behind Leaf3)

C> 2019 Juniper Networks, Inc All Rights Reserved

Multihomed Sites
When a device is multihomed to a VXLAN, another problem presents itself when dealing with multicast. Within a broadcast
domain, only one device at the edge of the domain can be elected as a designated router (DR). The designated router's role
is to manage the multicast process within the connected Ethernet segment. With an active/active environment, the devices
connected to the multiple leaf devices can forward their IGMP join messages to the non-designated router. If the
non-designated router receives an IGMP join message, the non-designated router does not process the join. The result is
that the designated router never receives the join message, and cannot initiate a multicast flow from the source.

Chapter 10-12 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

EVPN Multicast Problem 2 - Multihomed Sites

■ IGMP Join/Leave messages go to the DR for the segment
• In Active/Active environment, DR may not be the forwarding leaf
• IGMP Leave messages are sent to device that isn't forwarding traffic
• Device that is forwarding traffic isn't notified to stop forwarding Multicast traffic

(Diagram: the multicast flow is forwarded through one leaf while the leave message from CE1 in Site 1 is hashed to the
other leaf; the source is in Site 2 behind Leaf3)

© 2019 Juniper Networks, Inc All Rights Reserved

Multihomed IGMP Leave

In addition to problems with join messages, leave messages can become out of sync as well. In the example on the slide, if a
receiver in Site 1 must terminate a multicast feed, but the leave message is sent to the non-designated router, then the
designated router is unable to terminate the multicast feed. In this scenario, the multicast flow continues even when there
are no more receivers in the site that are interested in the multicast traffic.

www .j uniper.net EVPN Multicast • Chapter 10- 13


Data Center Fabric with EVPN and VXLAN

EVPN Type-7 Route

■ IGMP join sync message
• Synchronize join messages between multihomed leaf edge devices
• Without Type-7, designated forwarders are not synchronized, which could result in multicast traffic starvation

(Diagram: Leaf1 receives the IGMP join from CE1 in Site 1 and advertises a Type-7 route to the DR; the source is in Site 2
behind CE2 and Leaf3)

Q 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type-7
To remedy the join message issue, the EVPN Type-7 route was created. The EVPN Type-7 route acts as an IGMP
synchronization mechanism between leaf devices that are connected to the same Ethernet segment. With the EVPN Type-7
route, if an IGMP join message arrives on a leaf device, that leaf device automatically advertises a Type-7 route containing
the join message information to all other leaf nodes that are connected to the same Ethernet segment. In the example, the
IGMP join is sent to a leaf device that is not the designated router. Leaf1 creates an EVPN Type-7 route containing the
information in the join message and sends it to Leaf2. Leaf2, acting as the designated router, becomes aware that a device
in Site 1 is interested in multicast traffic, and begins the multicast tree building process.

Chapter 10-14 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

EVPN Type-7 Route Format


• IGMP Group Join Sync

NLRI:
  Route Type: IGMP Join Sync (Type-7)
  Ethernet Segment Identifier: 0x0:1:1:1:1:1:1:1:1:1
  Ethernet Tag ID: VXLAN VNID
  Multicast Source Address: Mcast source address
  Multicast Group Address: Mcast group address
  Originator Router Address: Leaf1 address
  Flags (Optional): (Optional)
Next-hop: BGP next hop (2.2.2.2)
Extended Community: route target for EVI on Leaf1
Other attributes: Origin, AS-Path, Local-Pref, etc.

(Diagram: the IGMP join from CE1 on ESI 0x0:1:1:1:1:1:1:1:1:1 in Site 1 arrives at Leaf1, which advertises the Type-7
route to the other leaf devices on the same segment; the source is in Site 2 behind CE2)
Q 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type-7 Route Format

Displayed in the example is the format of the EVPN Type-7 route. It contains information regarding the multicast group
address, the multicast source address (if specified), the VXLAN VNI, the Ethernet segment of the connected host, and other
information as shown.

www .j uniper.net EVPN Multicast • Chapter 10- 15


Data Center Fabric with EVPN and VXLAN

EVPN Type-8 Route

■ IGMP leave sync message
• Synchronize group leave messages between multihomed leaf devices
• Without Type-8, designated forwarders are not synchronized, which could result in multicast traffic forwarding after a group leave message

(Diagram: Leaf1 receives the IGMP leave from CE1 in Site 1 and advertises a Type-8 route to the DR; the source is in Site 2
behind CE2 and Leaf3)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type-8 Route

The EVPN Type-8 route performs a similar task to the EVPN Type-7 route. Its role is to indicate when a connected host wants
to leave a multicast feed. Without a Type-8 route, the designated forwarders are not synchronized, which could result in
multicast traffic being forwarded after a group leave message. In the example, the IGMP leave arrives at Leaf1, while the
designated router is Leaf2. The Leaf1 device generates a Type-8 route and sends it to Leaf2. This informs Leaf2 that the
receiver on the Ethernet segment is no longer interested in multicast traffic, and the designated router can begin the process
of terminating the multicast feed.

Chapter 10-16 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

EVPN Type-8 Route Format


• IGMP Group Leave Sync

NLRI:
  Route Type: IGMP Leave Sync (Type-8)
  Ethernet Segment Identifier: 0x0:1:1:1:1:1:1:1:1:1
  Ethernet Tag ID: VXLAN VNID
  Multicast Source Address: Mcast source address
  Multicast Group Address: Mcast group address
  Originator Router Address: Leaf1 address
  Flags (Optional): (Optional)
Next-hop: BGP next hop (2.2.2.2)
Extended Community: route target for EVI on Leaf1
Other attributes: Origin, AS-Path, Local-Pref, etc.

(Diagram: the IGMP leave from CE1 on ESI 0x0:1:1:1:1:1:1:1:1:1 in Site 1 arrives at Leaf1, which advertises the Type-8
route to the other leaf devices on the same segment; the source is in Site 2 behind CE2)
Q 2019 Juniper Networks, Inc All Rights Reserved

EVPN Type-8 Route

The slide shows the parameters that are included in the EVPN Type-8 route.

www .j uniper.net EVPN Multicast • Chapter 10- 17


Data Center Fabric with EVPN and VXLAN

EVPN Multicast Traffic Replication

• Only ingress replication is supported with EVPN controlled VXLAN

(Diagram: the source CE3 in Site 3 sends the multicast stream to Leaf3, which replicates the traffic toward Leaf1 and Leaf2
for the receivers in Site 1 and Site 2)

C> 2019 Juniper Networks , Inc All Rights Reserved

Multicast Traffic Replication

Because multicast traffic can be destined to multiple hosts, traffic feeds must be replicated at points along the multicast
tree where the forwarding path must divide. With EVPN multicast, multicast replication always occurs on the source VTEP.

Chapter 10-18 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

Multicast Hair-Pinning

• Centralized DR can cause hair-pinning across the VXLAN network
• On Leaf1: multicast traffic from the source in subnet 1 is:
• Flooded to receivers in subnet 1 following the EVPN BUM procedure
• NOT routed to receivers in subnet 2 because Leaf1's IRB2 is not the DR on subnet 2
• On Leaf3: multicast traffic received from the EVPN core in subnet 1 is:
• Flooded to receivers in subnet 1 following the EVPN BUM procedure
• Routed to subnet 2 and delivered to receivers in subnet 2 through IRB2 because Leaf3's IRB2 is the PIM DR for subnet 2

(Diagram: a source host/VM in subnet 1 and Receiver1 in subnet 2 attached to Leaf1, Receiver2 attached to Leaf2, and
Leaf3 acting as the DR for subnet 2; bridge domain 1/subnet 1, bridge domain 2/subnet 2, and the IRB interfaces are
marked)

C> 2019 Juniper Networks, Inc All Rights Reserved

Multicast Hair-Pinning
Within a multicast environment in a VXLAN, the designated router does not have to be configured on the device that directly
connects to the source or receiver. The designated router can be configured on an IRB interface on a remote device. When a
multicast feed is instantiated, the feed must always extend from the receiver or source to the designated router for that LAN
segment.

In the example, the designated router for subnet 2 is configured on Leaf3. Any device in subnet 2 must receive multicast
traffic from its designated router, which is on the other side of the EVPN. This process, where traffic is sent across the
network to an IRB, then switches VLANs, and returns to a receiver on the remote side of the network, is called hair-pinning.
As you can see in the diagram, this traffic pattern is inefficient.

www .juniper.net EVPN Multicast • Chapter 10- 19


Data Center Fabric with EVPN and VXLAN

Inter-Subnet Multicast - EVPN Distributed DR

• EVPN Distributed DR
• Forward multicast traffic to the EVPN IRB interfaces that have receivers, regardless of whether the device is DR for that subnet or not
• Pull multicast traffic from the RP/source if there are receivers on the EVPN IRB interface, regardless of whether the device is a DR for that subnet or not
• To prevent duplication, forward multicast traffic sent out of the EVPN IRB interface to local access interfaces only; e.g., DO NOT forward routed multicast traffic to the EVPN core

(Diagram: the same topology as the previous slide, with Leaf1 routing locally between subnet 1 and subnet 2 for its
attached receiver even though Leaf3 is the DR for subnet 2)

C> 2019 Juniper Networks, Inc All Rights Reserved

EVPN Distributed DR
With distributed DRs, multicast traffic is forwarded to the EVPN IRB interfaces that are connected to interested receivers,
regardless of whether or not the leaf device is a DR for the subnet. This allows a local device, such as Leaf1 in the example,
to act as a designated router even when it is not a designated router. It forwards traffic destined to Receiver1, which is in a
different subnet, without having to forward the traffic to remote Leaf3.

To prevent traffic duplication, multicast traffic is only sent out of the IRB interface to local access interfaces. The local
distributed DR cannot forward routed multicast traffic into the EVPN core. In other words, the IRB for subnet 2, which is
connected to Receiver1, cannot forward the multicast traffic to remote VTEPs. Only the original IRB that receives traffic from
the source can forward to remote VTEPs.

Chapter 10-20 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

Agenda: EVPN Multicast

➔ EVPN Multicast Configuration

© 2019 Juniper Networks, Inc All Rights Reserved

EVPN Multicast
The slide highlights the topic we will d iscuss next.

www .j uniper.net EVPN Multicast • Chapter 10- 21


Data Center Fabric with EVPN and VXLAN

Example Topology: EVPN Multicast

• EVPN Routing
• Synchronize joins from dual-homed Site 1 (CE1)
• Advertise source from Site 2 (CE2)

(Diagram: CE1 in Site 1 dual-homed to Leaf1 and Leaf2 on ESI 0x0:1:1:1:1:1:1:1:1:1; CE2 and the source in Site 2 are
behind Leaf3)

iQ 2019 Juniper Networks, Inc All Rights Reserved

Example Topology: EVPN Multicast

The example shows the topology used for the next section. Site 1 is connected to Leaf1 and Leaf2 using an active/active LAG
connection. The goal of the configuration is to synchronize IGMP join and IGMP leave messages between Leaf1 and Leaf2,
and to advertise the source from Site 2.

Chapter 10- 22 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

IGMP on IRB interfaces

• Device with IRB interface acts as proxy
• If IRB is on spine, spine will be point of proxy (CBR)
• If IRB is on leaf, leaf will be point of proxy (EBR)

{master:0}[edit]
lab@leaf1# show interfaces irb
unit 10 {
    family inet {
        address 10.1.1.113/24;
    }
}

{master:0}[edit]
lab@leaf1# show vlans
default {
    vlan-id 1;
}
v10 {
    vlan-id 10;
    l3-interface irb.10;
    vxlan {
        vni 5010;
    }
}

{master:0}[edit]
lab@leaf1# show protocols igmp
interface irb.10;
C> 2019 Juniper Networks, Inc All Rights Reserved

IGMP On Leaf Device

Where the interface that runs IGMP is placed in the network determines which device acts as a proxy. If the IRB interface is
configured on the spine device, the spine device will act as the proxy point. This is the case for a centrally bridged and routed
EVPN-VXLAN network. If the IRB is placed on the leaf device, the leaf device will be the proxy device for IGMP messages. This
is the case with an edge bridged and routed design.

The example shows the configuration of the IRB interface, the VLAN configuration that places the IRB interface in the
broadcast domain, and the IGMP configuration that enables the IGMP protocol on the IRB interface.

www .juniper.net EVPN Multicast • Chapter 10- 23


Data Center Fabric with EVPN and VXLAN

IGMP Snooping in VLANs

• IGMP Snooping
• Configured under [edit protocols igmp-snooping]

{master:0}[edit]
lab@leaf1# show protocols igmp-snooping
vlan v10 {
    l2-querier {
        source-address 10.1.1.113;
    }
    proxy;
}

C> 2019 Juniper Networks, Inc All Rights Reserved

IGMP Snooping
In order to register the IGMP messages sent from clients, the leaf device must be able to see the IGMP messages that enter
the host-facing interfaces. IGMP snooping is used to perform this task. The example shows the configuration of the IGMP
snooping process for VLAN v10. It also configures the source address to be used for the Layer 2 querier function.
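Group memberships learned through snooping can then be checked with a command along the lines of the following; the
VLAN name matches the example above, and the output format varies by release:

lab@leaf1> show igmp snooping membership vlan v10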

Chapter 10-24 • EVPN Multicast www.j u n iper .net


Data Center Fabric with EVPN and VXLAN

Optional - Distributed DR

■ Distributed DR
• Only required if configuring distributed DR within the VXLAN
• Configure RP address and enable distributed-dr
• In this example, the RP is a remote device (not local)

{master:0}[edit]
lab@leaf1# show protocols pim
rp {
    static {
        address 192.168.100.1;
    }
}
interface irb.10 {
    distributed-dr;
}

C> 2019 Juniper Networks, Inc All Rights Reserved

Distributed DR Configuration
The configuration example shown is only needed if a distributed DR is going to be configured within a VXLAN. The address of
the Rendezvous Point is configured to identify which device is acting as the official Rendezvous Point. The distributed-dr
parameter is configured under the local IRB interface to identify which local interfaces will be used to perform the distributed
DR tasks.

www .juniper.net EVPN Multicast • Chapter 10- 25


Data Center Fabric with EVPN and VXLAN

Verify IGMP Interfaces

{master:0}
lab@leaf1> show igmp interface
Interface: irb.10
    Querier: 10.1.1.112
    State:          Up    Timeout:    209
    Version:         2    Groups:       0
    Immediate leave: Off
    Promiscuous mode: Off
    Passive: Off

Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2

Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0

C> 2019 Juniper Networks, Inc All Rights Reserved

Verify IGMP Interfaces

The show igmp interface command displays which interfaces are configured to support the IGMP protocol. The
command also displays information regarding IGMP joins and leaves that have been received on the interface.

Chapter 10-26 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

Verify IGMP Snooping

{master:0}
lab@leaf1> show igmp snooping evpn proxy vlan v10
Instance: default-switch
  Bridge-Domain: v10, VN Identifier: 5010

{master:0}
lab@leaf1> show igmp snooping statistics
Vlan: v10
IGMP Message type      Received   Sent   Rx errors
Membership Query              0     10           0
V1 Membership Report          0      0           0
DVMRP                         0      0           0
PIM V1                        0      0           0
Cisco Trace                   0      0           0
V2 Membership Report          0      0           0
Group Leave                   0      0           0
Mtrace Response               0      0           0
Mtrace Request                0      0           0
Domain Wide Report            0      0           0
V3 Membership Report          2      0           0
Other Unknown types                               0

IGMP Global Statistics
Bad Length        0
Bad Checksum      0
Bad Receive If    0
Rx non-local      0
Timed out         0

iQ 2019 Juniper Networks, Inc All Rights Reserved

Verify IGMP Snooping

The show igmp snooping evpn proxy command is used to display the EVPN proxy configuration on the device. The
show igmp snooping statistics command displays statistics regarding the IGMP snooping process.

www .juniper.net EVPN Multicast • Chapter 10- 27


Data Center Fabric with EVPN and VXLAN

Summary

■ In this content, we:


• Described the multicast extensions to EVPN
• Configured EVPN Multicast

© 2019 Juniper Networks, Inc All Rights Reserved

We Discussed:
• The multicast extensions to EVPN; and

• Configuring EVPN Multicast.

Chapter 10-28 • EVPN Multicast www.juniper.net


Data Center Fabric with EVPN and VXLAN

Review Questions

1. What is the purpose of a Type-6 EVPN route?


2. What is the purpose of a Type-7 EVPN route?
3. What is the purpose of a Type-8 EVPN route?
4. What is the purpose of configuring a distributed designated router
for the EVPN-VXLAN?

© 2019 Juniper Networks, Inc All Rights Reserved

Review Questions
1.

2.

3.

4.

www .j uniper.net EVPN Mu lt icast • Chapter 10- 29


Data Center Fabric with EVPN and VXLAN

Answers to Review Questions


1.
A Type-6 EVPN route is used to advertise to remote VTEPs that a locally connected host is interested in a multicast traffic
feed.

2.
An EVPN Type-7 route is used to synchronize IGMP join messages between devices connected to a shared Ethernet segment
in a multihomed environment.

3.
An EVPN Type-8 route is used to synchronize IGMP leave messages between devices connected to a shared Ethernet segment
in a multihomed environment.

4.
An EVPN-VXLAN distributed designated router allows a locally connected device to forward multicast traffic to a different
subnet on the same device, without forwarding the multicast traffic to a remote designated router IRB.

Chapter 10-30 • EVPN Multicast www.j uniper.net


un1Pe[
NETWORKS
Education Services

Data Center Fabric with EVPN and VXLAN

Chapter 11: Multicloud DC Overview

Engineering Simplicity
Data Center Fabric with EVPN and VXLAN

Objectives

■ After successfully completing this content, you will be able to:


• Describe the evolution of data center environments
• Describe the use cases of CEM

© 2019 Juniper Networks, Inc All Rights Reserved

We Will Discuss:
• The evolution of data center environments; and

• The use cases of CEM.

Chapter 11-2 • Multicloud DC Overview www.juniper.net


Data Center Fabric with EVPN and VXLAN

Agenda: Contrail Enterprise Multicloud Overview

➔ Data Center Evolution


■ CEM Use Cases

© 2019 Juniper Networks, Inc All Rights Reserved

CEM Overview
The slide lists the topics we will discuss. We will discuss the highlighted topic first.

www .j uniper.net Multicloud DC Overview • Chapter 11-3


Data Center Fabric with EVPN and VXLAN

Multicloud Data Center

■ Five steps from DC to Multicloud

(Slide graphic: a five-step progression from a legacy data center to a secure and automated multicloud - Legacy Data
Center (3-tier design, perimeter security), Simplified Data Center (SDN overlay, fabric, telemetry, threat detection, automated
remediation), Multidomain, Hybrid Cloud (orchestrated clouds such as AWS, Azure, and GCP), and Secure and Automated
Multicloud (intent-driven networking, microsegmentation, unified cloud policy, workflow automation, root cause insight) -
annotated with claimed gains in IT staff time, infrastructure cost, time to market, resource scaling, and application life
cycles at each step)

C> 2019 Juniper Networks, Inc All Rights Reserved 4

Five Steps
The slide shows the five steps from a legacy data center to an automated multicloud.

Chapter 11-4 • Multicloud DC Overview www.j uniper.net


Data Center Fabric with EVPN and VXLAN

Data Center Complexity

• Each step increases complexity

(Slide graphic: the same five-step progression, annotated with the complexity added at each step - from a manually
configured, basic three-tier architecture, to simplified designs that demand automation and improved telemetry and threat
detection, to multidomain and hybrid cloud designs with more complex microsegmentation and policy profiles, more data to
track, and more parameters to synchronize across environments, to multicloud, multitenant deployments that need unified
policies across cloud and non-cloud infrastructures and near real-time, automated adjustments to network and application
performance)

© 2019 Juniper Networks, Inc All Rights Reserved

Data Center Complexity


The slide shows the increased complexity as data centers have evolved.

www .j uniper.net Multicloud DC Overview • Chapter 11-5


Data Center Fabric with EVPN and VXLAN

Challenges in DC/Cloud Operations

• Customer asks are simple:
• "I need a two-tier application execution environment with these characteristics"
• "Can I have my DB cluster up and running by next week and connected to the Web front end?"
• Fulfillment is complex:
• Thousands of lines to set up a device; hundreds of lines to create a simple service
• Different capabilities for different vendors and OS versions
• Many DC architecture models and DC interconnects across DC operations teams
• Best practices, tools, and skill sets differ across teams, data centers, and clouds
• Tools and best practices are distinct across physical and virtualized workloads
• Human errors and long lead times result in lost revenue

(Slide graphic: private data centers, private clouds, and public cloud workloads connected over the WAN/interconnect)
C> 2019 Juniper Networks, Inc All Rights Reserved

Challenges in the DC
With each step in data center evolution, the complexity of implementing a data center that fulfills business requirements
increases. In order to fulfill service level agreements (SLAs), not only do telemetry and network monitoring capabilities have
to be more robust and accurate, the ability to act quickly on the information gathered from those systems must be improved.

When implementing more complex data center designs, and as those designs incorporate remote data centers, and even
cloud based data centers, it becomes even more difficult to create and apply traffic management policies, security policies,
and performance policies across the different environments with consistency.

Chapter 11-6 • Multicloud DC Overview www.juniper.net


Data Center Fabric with EVPN and VXLAN

The Contrail Enterprise Multicloud Solution

• Use Contrail Networking to connect VMs, BMSs, and public cloud workloads from a single user interface, Contrail Command

(Slide graphic: Contrail Networking integrating with PaaS/IaaS and container orchestrators - OpenStack, Kubernetes,
OpenShift (Beta), Mesos - and VMware, with AppFormix providing analytics and monitoring, Contrail Security providing
microsegmentation, and Contrail Command as the unified user interface)

© 2019 Juniper Networks, Inc All Rights Reserved

The CEM Solution

The slide shows an overall picture of the CEM solution. The basic goal of the CEM solution is to provide a single user
interface (Contrail Command) where you are able to create virtual networks and launch workloads (VMs, containers, BMSs,
etc.) such that the workloads can communicate over the virtual networks. All of this is almost totally automated by CEM. This
sounds simple enough, but when you take into consideration that if you had to do this manually, it could take hours and
hours of configuration (for the Layer 2 and Layer 3 VXLAN gateways in the IP fabric as an example) and potentially thousands
of lines of configuration statements and API calls.

You should also consider the fact that thousands of lines of configuration statements can be prone to human error, adding
even more hours of troubleshooting and re-configuration. However, from the Contrail Command user interface, you simply
click a few buttons and "Voila!", you have networked your workloads. Besides the networking aspects of CEM, you can
optionally use the security features of Contrail Networking (sometimes called Contrail Security) as well as AppFormix to bring
security and analytics to your deployment.

www .juniper.net Multicloud DC Overview • Chapter 11-7


Data Center Fabric with EVPN and VXLAN

CEM Web-Based User Interface

• Contrail Command

(Slide graphic: a Contrail Command screen capture showing cluster, node, fabric, project, and virtual network summaries)

From the Command UI you can:
• Onboard an IP fabric
• Create virtual networks
• Launch VMs/Containers
• Add a BMS to infrastructure
• Enable VM to BMS bridging
• Enable BMS to BMS routing
... and lots more!

© 2019 Juniper Networks, Inc All Rights Reserved

Contrail Command
Contrail Command is the CEM Web-based user interface. As the solution evolves, you will be able to do more and more from
Contrail Command.
Chapter 11-8 • Multicloud DC Overview www.juniper.net


Data Center Fabric with EVPN and VXLAN

Multicloud and Fabric Management

• One-click networking services with visibility across the cloud

(Slide graphic: CEM capabilities arranged around the deployment - underlay automation, DCI automation, topology
discovery, discovery/import, reimaging and upgrades, and telemetry automation - spanning BMS, OS VM, and Docker
container workloads, IP fabrics, and Azure (Beta))

© 2019 Juniper Networks, Inc All Rights Reserved

Multicloud and Fabric Management

CEM is an evolving product and will bring added features over time. The slide highlights what can be done in CEM 5.0.1,
which includes bridging between VM and BMS, routing between BMS and BMS (over Contrail virtual networks), and
discovery, configuration, and upgrades of an IP fabric.

www .j uniper.net Multicloud DC Overview • Chapt er 11-9


Dat a Center Fabric with EVPN and VXLAN

Features Available in 5.0

Junos release 17.3R3-S3


Contrail release 5.0
Solution delivers • Juniper QFX and MX IP fabric
• Ansible based device management
• Brownfield DC: Base EVPN/VXLAN overlay configuration
• Greenfield DC: Underlay and Base EVPN/VXLAN overlay configuration
• BMS management (Greenfield , with PXE boot)
• Contrail Command user interface
• Overlay virtual networking across BMS and VMs
• Centrally routed bridging
• Roles-based device configuration for underlay/overlays
• EVPN Type 5 support and L3 VNI in vRouter

C> 2019 Juniper Networks, Inc All Rights Reserved

CEM 5.0 New Features


The slide lists the new features that are available in CEM 5.0.

Chapter 11-10 • Multicloud DC Overview www.juniper.net


Data Center Fabric with EVPN and VXLAN

Agenda: Contrail Enterprise Multicloud Overview

• Data Center Evolution


➔ CEM Use Cases

© 2019 Juniper Networks, Inc All Rights Reserved

CEM Use Cases


The slide highlights the topic we discuss next.


Greenfield Underlay Autoconfiguration


■ Underlay can be autoconfigured by CEM (many fabric nodes)
• Configures a topology based on EBGP with physical-to-physical interface peering
[Diagram: EBGP sessions (IPv4 unicast) in the IP fabric underlay. Spine nodes qfx10k (AS 65000) and qfx10k (AS 65001) peer with leaf nodes qfx5100-1 (AS 65101) and qfx5100-2 (AS 65102); the control host, CSN, and hosts attach to the leaf nodes. Manually configuring these sessions is prone to human error.]


Greenfield Underlay Autoconfiguration


With all IP fabric nodes in a "zeroized" (factory-default) state, CEM can ZTP, discover, and automatically configure an
underlay routing protocol (EBGP) on all nodes of the IP fabric. For a large IP fabric, this could be thousands of lines of
configuration enabled automatically by CEM.
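As a rough illustration of what this automation produces, the following is a minimal sketch of the underlay BGP configuration CEM might push to one leaf (here qfx5100-1 in AS 65101, peering with the two spines in AS 65000 and AS 65001 as shown on the slide). The group name, peer addresses, and policy name are illustrative assumptions, not output captured from CEM:

    # Minimal underlay sketch for leaf qfx5100-1 (AS numbers from the slide);
    # the "underlay" group name, peer addresses, and policy name are assumptions.
    set routing-options autonomous-system 65101
    set policy-options policy-statement export-loopback term lo0 from interface lo0.0
    set policy-options policy-statement export-loopback term lo0 then accept
    set protocols bgp group underlay type external
    set protocols bgp group underlay export export-loopback
    set protocols bgp group underlay multipath multiple-as
    set protocols bgp group underlay neighbor 172.16.0.0 peer-as 65000
    set protocols bgp group underlay neighbor 172.16.0.2 peer-as 65001

CEM generates an equivalent snippet, with unique AS numbers and interface addressing, for every spine and leaf in the fabric, which is why doing this by hand on a large fabric quickly becomes thousands of lines of configuration.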


Overlay Autoconfiguration
■ EVPN/VXLAN overlay (brownfield or greenfield) autoconfiguration
• Full-mesh IBGP peering
• Brownfield underlay can be any routing protocol

[Diagram: Full-mesh MP-IBGP (EVPN) sessions in the overlay. Spine nodes qfx10k and leaf nodes qfx5100-1 and qfx5100-2 all share overlay AS 64512; the control host, CSN, and host attach to the leaf nodes. Manually configuring this full mesh is prone to human error.]

Overlay Autoconfiguration
Once there is an established underlay network, CEM can automatically discover the IP fabric nodes, take user input to assign
a physical and routing/bridging role to each node (physical spine node, physical leaf node, L2 VXLAN gateway, L3 VXLAN
gateway), and then automatically configure a baseline EVPN/VXLAN overlay between the IP fabric nodes as well as the
Contrail Control node. For a large IP fabric, this could be thousands of lines of configuration enabled on the IP fabric nodes
(and Contrail) automatically by CEM.
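For comparison, the per-node overlay peering that this step generates is on the order of the following sketch (overlay AS 64512 as shown on the slide). The group name and loopback peer addresses are assumptions, and in practice CEM also peers each node with the Contrail Control node:

    # Minimal EVPN overlay sketch for one fabric node (overlay AS 64512 per the slide);
    # the "overlay" group name and loopback peer addresses are assumptions.
    # local-as keeps the shared overlay AS separate from the per-device underlay EBGP AS.
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-as 64512
    set protocols bgp group overlay local-address 10.0.0.11
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay neighbor 10.0.0.1
    set protocols bgp group overlay neighbor 10.0.0.2

Because the overlay is a full IBGP mesh, every fabric node needs a neighbor statement for every other node's loopback, which is exactly the kind of repetitive configuration CEM automates.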


BMS to VM Bridging
■ By autoconfiguring the leaf nodes as Layer 2 VXLAN gateways,
VM to BMS communication is achieved
[Diagram: Virtual Network A (192.168.1.0/24) contains VM-1, VM-2, and BMS1. Spine nodes qfx10k (AS 64512) connect leaf nodes qfx5100-1 and qfx5100-2 (AS 64512), which act as Layer 2 VXLAN gateways; BMS1 attaches to leaf port ge-0/0/4 with a static address, the CSN provides DHCP/DNS, and the VMs sit behind a vRouter. CEM pushes the required configuration changes to Contrail, the orchestrator, and the fabric.]

BMS to VM Bridging
After an IP fabric has been onboarded (previous two slides), BMSs can be added to the infrastructure and placed in Contrail
virtual networks. In this example, CEM will automatically configure the leaf nodes as Layer 2 VXLAN gateways. Normally this
would be a tedious, manual task, but it is completely automated by CEM.
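The leaf-side configuration that CEM automates for this use case looks roughly like the following sketch, which maps Virtual Network A to a VXLAN VNI and binds the BMS-facing port ge-0/0/4 (from the slide) into it. The VLAN name, VLAN ID, VNI, route distinguisher, and route target are assumptions:

    # Minimal Layer 2 VXLAN gateway sketch for a leaf node; the VLAN name "vn-a",
    # VLAN ID 101, VNI 5001, RD, and route target are assumptions.
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 10.0.0.11:1
    set switch-options vrf-target target:64512:5001
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list 5001
    set vlans vn-a vlan-id 101
    set vlans vn-a vxlan vni 5001
    set interfaces ge-0/0/4 unit 0 family ethernet-switching interface-mode access
    set interfaces ge-0/0/4 unit 0 family ethernet-switching vlan members vn-a

With this in place on the leaf, MAC addresses learned from BMS1 are advertised as EVPN Type 2 routes and become reachable from the VMs in the same Contrail virtual network.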


Centrally Routed Bridging (CRB)


• Autoconfigures nodes based on their role in the environment
• Enables CRB for Layer 3 VXLAN routing between VNIs
• Autoconfigures the VTEP (L2 gateway)
• Autoconfigures the L3 gateway
[Diagram: Virtual Network A (192.168.1.0/24) and Virtual Network B (192.168.2.0/24) are joined by a logical router. Spine-role nodes act as Layer 3 VXLAN gateways and leaf-role nodes act as Layer 2 VXLAN gateways; one BMS attaches to leaf port ge-0/0/4 with a static address, another uses DHCP, and the CSN provides DHCP/DNS. Problem statement: many configuration changes on a large fabric, prone to human error.]


Centrally Routed Bridging


After an IP fabric has been onboarded and BMSs added to Contrail virtual networks (previous slides), data from BMSs on
different Contrail virtual networks can be routed over a logical router (routing instances on the spine nodes). In this example,
CEM will automatically configure the spine nodes as Layer 3 VXLAN gateways. Normally this would be a tedious, manual task,
but it is completely automated by CEM.
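A minimal sketch of the spine-side CRB configuration that CEM automates follows: each virtual network gets an IRB interface that acts as its default gateway, and the IRB interfaces are placed in a shared VRF that represents the logical router. The IRB addresses match the subnets on the slide, while the VLAN/VNI values and the instance name "lr-tenant" are assumptions:

    # Minimal Layer 3 VXLAN gateway (CRB) sketch for a spine node; IRB units,
    # VNIs, and the routing-instance name "lr-tenant" are assumptions.
    set interfaces irb unit 101 family inet address 192.168.1.1/24
    set interfaces irb unit 102 family inet address 192.168.2.1/24
    set vlans vn-a vlan-id 101
    set vlans vn-a vxlan vni 5001
    set vlans vn-a l3-interface irb.101
    set vlans vn-b vlan-id 102
    set vlans vn-b vxlan vni 5002
    set vlans vn-b l3-interface irb.102
    set routing-instances lr-tenant instance-type vrf
    set routing-instances lr-tenant interface irb.101
    set routing-instances lr-tenant interface irb.102
    set routing-instances lr-tenant route-distinguisher 10.0.0.1:10
    set routing-instances lr-tenant vrf-target target:64512:10

Placing both IRB interfaces in one VRF is what lets traffic from a BMS in Virtual Network A reach a host in Virtual Network B: the leaf bridges the frame into the VNI, and the spine routes between the two subnets centrally.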


SR-IOV Support
[Diagram: DC leaf switches connect a BMS with SR-IOV, a router with SR-IOV, a container application using an SR-IOV VF on VLAN 110, and virtual machines across Network BLUE and Network RED (VLANs 100, 110, and 210).]

Problem statement:
• Different compute performance characteristics
• Disjoint operations for virtual and SR-IOV-accelerated VNFs

Benefits:
• Unified solution to manage heterogeneous compute environments
• Visibility and control of accelerated-performance forwarding on all workloads
• One-click action to automate any workload with the same operations
• Single view across VMs, containers, SR-IOV workloads, and physical servers
• Reduced lead time and cross-team dependencies

SR-IOV Support
The slide shows that a vRouter that supports SR-IOV interfaces is now available for use with CEM.


Summary

■ In this content, we:


• Described the evolution of data center environments
• Described the use cases of CEM


We Discussed:
• The benefits of CEM; and

• The use cases of CEM.


Review Questions

1. What are some of the complexities that have been introduced into
data centers in a multicloud environment?
2. What is the user interface for CEM?
3. How can CEM automate the configuration of an IP fabric?


Review Questions
1.

2.

3.


Answers to Review Questions


1. Traffic management and control policies are more difficult to synchronize in a multicloud environment.

2. The user interface for CEM is Contrail Command, a Web-based user interface that is used to interact with multiple underlying systems and components.

3. CEM can configure spine and leaf devices based on their role in the fabric and can autogenerate addressing and AS numbering.

Acronym List

AD: aggregation device
AFI: Address Family Identifier
BGP: Border Gateway Protocol
BUM: broadcast, unknown unicast, and multicast
CapEx: capital expenses
CE: customer edge
CLI: command-line interface
CSP: Control and Status Protocol
DCI: Data Center Interconnect
EVI: EVPN Instance
EVPN-VXLAN: Ethernet VPN-controlled Virtual Extensible LAN
FCoE: Fibre Channel over Ethernet
FCS: Frame Check Sequence
FEC: forwarding equivalence class
GRE: generic routing encapsulation
GUI: graphical user interface
IBGP: internal BGP
ICCP: Inter-Chassis Control Protocol
IGMP: Internet Group Management Protocol
IGP: interior gateway protocol
IPv6: IP version 6
JNCP: Juniper Networks Certification Program
LAG: link aggregation group
LSP: label-switched path
LSR: label-switching router
MAC: media access control
MC-LAG: Multichassis Link Aggregation
MP-BGP: Multiprotocol Border Gateway Protocol
MPLS: Multiprotocol Label Switching
OpEx: operating expenditures
OS: operating system
P: provider
PE: provider edge
PHP: penultimate-hop popping
PIM-SM: Protocol Independent Multicast Sparse Mode
RID: router ID
RP: rendezvous point
RPT: rendezvous point tree
SD: satellite device
STP: Spanning Tree Protocol
VC: Virtual Chassis
VCF: Virtual Chassis Fabric
VM: virtual machine
VPN: virtual private network
VRF: VPN routing and forwarding
VTEP: VXLAN Tunnel End Point
VXLAN: Virtual Extensible Local Area Network
ZTP: Zero Touch Provisioning

Juniper Networks Education Services

Corporate and Sales Headquarters


Juniper Networks, Inc.
1133 Innovation Way
Sunnyvale, CA 94089 USA
Phone: 888.JUNIPER (888.586.4737)
or 408.745.2000
Fax: 408.745.2100
www.juniper.net

APAC and EMEA Headquarters


Juniper Networks International B.V.
Boeing Avenue 240
1110 PZ SCHIPHOL-RIJK
Amsterdam, Netherlands
Phone: 31.0.207.125.700
Fax: 31.0.207.125.701

Engineering Simplicity
EDU-JUN-ADCX, Revision V18A
