HCIE Datacom
• If the interval for triggering route calculation is long, network convergence is
  slowed down.
• The first timeout period of the intelligent timer is fixed. If an event that triggers
  the timer occurs again before the timer expires, the next timeout period of the
  intelligent timer becomes longer.
• Command: [Huawei-ospf] lsa-originate-interval { 0 | { intelligent-timer max-
  interval start-interval hold-interval | other-type interval } }
     ▫ 0: sets the interval for updating LSAs to 0s, that is, cancels the interval of 5s
       for updating LSAs.
     ▫ intelligent-timer: uses the intelligent timer to set the update interval for
       router-LSAs and network-LSAs.
     ▫ max-interval: specifies the maximum interval for updating OSPF LSAs. The
       value is an integer ranging from 1 to 120000, in milliseconds. The default
       value is 5000.
     ▫ start-interval: specifies the initial interval for updating OSPF LSAs. The value
       is an integer ranging from 0 to 60000, in milliseconds. The default value is
       500.
     ▫ hold-interval: specifies the hold interval for updating OSPF LSAs. The value
       is an integer ranging from 1 to 60000, in milliseconds. The default value is
       1000.
     ▫ other-type: sets an update interval for OSPF LSAs except router-LSAs and
        network-LSAs.
     ▫ interval: specifies the interval for updating LSAs. The value is an integer
       ranging from 0 to 10, in seconds. The default value is 5.
• Command: [Huawei-ospf-1] lsa-arrival-interval { interval | intelligent-timer
  max-interval start-interval hold-interval }
     ▫ interval: specifies the interval for receiving LSAs. The value is an integer
       ranging from 0 to 10000, in milliseconds.
     ▫ intelligent-timer: uses the intelligent timer to set the receive interval for
       LSAs.
     ▫ max-interval: specifies the maximum interval for receiving OSPF LSAs. The
       value is an integer ranging from 1 to 120000, in milliseconds. The default
       value is 1000.
     ▫ start-interval: specifies the initial interval for receiving OSPF LSAs. The value
       is an integer ranging from 0 to 60000, in milliseconds. The default value is
       500.
     ▫ hold-interval: specifies the hold interval for receiving OSPF LSAs. The value
       is an integer ranging from 1 to 60000, in milliseconds. The default value is
       500.
• Command: [Huawei-ospf-1] spf-schedule-interval { interval1 | intelligent-timer
  max-interval start-interval hold-interval | millisecond interval2 }
     ▫ interval1: specifies an interval for OSPF SPF calculation. The value is an
       integer ranging from 1 to 10, in seconds.
     ▫ intelligent-timer: uses the intelligent timer to set the interval for OSPF SPF
       calculation.
     ▫ max-interval: specifies the maximum interval for OSPF SPF calculation. The
       value is an integer ranging from 1 to 120000, in milliseconds. The default
       value is 10000.
     ▫ start-interval: specifies the initial interval for OSPF SPF calculation. The
       value is an integer ranging from 1 to 60000, in milliseconds. The default
       value is 500.
     ▫ hold-interval: specifies the hold interval for OSPF SPF calculation. The value
       is an integer ranging from 1 to 60000, in milliseconds. The default value is
       1000.
     ▫ As shown in the right figure, traffic flows from node S to node D. The link
       cost satisfies the node-and-link protection inequality. If the primary link
       fails, node S switches the traffic to the backup link. This ensures that the
       traffic interruption time is less than 50 ms.
     ▫ Link protection takes effect when the traffic to be protected flows along a
       specified link.
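• Configuration example (a minimal sketch for reference only; the OSPF process ID is
  an assumption, and the timer values shown are the defaults listed above):
     ▫ [Huawei] ospf 1
     ▫ [Huawei-ospf-1] lsa-originate-interval intelligent-timer 5000 500 1000
       (LSA update: maximum, initial, and hold intervals, in milliseconds)
     ▫ [Huawei-ospf-1] lsa-arrival-interval intelligent-timer 1000 500 500
       (LSA receipt: maximum, initial, and hold intervals, in milliseconds)
     ▫ [Huawei-ospf-1] spf-schedule-interval intelligent-timer 10000 500 1000
       (SPF calculation: maximum, initial, and hold intervals, in milliseconds)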
• BFD for OSPF is introduced to resolve this problem. After BFD for OSPF is
  configured in a specified process or on a specified interface, the link status can be
  rapidly detected and fault detection can be completed in milliseconds. This
  speeds up OSPF convergence when the link status changes.
• Prerequisites:
     ▫ Before using BFD to quickly detect link faults, run the bfd command in the
       system view to enable BFD globally.
• The BFD configuration on an interface takes precedence over that in a process. If
  BFD is enabled on an interface, the BFD parameters on the interface are used to
  establish BFD sessions.
• OSPF IP FRR can be associated with BFD.
     ▫ During the OSPF IP FRR configuration, the underlying layer needs to fast
       respond to a link status change so that traffic can be switched to the
       backup link immediately.
     ▫ OSPF IP FRR and BFD can be bound to rapidly detect link faults. This
       ensures that traffic is rapidly switched to the backup link in the case of link
       failures.
• Command: [Huawei-ospf-1] bfd all-interfaces { min-rx-interval receive-
  interval | min-tx-interval transmit-interval | detect-multiplier multiplier-
  value | frr-binding }
     ▫ min-rx-interval receive-interval: specifies an expected minimum interval for
       receiving BFD packets from the peer. The value is an integer ranging from
       10 to 2000, in milliseconds. The default value is 1000.
     ▫ min-tx-interval transmit-interval: specifies a minimum interval for sending
       BFD packets to the peer. The value is an integer ranging from 10 to 2000, in
       milliseconds. The default value is 1000.
     ▫ detect-multiplier multiplier-value: specifies a local detection multiplier.
       The value is an integer ranging from 3 to 50. The default value is 3.
     ▫ frr-binding: binds the BFD session status to the link status of an interface.
       If a BFD session goes down, the physical link of the bound interface also
       goes down, triggering traffic to be switched to the backup link.
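• Configuration example (a minimal sketch for reference only; the process ID and
  interval values are assumptions, and the bfd all-interfaces enable form follows
  common VRP usage):
     ▫ [Huawei] bfd (enables BFD globally, as required by the prerequisite above)
     ▫ [Huawei-bfd] quit
     ▫ [Huawei] ospf 1
     ▫ [Huawei-ospf-1] bfd all-interfaces enable (enables BFD for OSPF on all
       interfaces in the process)
     ▫ [Huawei-ospf-1] bfd all-interfaces min-rx-interval 100 min-tx-interval 100
       detect-multiplier 3 frr-binding (adjusts the session parameters and binds the
       BFD session status to the link status for FRR)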
• This course describes only equal-cost routes, default routes, and LSA filtering. For
  other information, see HCIP-Datacom-Core Technology.
• Command: [Huawei-ospf-1] maximum load-balancing number
• Common area:
• Stub area:
     ▫ Type 5 LSAs cannot be advertised within a stub area. All routers within a
       stub area can learn AS external routes only through an ABR.
     ▫ The ABR in a stub area automatically generates a default Type 3 LSA and
       advertises it to the entire stub area. The ABR uses the default route to
       divert traffic destined for a destination outside the AS to itself and then
       forwards the traffic.
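• Configuration example (a minimal sketch for reference only; the process ID and
  area ID are assumptions). The stub attribute must be configured on the ABR and on
  every internal router of the area:
     ▫ [Huawei] ospf 1
     ▫ [Huawei-ospf-1] area 1
     ▫ [Huawei-ospf-1-area-0.0.0.1] stub
     ▫ Adding the no-summary keyword on the ABR (stub no-summary) additionally
       filters Type 3 LSAs so that only the default route is advertised into the area.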
• Command: [Huawei-ospf-1] default-route-advertise [[always | permit-
  calculate-other] | cost cost | type type | route-policy route-policy-name
  [match-any]]
     ▫ always: An LSA that describes the default route is generated and advertised
       regardless of whether the local device has an active default route that does
       not belong to the current OSPF process.
           ▪ If always is configured, the device does not calculate the default
             routes from other devices.
           ▪ If always is not configured, an LSA that describes the default route
             can be generated only if an active default route that does not belong
             to the current OSPF process exists in the routing table of the local
             device.
     ▫ permit-calculate-other: An LSA that describes the default route is
       generated and advertised only if the device has an active default route that
       does not belong to the current OSPF process, and the device still calculates
       the default routes from other devices.
     ▫ type type: specifies the type of an external route. The value is 1 or 2. The
       default value is 2.
           ▪ 1: Type 1 external route
           ▪ 2: Type 2 external route
     ▫ route-policy route-policy-name: specifies the name of a route-policy. The
       device advertises default routes according to the configuration of the route-
       policy when the routing table of the device contains a default route that
       matches the route-policy but does not belong to the current OSPF process.
       The value is a string of 1 to 40 case-sensitive characters. If spaces are used,
       the string must start and end with double quotation marks (").
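• Configuration example (a minimal sketch for reference only; the process ID, cost,
  and route-policy name are assumptions):
     ▫ [Huawei] ospf 1
     ▫ [Huawei-ospf-1] default-route-advertise always cost 10 type 2
     ▫ With always, the default Type 5 LSA is generated unconditionally. Replacing
       always with route-policy RP-DEF (a hypothetical policy name) would advertise
       the default route only when an active non-OSPF default route matching the
       route-policy exists.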
• Command: [Huawei-GigabitEthernet0/0/1] ospf filter-lsa-out { all | { summary
  [ acl { acl-number | acl-name } ] | ase [ acl { acl-number | acl-name } ] | nssa [
  acl { acl-number | acl-name } ] } }
     ▫ acl acl-number: specifies the number of a basic ACL. The value is an integer
       ranging from 2000 to 2999.
     ▫ As long as the border-2 router and its uplink work properly, the data flows
       of the marketing department are forwarded only through the border-2
       router.
     ▫ As long as the core-2 router and its uplink work properly, the data flows of
       the marketing department are forwarded only through the core-2 router.
• This case uses the data forwarding path of the finance department as an
  example. The data forwarding path of the marketing department is not described
  here.
• Type 2 external route:
     ▫ Because a Type 2 external route offers low reliability, its cost is considered
       to be much greater than the cost of any internal route to an ASBR.
     ▫ Cost of a Type 2 external route = Cost of the route from an ASBR to the
       destination
• The internal path cost to each ASBR is not considered during traffic egress
  control.
• VPN: virtual private network
• If a VPN instance is specified for an OSPF process that is to be created, the OSPF
  process belongs to this instance. Otherwise, the OSPF process belongs to the
  global instance.
• Command: [Huawei-ospf-1] stub-router [ on-startup [ interval ] ]
     ▫ The propagate bit (P-bit) in the Options field of an LSA header is used to
       notify a translator whether the Type 7 LSA needs to be translated into a
        Type 5 LSA. A Type 7 LSA can be translated into a Type 5 LSA only when
        the P-bit is set to 1 and the FA is not 0.
• Note: All OSPF LSAs have the same LSA header, and the P-bit is in the Options
  field of the LSA header.
• As shown in the figure:
     ▫ Configure R5 to import direct external routes and set the IP address of the
       FA to 10.1.45.5, which is used by R5 to access the destination network
       segment 10.1.5.0/24.
     ▫ R3 translates Type 7 LSAs into Type 5 LSAs and the LSAs continue to carry
       the FA 10.1.45.5.
     ▫ Upon receipt, R1 searches its OSPF routing table for a route to the FA and
       uses the next hop address of the route as the next hop address of the
       external route.
     ▫ NSR and GR, however, are mutually exclusive. That is, for a specific
       protocol, only one of them can be used after a switchover.
• Application scenario:
     ▫ NSF can be used if a network has low requirements for the packet loss rate
       and route convergence.
     ▫ NSR can be used if a network has high requirements for the packet loss
       rate and route convergence.
• Note: NSR fundamentals are the same in OSPF, IS-IS, and BGP. This course uses
  OSPF as an example.
• High availability (HA): implements data backup from the AMB to the SMB.
• I-SPF is an improvement of SPF. Unlike SPF that calculates all nodes, I-SPF
  calculates only affected nodes. The SPT generated using I-SPF is the same as that
  generated using SPF. This significantly decreases CPU usage and speeds up
  network convergence.
     ▫ If the SPT calculated by I-SPF changes, PRC processes all the leaves (routes)
       of only the changed node.
     ▫ If the SPT calculated by I-SPF does not change, PRC processes only the
       changed leaves (routes). For example, if IS-IS is newly enabled on an
       interface of a node, the SPT on the network remains unchanged. In this
         case, PRC updates only the routes of this interface, which consumes fewer
         CPU resources.
• Command: [Huawei-isis-1] flash-flood [ lsp-count | max-timer-interval interval
  | [ level-1 | level-2 ] ]
     ▫ level-1: enables the LSP flash-flood function in the Level-1 area. If no level
       is specified in the command, this function is enabled in both Level-1 and
       Level-2 areas.
     ▫ level-2: enables the LSP flash-flood function in the Level-2 area. If no level
       is specified in the command, this function is enabled in both Level-1 and
       Level-2 areas.
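• Configuration example (a minimal sketch for reference only; the IS-IS process ID is
  an assumption):
     ▫ [Huawei] isis 1
     ▫ [Huawei-isis-1] flash-flood (enables LSP flash flooding in both Level-1 and
       Level-2 because no level is specified)
     ▫ [Huawei-isis-1] flash-flood level-2 (alternatively, enables LSP flash flooding
       only in the Level-2 area)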
• This course involves only equal-cost and default routes. For details about other
  control methods, see HCIP-Datacom-Core Technology.
• Command: [Huawei-isis-1] maximum load-balancing number
     ▫ ip-address: specifies the IP address of the next hop. The value is in dotted
       decimal notation.
     ▫ weight value: specifies the weight of the next hop. A smaller value
       indicates a higher preference. The value is an integer ranging from 1 to 254.
• Command: [Huawei-isis-1] attached-bit advertise { always | never }
     ▫ always: indicates that the ATT bit is set to 1. After receiving an LSP with
       ATT bit 1, a Level-1 device generates a default route.
     ▫ never: indicates that the ATT bit is set to 0. This prevents the Level-1 device
       from generating default routes and reduces the size of the routing table.
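• Configuration example (a minimal sketch for reference only; the process ID is an
  assumption). The command is run on the Level-1-2 device, where it takes effect as
  noted below:
     ▫ [Huawei] isis 1
     ▫ [Huawei-isis-1] attached-bit advertise never (the Level-1-2 device sets the
       ATT bit to 0 in its Level-1 LSPs, so Level-1 devices do not generate default
       routes)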
• Although the ATT bit is defined in both Level-1 and Level-2 LSPs, it is set to 1
  only in Level-1 LSPs advertised by Level-1-2 devices. Therefore, this command
  takes effect only on Level-1-2 devices.
• To prevent Level-1 devices from adding default routes to their routing tables,
  perform either of the following operations:
• The difference between the preceding commands is that the attached-bit
  avoid-learning command applies only to specified Level-1 devices.
• Command: [Huawei-isis-1] default-route-advertise [ always | match default |
  route-policy route-policy-name ] [ cost cost | tag tag | [ level-1 | level-1-2 |
  level-2 ] ] [ avoid-learning ]
     ▫ cost cost: specifies the cost of the default route. The value is an integer. The
        value range depends on cost-style. When cost-style is narrow, narrow-
        compatible, or compatible, the value ranges from 0 to 63. When cost-style
        is wide or wide-compatible, the value ranges from 0 to 4261412864.
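• Configuration example (a minimal sketch for reference only; the process ID, cost,
  and level are assumptions):
     ▫ [Huawei] isis 1
     ▫ [Huawei-isis-1] default-route-advertise always cost 10 level-2
       (unconditionally advertises a default route with cost 10 in Level-2 LSPs)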
• IS-IS multi-process and multi-instance have the following characteristics:
     ▫ In IS-IS multi-process, processes share the same global routing table. IS-IS
       multi-instance, however, uses the routing tables of VPNs, with each VPN
       having a separate routing table.
     ▫ When creating an IS-IS process, you can bind it to a VPN instance. The IS-IS
       process then accepts and processes only the events related to the VPN
       instance. When the bound VPN instance is deleted, the IS-IS process is also
       deleted.
• Note: The additional and normal system IDs must be unique throughout a routing
  domain.
• Mode-1 implementation:
     ▫ Virtual systems participate in SPF calculation. The LSPs advertised by the
       originating system contain information about links to each virtual system.
       Similarly, the LSPs advertised by each virtual system contain information
       about links to the originating system. In this way, virtual systems function
       like physical routers that connect to the originating system.
     ▫ Mode-1 is a transitional mode used to support earlier versions that are
       incapable of LSP fragment extension. In these earlier versions, IS-IS cannot
       identify TLV 24. As a result, the LSPs sent by a virtual system must look like
       LSPs sent by an originating system.
     ▫ Precautions:
           ▪ The LSPs sent by a virtual system must contain the same area address
             and overload bit as those in LSPs sent by an originating system. Other
             TLVs must also be the same.
           ▪ The neighbor of a virtual system must point to an originating system,
             and the metric is the maximum value minus 1. The neighbor of the
             originating system must point to the virtual system, and the metric
             must be 0. This ensures that the virtual system is the downstream
             node of the originating system when other routers calculate routes.
• Mode-2 implementation:
     ▫ Virtual systems do not participate in SPF calculation. All the routers on the
       network know that the LSPs generated by the virtual systems actually
       belong to the originating system.
     ▫ IS-IS working in mode-2 can identify TLV 24, which is used as the basis for
       calculating an SPT and routes.
• Note: In both modes, the originating system and virtual systems must include
  TLV 24 in their LSPs whose LSP Number is 0 to indicate which is the originating
  system.
• Command: [Huawei-isis-1] lsp-fragments-extend [ [ level-1 | level-2 | level-1-
  2 ] | [ mode-1 | mode-2 ] ]
    ▫ The IETF defined the GR standard for IS-IS in RFC 3847. Both protocol
      restarts in which the FIB is retained and those in which the FIB is not
      retained are handled, preventing route flapping and traffic interruptions
      caused by protocol restarts.
• Notes:
    ▫ In Step 2, if the neighbor does not have the GR Helper capability, it ignores
      the Restart TLV and resets the adjacency with the GR Restarter according to
      normal IS-IS processing.
    ▫ In Step 3, the Restarter sets the value of the T3 timer to the Holdtime of
      the neighbor, preventing neighbor disconnection during the GR, which
       would otherwise cause routes to be recalculated on the whole network.
• After the protocol restarts, the Restarter starts the T1, T2, and T3 timers at the
  same time. The value of the T1 timer indicates the longest
  time during which the GR Restarter waits for the Hello packet used for GR
  acknowledgement from the GR Helper. The value of the T2 timer indicates the
  longest time during which the system waits for the LSDB synchronization. The
  value of the T3 timer indicates the longest time allowed for a GR. The device
  cancels the T3 timer after synchronization of Level-1 and Level-2 LSDBs
  completes during the GR. If LSDB synchronization has not completed when the
  T3 timer expires, the GR fails.
• Additional remarks for Step 1:
     ▫ The IIH packet in which SA is set indicates that the Restarter requests its
       neighbor to suppress the advertisement of their adjacency until the
       neighbor receives an IIH packet in which SA is cleared from the Restarter.
     ▫ If GR is not supported, the neighbor ignores the Restart TLV, resets the
       adjacency with the GR Restarter, replies with an IIH packet that does not
       contain the Restart TLV, and returns to normal IS-IS processing. In this case,
       the neighbor does not suppress the advertisement of the adjacency with the
       GR Restarter. In the case of a P2P link, the neighbor also sends a CSNP.
• Usage scenario of the graceful-restart no-impact-holdtime command:
     ▫ It is recommended that the value of the T3 timer be greater than that of the
       T2 timer. Otherwise, the GR may fail.
• Usage scenario of the graceful-restart suppress-sa command:
     ▫ A router does not maintain the forwarding status when it starts for the first
       time (excluding the post-GR cases). If it is not the first time that a router
       has started, the LSPs generated when the router ran last time may still exist
       in the LSP database of other routers on the network.
     ▫ Because the sequence numbers of LSP fragments are also reinitialized when
       the router starts, the LSP copies stored on other routers seem to be newer
       than the LSPs generated after the local router starts. This leads to a
       temporary "blackhole" on the network, and the blackhole persists until the
       router regenerates its own LSPs and advertises them with the largest
       sequence number.
2. B
3. A
• As shown in the figure:
     ▫ R2 can filter out the Net2 route through BGP route control so that R2's BGP
       routing table does not contain the Net2 route.
• Note: For details about the ACL, IP prefix list, filter-policy, route-policy, and BGP
  path attributes, see the "HCIP-Datacom-Core Technology" course.
• A regex has the following functions:
     ▫ Checks whether a character string matches a specific rule and extracts the
       substring that matches the rule.
     ▫ ^a.$: matches a character string that starts with the character a and ends
       with any single character, for example, a0, a!, ax, and so on.
     ▫ ^100_: matches a character string starting with 100, for example, 100, 100
       200, 100 300 400, and so on.
     ▫ 100$|400$: matches a character string ending with 100 or 400, for example,
       100, 1400, 300 400, and so on.
• Type 2:
     ▫ abc*d: matches the character c zero or more times, for example, abd,
       abcd, abccd, abcccd, abccccdef, and so on.
     ▫ abc+d: matches the character c one or more times, for example, abcd,
       abccd, abcccd, abccccdef, and so on.
     ▫ abc?d: matches the character c zero times or once, for example, abd, abcd,
       abcdef, and so on.
     ▫ a(bc)?d: matches the character string bc zero times or once, for example,
       ad, abcd, aaabcdef, and so on.
• The AS_Path attribute is a well-known mandatory attribute of BGP. All BGP
  routes must carry this attribute. This attribute records the numbers of all the ASs
  that a BGP route traversed during transmission.
• The default behavior of an AS_Path filter is deny. That is, a route that does not
  match any permit rule is denied by the AS_Path filter. If all matching rules in an
  AS_Path filter work in deny mode, all BGP routes are denied by the filter. To
  prevent this problem, configure a matching rule in permit mode after one or
  more matching rules in deny mode so that the routes except for those denied by
  the preceding matching rules can be permitted by the filter.
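• Configuration example (a minimal sketch for reference only; the filter number, AS
  number 300, BGP AS number, and peer address are assumptions). The first rule
  denies routes whose AS_Path contains AS 300, and the final permit rule accepts all
  other routes, avoiding the default-deny problem described above:
     ▫ [Huawei] ip as-path-filter 1 deny _300_
     ▫ [Huawei] ip as-path-filter 1 permit .*
     ▫ [Huawei] bgp 100
     ▫ [Huawei-bgp] peer 10.0.12.2 as-path-filter 1 import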
• The community attribute is an optional transitive attribute. It can identify the
  routes with the same characteristics, regardless of the scattered route prefixes
  and various AS numbers. That is, a specific community value can be assigned to
  some routes so that these routes can be matched against the community value
  instead of the network number or mask. Then, a corresponding routing policy
  can be applied to the matched routes.
• Command: [Huawei-route-policy] apply community { community-number |
  aa:nn | internet | no-advertise | no-export | no-export-subconfed } [ additive ]
     ▫ no-export: prevents the matched routes from being advertised outside the
       local AS but allows them to be advertised to other sub-ASs in the local AS.
        After a device receives a route with this attribute, it cannot advertise this
        route outside the local AS.
     ▫ additive: adds community attributes to the routes that match the filtering
       conditions.
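• Configuration example (a minimal sketch for reference only; the route-policy name,
  node number, and community value are assumptions):
     ▫ [Huawei] route-policy RP-SET-COMM permit node 10
     ▫ [Huawei-route-policy] apply community 100:200 additive (adds community
       100:200 to matched routes while keeping their existing community attributes)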
• Command: [Huawei] ip community-filter { basic comm-filter-name | basic-
  comm-filter-num } { permit | deny } [ community-number | aa:nn | internet |
  no-export-subconfed | no-advertise | no-export ]
     ▫ basic comm-filter-name: specifies the name of a basic community filter.
       The value is a string of 1 to 51 case-sensitive characters. It cannot be
       comprised of only digits.
     ▫ basic-comm-filter-num: specifies the number of a basic community filter.
       The value is an integer ranging from 1 to 99.
     ▫ deny: sets the matching mode of the community filter to deny.
     ▫ permit: sets the matching mode of the community filter to permit.
     ▫ community-number: specifies a community number. The value is an integer
       ranging from 0 to 4294967295.
     ▫ aa:nn: specifies a community number. A maximum of 20 community
       numbers can be specified at a time using this command. The values of aa
       and nn are integers ranging from 0 to 65535.
     ▫ internet: allows the matched routes to be advertised to any peers.
     ▫ no-export-subconfed: prevents the matched routes from being advertised
       outside the local AS. If a confederation is used, the matched routes will not
       be advertised to the other sub-ASs in the confederation.
     ▫ no-advertise: prevents the matched routes from being advertised to any
       other peers.
     ▫ no-export: prevents the matched routes from being advertised outside the
       local AS. If a confederation is used, the matched routes will not be
       advertised outside the confederation but will be advertised to the other
       sub-ASs in the confederation.
• Command: [Huawei-route-policy] if-match community-filter { basic-comm-
  filter-num [ whole-match ] | adv-comm-filter-num }
• Command: [Huawei-route-policy] if-match community-filter comm-filter-name
  [ whole-match ]
     ▫ import: applies the routing policy to the routes received from the peer or
       peer group.
     ▫ export: applies the routing policy to the routes to be advertised to the peer
       or peer group.
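• Configuration example (a minimal sketch for reference only; the filter number,
  community value, route-policy name, local preference, BGP AS number, and peer
  address are assumptions). Routes received from the peer that carry community
  100:200 are matched by the basic community filter and assigned a higher local
  preference:
     ▫ [Huawei] ip community-filter 1 permit 100:200
     ▫ [Huawei] route-policy RP-COMM permit node 10
     ▫ [Huawei-route-policy] if-match community-filter 1
     ▫ [Huawei-route-policy] apply local-preference 200
     ▫ [Huawei-route-policy] quit
     ▫ [Huawei] route-policy RP-COMM permit node 20 (an empty permit node so
       that routes not matching the community filter are still accepted)
     ▫ [Huawei] bgp 100
     ▫ [Huawei-bgp] peer 10.0.12.2 route-policy RP-COMM import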
• Command: [Huawei-bgp] peer { group-name | ipv4-address } capability-
  advertise orf [ non-standard-compatible ] ip-prefix { both | receive | send }
  [ standard-match ]
     ▫ both: enables the local device to both send and accept ORF packets.
• Note: BGP MD5 authentication and BGP keychain authentication are mutually
  exclusive.
• As shown in the figure, if BGP GTSM is not enabled, the device finds that the
  received numerous bogus BGP messages are destined for itself, and directly sends
  them to the control plane for processing. As a result, the control plane has to
  process a large number of bogus messages, causing the CPU usage to go
  excessively high and the system to be unexpectedly busy.
• Command: [Huawei-bgp] peer { group-name | ipv4-address | ipv6-address }
  keychain keychain-name
     ▫ drop: indicates that the messages that do not match the GTSM policy
       cannot pass filtering and are dropped.
     ▫ pass: indicates that the messages that do not match the GTSM policy can
       pass filtering.
     ▫ Assume that static routes are used or OSPF is used to ensure internal
       network reachability in AS 101. The configuration details are not provided
       here.
     ▫ On the network shown in this figure, before the new speaker in AS 2.2
       sends an Update message to the old speaker in AS 65002, the new speaker
        replaces each 4-byte AS number (1.1 and 2.2) with 23456 (AS_TRANS) in
        AS_Path; therefore, the AS_Path carried in the Update message is (23456,
        23456, 65001), and the carried AS4_Path is (2.2, 1.1, 65001). Upon receiving
        the Update message, the old speaker in AS 65002 transparently transmits
        AS4_Path (2.2, 1.1, 65001) to another AS.
• When a new speaker receives an Update message carrying the AS_Path and
  AS4_Path attributes from an old speaker, the new speaker obtains the actual
  AS_Path attribute based on the reconstruction algorithm.
     ▫ Assume that static routes are configured or OSPF is used to ensure internal
       network reachability in AS 1.1. The configuration details are not provided
       here.
• Notes:
     ▫ This slide uses NetEngine 8000 series routers as an example. For the 4-byte
       AS number configuration on any other type of product, see the
       corresponding product documentation.
     ▫ If you adjust the display format of 4-byte AS numbers, the matching results
       in the case of filtering using AS_Path regular expressions or extended
       community filters are affected. Specifically, after the display format of 4-
       byte AS numbers is changed when an AS_Path regular expression or
       extended community filter has been used in an export or import policy, the
       AS_Path regular expression or extended community filter needs to be
       reconfigured. If reconfiguration is not performed, routes may fail to match
       the export or import policy, leading to a network fault.
• RR-related roles:
     ▫ RR: BGP device that reflects the routes learned from an IBGP peer to other
       IBGP peers. An RR is similar to the designated router (DR) on an OSPF
       network.
     ▫ Client: IBGP peer whose routes are reflected by the RR to other IBGP peers.
       In an AS, clients only need to be directly connected to the RR.
     ▫ Non-client: IBGP device that is neither an RR nor a client. In an AS, full-
       mesh connections still must be established between non-clients and RRs,
       and between all non-clients.
     ▫ Originator: device that originates routes in an AS. The Originator_ID
       attribute is used to prevent routing loops in a cluster.
     ▫ Cluster: a set of RRs and their clients. The Cluster_List attribute is used to
       prevent routing loops between clusters.
• When configuring a BGP router as an RR, you also need to specify a client of the
  RR. A client does not need to be configured because it is not aware that an RR
  exists on the network.
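• Configuration example (a minimal sketch for reference only; the AS number and
  client address are assumptions). Only the RR itself is configured, because the client
  is unaware of the RR:
     ▫ [Huawei] bgp 100
     ▫ [Huawei-bgp] peer 10.0.0.2 as-number 100
     ▫ [Huawei-bgp] peer 10.0.0.2 reflect-client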
• Rules for an RR to advertise routes:
     ▫ After learning routes from non-clients, the RR selects and advertises the
       optimal route to all its clients.
     ▫ After learning routes from clients, the RR selects and advertises the optimal
       route to all its non-clients and clients (except the originating client).
     ▫ After learning routes from EBGP peers, the RR selects and advertises the
       optimal route to all its clients and non-clients.
• The route advertisement rules for hierarchical RR networking are the same as
  those for single-cluster RR networking.
     ▫ Number of alternate paths: This factor affects load balancing and resource
       consumption. More layers reduce the number of links for load balancing
       but require fewer router resources.
1. D
2. A
3. A
• When Layer 2 isolation and Layer 3 interworking are used, you can enable intra-
  VLAN proxy ARP on the VLANIF interface and configure arp-proxy inner-sub-
  vlan-proxy enable to implement communication between hosts in the same
  VLAN.
• A MAC address table is used by the switch to record the mappings between
  learned MAC addresses of other devices and interfaces on which MAC addresses
  are learned, as well as VLANs to which the interfaces belong.
• When performing Layer 2 switching, the device searches the MAC address table
  according to the destination MAC address of the packet. If the MAC address table
  contains the entry corresponding to the destination MAC address of the packet
  and the interface that receives the packet is different from the interface
  corresponding to the entry, the packet is directly forwarded through the
  outbound interface in the entry. If they are the same, the packet is discarded.
• If the MAC address table does not contain the entry matching the destination
  MAC address of the packet, the device broadcasts the packet through all the
  interfaces in the VLAN except the interface that receives the packet.
• To prevent unauthorized users from modifying MAC address entries of some key
  devices (such as servers or uplink devices), you can configure the MAC address
  entries of these devices as static MAC address entries. Static MAC address entries
  take precedence over dynamic MAC address entries and can hardly be modified
  by unauthorized users.
• To prevent useless MAC address entries from occupying the MAC address table
  and prevent hackers from attacking user devices or networks using MAC
  addresses, you can configure untrusted MAC addresses as blackhole MAC
  addresses. In this way, when the device receives a packet with the destination or
  source MAC address as the blackhole MAC address, the device discards the
  packet without modifying the original MAC address entry or adding a MAC
  address entry.
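• Configuration example (a minimal sketch for reference only; the MAC addresses,
  interface, and VLAN are assumptions):
     ▫ [Huawei] mac-address static 00e0-fc12-3456 GigabitEthernet0/0/1 vlan 10
       (binds the server MAC address to the interface so that it cannot be overridden
       by dynamic learning)
     ▫ [Huawei] mac-address blackhole 00e0-fc12-9999 vlan 10 (packets whose
       source or destination MAC address is the blackhole address are discarded)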
• To reduce manual configuration of static MAC address entries, Huawei S series
  switches are enabled with dynamic MAC address learning by default. The aging
  time needs to be set properly for dynamic MAC address entries so that the switch
  can delete unneeded MAC address entries.
• To improve network security and prevent the device from learning invalid MAC
  addresses and incorrectly modifying the original MAC address entries in the MAC
  address table, you can disable MAC address learning on a specified interface or
  all interfaces in a specified VLAN so that the device does not learn new MAC
  addresses from these interfaces.
• You can limit the number of MAC address entries that can be learned on the
  device. When the number of learned MAC address entries reaches the limit, the
  device does not learn new MAC address entries. You can also configure an action
  to take when the number of learned MAC address entries reaches the limit. This
  prevents MAC address entries from being exhausted and improves network
  security.
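• Configuration example (a minimal sketch for reference only; the interface, limit,
  and the mac-limit command form are assumptions based on common S series
  usage):
     ▫ [Huawei] interface GigabitEthernet0/0/1
     ▫ [Huawei-GigabitEthernet0/0/1] mac-limit maximum 100 (the interface learns
       at most 100 MAC addresses; new addresses beyond the limit are not learned)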
• Dynamic secure MAC addresses can be aged out using two modes: absolute
  aging and relative aging.
     ▫ Absolute aging time: If the absolute aging time is set to 5 minutes, the
       system calculates the lifetime of each secure dynamic MAC address every
       minute. If the lifetime is greater than or equal to 5 minutes, the secure
       dynamic MAC address is aged out immediately. If the lifetime is less than 5
       minutes, the system checks the address again 1 minute later.
     ▫ Relative aging time: If the relative aging time is set to 5 minutes, the
       system checks every minute whether there is traffic from a specified secure
       dynamic MAC address. If no traffic is received from the secure dynamic MAC
       address, this MAC address is aged out 5 minutes later.
• By default, an interface in error-down state can be restored only after the restart
  command is run in the interface view.
     ▫ protect
           ▪ Discards packets with new source MAC addresses when the number
             of learned MAC addresses exceeds the limit.
           ▪ When static MAC address flapping occurs, the interface discards the
             packets with this MAC address.
     ▫ restrict
           ▪ Discards packets with new source MAC addresses and sends a trap
             message when the number of learned MAC addresses exceeds the
             limit.
           ▪ When static MAC address flapping occurs, the interface discards the
             packets with this MAC address and sends a trap.
     ▫ shutdown
           ▪ Sets the interface state to error-down and generates a trap when the
             number of learned MAC addresses exceeds the limit.
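• Configuration example (a minimal sketch for reference only; the interface, limit,
  action, and aging time are assumptions based on common S series usage):
     ▫ [Huawei] interface GigabitEthernet0/0/1
     ▫ [Huawei-GigabitEthernet0/0/1] port-security enable
     ▫ [Huawei-GigabitEthernet0/0/1] port-security max-mac-num 5 (at most 5
       secure MAC addresses can be learned on the interface)
     ▫ [Huawei-GigabitEthernet0/0/1] port-security protect-action restrict (excess
       packets are discarded and a trap is sent, as described above)
     ▫ [Huawei-GigabitEthernet0/0/1] port-security aging-time 5 (secure dynamic
       MAC addresses age out after 5 minutes)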
• When the device is configured to prevent MAC address entries from being
  overridden on interfaces with the same priority, if the authorized device is
  powered off, the MAC address entry of the bogus device is learned. After the
  authorized device is powered on again, its MAC address cannot be learned.
  Exercise caution when using this feature. If a switch interface is connected to a
  server, when the server is powered off, other interfaces can learn the same MAC
  address as the server. When the server is powered on again, the switch cannot
  learn the correct MAC address.
• Whether all Huawei switches support MAC address flapping detection depends
  on the switch model.
• After MAC address flapping occurs, the following actions can be performed: 1. A
  trap is generated and reported. 2. GE0/0/2 on SW1 is disabled from sending and
  receiving packets. 3. GE0/0/2 on SW1 is disabled from sending and receiving
  packets with a specified MAC address.
     ▫ When detecting MAC address flapping in VLAN 2, the device blocks the
       interface where MAC address flapping occurs.
     ▫ The interface will be blocked for 10s (specified by the block-time keyword).
       The blocked interface cannot receive or send data.
     ▫ After 10 seconds, the interface is unblocked and starts to send and receive
       data. If MAC address flapping is not detected within 20 seconds, the
       interface is unblocked. If MAC address flapping is detected again on the
       interface within 20 seconds, the switch blocks the interface again. If the
       switch still detects MAC address flapping on the interface, the switch
        permanently blocks the interface. The retry-times parameter specifies the
        number of times that MAC address flapping is detected.
• By default, global MAC address flapping detection is enabled on a Huawei switch.
  Therefore, the switch performs MAC address flapping detection in all VLANs by
  default.
• If an interface is set to enter the Error-Down state due to MAC address flapping,
  the interface does not automatically restore to the Up state by default.
• If MAC address flapping occurs on an interface and the interface is removed from
  the VLAN, you can run the following command in the system view to implement
  automatic recovery of the interface:
• When Switch3 and Switch4 are incorrectly connected, the MAC address of
  GE0/0/1 on Switch2 is learned by GE0/0/2, causing GE0/0/2 to enter the Error-
  Down state.
• You can run the display mac-address flapping record command to check MAC
  address flapping records.
• A CAK is not directly used to encrypt data packets. Instead, the CAK and other
  parameters derive the encryption key of data packets. The CAK can be delivered
  during 802.1X authentication or statically configured.
• The SAK is derived based on the CAK using an algorithm and is used to encrypt
  data transmitted over secure channels. The MKA limits the number of packets
  that can be encrypted by each SAK. When the PNs encrypted by a SAK are
  exhausted, the SAK is updated. For example, on a 10 Gbit/s link, the SAK can be
  updated every 4.8 minutes.
• The key server is the MKA entity that determines the encryption scheme and
  distributes keys.
• In the outbound direction of an interface, the device can block broadcast packets,
  unknown multicast packets, and unknown unicast packets.
• Traffic suppression can also rate-limit ICMP packets by setting a threshold.
  Without traffic suppression, a large number of ICMP packets may be sent to the
  CPU, which may cause other service functions to become abnormal.
• The threshold can be configured for incoming packets on interfaces. The system
  discards the traffic exceeding the threshold and forwards the traffic within the
  threshold. In this way, the system limits the traffic rate in an acceptable range.
• Note that traffic suppression can also block outgoing packets on interfaces.
• In storm control, rate thresholds are configured for incoming packets only on
  interfaces. When the traffic exceeds the threshold, the system rejects the packets
  of this particular type on the interface or shuts down the interface.
• Run the display flow-suppression interface interface-type interface-number
  command to check the traffic suppression configuration.
• When traffic suppression is configured in both the interface view and VLAN view,
  the configuration in the interface view takes precedence over the configuration in
  the VLAN view.
• The difference between traffic suppression and storm control is as follows: The
  storm control function can take the punishment action (block or shutdown) for
  an interface, whereas the traffic suppression function only limits the traffic on an
  interface.
• In traffic suppression, rate thresholds are configured for incoming packets on
  interfaces. When the traffic exceeds the threshold, the system discards excess
  traffic and allows the packets within the threshold to pass through. In this way,
  the traffic is limited within a proper range. Note that traffic suppression can also
  block outgoing packets on interfaces.
• In storm control, rate thresholds are configured for incoming packets only on
  interfaces. When the traffic exceeds the threshold, the system rejects the packets
  of this particular type on the interface or shuts down the interface.
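• Configuration example (a minimal sketch for reference only; the interface,
  percentage, rate values, and exact keyword forms are assumptions based on
  common S series usage):
     ▫ [Huawei] interface GigabitEthernet0/0/1
     ▫ [Huawei-GigabitEthernet0/0/1] broadcast-suppression 10 (traffic suppression:
       incoming broadcast traffic is limited to 10% of the interface bandwidth, and
       excess traffic is discarded)
     ▫ [Huawei-GigabitEthernet0/0/1] unicast-suppression 10 (limits unknown
       unicast traffic in the same way)
     ▫ [Huawei-GigabitEthernet0/0/1] storm-control broadcast min-rate 1000
       max-rate 2000 (storm control thresholds, in pps)
     ▫ [Huawei-GigabitEthernet0/0/1] storm-control action block (the interface
       blocks broadcast packets when the upper threshold is exceeded)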
• No DHCP relay agent is deployed:
     ▫ In the discovery stage, the DHCP client broadcasts a DHCP Discover
       message to discover DHCP servers. Information carried in a DHCP Discover
       message includes the client's MAC address (Chaddr field), parameter
       request list (Option 55), and broadcast flag (Flags field, determining
       whether the response should be sent in unicast or broadcast mode).
     ▫ In the offer stage, a DHCP server selects an address pool on the same
       network segment as the IP address of the interface receiving the DHCP
       Discover message, and selects an idle IP address from the address pool. The
       DHCP server then sends a DHCP Offer message carrying the allocated IP
       address to the DHCP client.
     ▫ In the request stage, if multiple DHCP servers reply with a DHCP Offer
       message to the DHCP client, the client accepts only the first received DHCP
       Offer message. The client then broadcasts a DHCP Request message
       carrying the selected DHCP server identifier (Option 54) and IP address
       (Option 50, with the IP address specified in the Yiaddr field of the accepted
       DHCP Offer message). The DHCP Request message is broadcast so as to
       notify all the DHCP servers that the DHCP client has selected the IP address
       offered by a DHCP server. Then the other servers can allocate IP addresses
       to other clients.
     ▫ In the acknowledgement stage, after receiving the DHCP ACK message, the
       DHCP client broadcasts a gratuitous ARP packet to check whether any
       other terminal on the network segment uses the IP address allocated by the
       DHCP server.
• After the dhcp snooping enable command is run on an interface, the interface
  forwards received DHCP Request messages to all trusted interfaces and discards
  received DHCP Reply messages.
• After an interface on which the dhcp snooping trusted command is run receives a
  DHCP Request message, it forwards the message to all other trusted interfaces. If
  there are no other trusted interfaces, it discards the message. After receiving a
  DHCP Reply message, it forwards the message only to the interfaces that are
  connected to clients and have the dhcp snooping enable command configured. If
  such interfaces cannot be found, it discards the DHCP Reply message.
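• Configuration example (a minimal sketch for reference only; the interface numbers
  are assumptions). GE0/0/1 connects to DHCP clients and GE0/0/24 connects to the
  DHCP server:
     ▫ [Huawei] dhcp enable
     ▫ [Huawei] dhcp snooping enable (enables DHCP snooping globally)
     ▫ [Huawei] interface GigabitEthernet0/0/1
     ▫ [Huawei-GigabitEthernet0/0/1] dhcp snooping enable (the client-side
       interface checks DHCP messages and builds binding entries)
     ▫ [Huawei-GigabitEthernet0/0/1] quit
     ▫ [Huawei] interface GigabitEthernet0/0/24
     ▫ [Huawei-GigabitEthernet0/0/24] dhcp snooping trusted (only the server-side
       trusted interface is allowed to forward DHCP Reply messages)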
• DHCP snooping binding entries are aged out when the DHCP lease expires, or
  the entries are deleted when users send DHCP Release messages to release IP
  addresses.
• In a DHCP starvation attack, an attacker continuously applies for a large number
  of IP addresses from the DHCP server to exhaust IP addresses in the address pool
  of the DHCP server. As a result, the DHCP server cannot allocate IP addresses to
  authorized users. The DHCP message contains the Client Hardware Address
  (CHADDR) field. This field is filled in by a DHCP client, indicating the hardware
  address of the client, that is, the MAC address of the client. The DHCP server
  assigns IP addresses based on the CHADDR field, and assigns different IP
  addresses if the values of the CHADDR field are different. The DHCP server cannot
  determine whether a CHADDR value is valid. By exploiting this vulnerability, an
  attacker fills a
  different value in the CHADDR field of a DHCP message each time the attacker
  applies for an IP address. In this way, the attacker forges different users to
  request IP addresses.
• In a DHCP starvation attack, an attacker continuously applies for a large number
  of IP addresses from the DHCP server to exhaust IP addresses in the address pool
  of the DHCP server. As a result, the DHCP server cannot allocate IP addresses to
  authorized users. The DHCP message contains the Client Hardware Address
  (CHADDR) field. This field is filled in by a DHCP client, indicating the hardware
  address of the client, that is, the MAC address of the client. The DHCP server
  assigns IP addresses based on CHADDR values. The DHCP server cannot determine
  whether CHADDR values are valid. By exploiting this vulnerability, an attacker fills
  a different value in the CHADDR field of a DHCP message each time the attacker
  applies for an IP address. In this way, the attacker forges different users to
  request IP addresses.
• To prevent starvation attacks, DHCP snooping checks whether the source MAC
  address of a DHCP Request message is the same as the CHADDR value on an
  interface. If they are the same, the interface forwards the DHCP Request
  message. If they are different, the interface discards the message. To check the
  consistency between the source MAC address and the CHADDR field on an
  interface, run the dhcp snooping check dhcp-chaddr enable command on the
  interface.
• An attacker may change both the MAC address and the CHADDR value at the
  same time, using the same value for the CHADDR field as for the MAC address
  each time. In this way, the consistency check between the source MAC address
  and the CHADDR field can be bypassed.
• As shown in the figure, the attacker uses the ARP mechanism to enable PC1 to
  learn the mapping between IP-S and MAC2 and enable the server to learn the
  mapping between IP1 and MAC2. When PC1 sends an IP packet to the DHCP
  server, the destination IP address is IP-S and the source IP address is IP1. The
  destination MAC address of the frame in which the IP packet is encapsulated is
  MAC2 and the source MAC address is MAC1, so the frame reaches PC2 first. After
  receiving the frame, the attacker changes the destination MAC address to MAC-S
  and the source MAC address to MAC2, and then sends the frame to the server.
  When the DHCP server sends an IP packet to PC1, the destination IP address is
  IP1 and the source IP address is IP-S. The destination MAC address of the frame
  in which the IP packet is encapsulated is MAC2 and the source MAC address is
  MAC-S, so the frame reaches PC2 first. After receiving the frame, the attacker
  changes the destination MAC address to MAC1 and the source MAC address to
  MAC2, and then sends the frame to PC1.
• The IP packets transmitted between PC1 and the DHCP server traverse the
  attacker's device (man-in-the-middle). Therefore, the attacker can easily obtain
  some information in the IP packets and use the information to perform other
  damage operations. The attacker can easily tamper with the DHCP messages
  transmitted between PC1 and the DHCP server. These messages are encapsulated
  in UDP packets, and UDP packets are encapsulated in IP packets. In this way, the
  attacker can directly attack the DHCP server.
• A DHCP man-in-the-middle attack is a spoofing IP/MAC attack. Preventing DHCP
  man-in-the-middle attacks is equivalent to preventing spoofing IP/MAC attacks.
• As shown in the figure, if the DHCP server assigns IP address IP1 to PC1 and IP
  address IP2 to PC2, IP1 is bound to MAC1 and IP2 is bound to MAC2. These
  bindings are stored in the DHCP snooping binding table. To enable the server to
  learn the mapping between IP1 and MAC2, the attacker sends an ARP Request
  packet in which the source IP address is set to IP1 and the source MAC address is
  set to MAC2. After receiving the ARP Request packet, the switch checks the
  source IP address and source MAC address in the packet and finds that the IP-
  MAC (IP1-MAC2) mapping does not match any entry in the DHCP snooping
       binding table. Therefore, the switch discards the ARP Request packet, which
       effectively prevents spoofing IP/MAC attacks.
• In this example, the firewall is deployed at the egress of the enterprise network,
  and services between the intranet and the external network are forwarded by the
  firewall. All such services will be interrupted if the firewall is faulty. If only one
  device is deployed in a key position of the network, the network may be
  interrupted by a single point of failure, regardless of how reliable that single
  device is. Therefore, in network architecture design, a key position on the
  network usually has two network devices planned for high availability.
• A firewall is a stateful inspection device. It inspects the first packet of a flow and
  establishes a session to record packet status information (including the source IP
  address, source port number, destination IP address, destination port number,
  and protocol). Subsequent packets of the flow are then forwarded according to
  the session entry. Only those matching this entry will be forwarded. Packets that
  do not match this entry will be discarded by the firewall.
• Each firewall has a VGMP group. A VGMP group can be in any of the following
  states:
     ▫ Initialize: indicates the temporary initial status of a VGMP group after hot
       standby is enabled.
     ▫ Load Balance: When the priority of the local VGMP group is the same as
       that of the peer VGMP group, the VGMP groups at both ends are in the
        Load Balance state.
     ▫ Active: When the priority of the local VGMP group is higher than that of the
       peer VGMP group, the local VGMP group is in Active state.
     ▫ Standby: When the priority of the local VGMP group is lower than that of
       the peer VGMP group, the local VGMP group is in Standby state.
• After two firewalls are deployed in hot standby mode, the VGMP groups on them
  have the same priority, and both are in Load Balance state. In this case, the two
  firewalls are in load balancing state.
• You can configure VRRP or manually specify a standby device to enable the two
  firewalls to work in active/standby mode. The VRRP configuration method applies
  to networks where the firewalls connect to Layer 2 switches, and the method of
  manually specifying a standby device applies to other hot standby networks.
• A firewall has an initial VGMP group priority. When an interface or a card on the
  firewall becomes faulty, the initial VGMP group priority is decreased by a specific
  value.
• To achieve this, Huawei firewalls use HRP to back up dynamic status data and
  key configuration commands between the active and standby firewalls.
• In load balancing mode, both firewalls are active. Therefore, if both firewalls
  synchronize commands to each other, command overwrite or conflict problems
  may occur. To centrally manage the configurations of the two firewalls, you need
  to configure the designated active and standby devices.
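• Configuration example (a minimal sketch for reference only; the heartbeat
  interface, peer address, and the choice of standby device are assumptions based on
  common USG usage):
     ▫ [Huawei] hrp interface GigabitEthernet1/0/7 remote 10.10.10.2 (specifies the
       heartbeat interface and the peer heartbeat address)
     ▫ [Huawei] hrp standby-device (run only on the firewall that should act as the
       standby device in active/standby mode)
     ▫ [Huawei] hrp enable (enables hot standby; dynamic status data and key
       configuration commands are then backed up through HRP)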
• The firewall implements virtualization in the following aspects:
     ▫ Configuration virtualization: Each virtual system has its own virtual system
       administrator and configuration interface. Virtual system administrators can
       only manage their own virtual systems.
• With the preceding virtualization techniques, each virtual system can function as
  a dedicated firewall that is exclusively managed by its administrator.
• As shown in the figure, virtual interfaces of virtual systems and the public system
  are connected to form virtual links. You can consider virtual systems and the
  public system as independent devices, and virtual interfaces as communication
  interfaces between them. Virtual systems can communicate with each other and
  with the public system after their virtual interfaces are added to security zones
  and routes and policies are configured for device communications.
• Communication between a virtual system and the public system involves two
  scenarios: from a virtual system to the public system, and from the public system
  to a virtual system. The packet forwarding processes in the two scenarios are
  slightly different.
• This slide uses the access from a virtual system to the public system as an
  example. Packets are processed in both the virtual system and public system
  according to the firewall's packet forwarding process. As such, you must perform
  key configurations such as security policies and routes in both the virtual system
  and public system.
• On a firewall, virtual systems are isolated by default. As such, hosts attached to
  different virtual systems cannot communicate with each other. To enable
  communication between two hosts attached to different virtual systems,
  configure security policies and routes. In this example, virtual system A initiates
  an access request to virtual system B. The request packet enters virtual system A,
  which then processes the packet according to the firewall's packet forwarding
  process. Then, the request packet enters virtual system B, which also processes
  the packet according to the firewall's forwarding process.
• As both virtual systems need to process the packet according to the firewall's
  packet forwarding process, you must perform key configurations such as security
  policies and routes in both virtual systems.
• The preceding configuration allows only the unidirectional communication from
  vsysa to vsysb. If hosts in vsysb need to access hosts in vsysa, you must configure
  the routes and security policies for access from vsysb to vsysa.
1. ABD
2. A
• MPLS is derived from the Internet Protocol version 4 (IPv4). Core MPLS
  technologies can be extended to support multiple network protocols, such as the
  Internet Protocol version 6 (IPv6), Internet Packet Exchange (IPX), Appletalk,
  DECnet, and Connectionless Network Protocol (CLNP). MPLS uses label-based
  forwarding to replace IP forwarding. A label is a short connection identifier of
  fixed length that is meaningful only to a local end.
• MPLS label operations will be introduced in following courses.
• In traditional IP forwarding that uses the longest match algorithm, all packets
  that match the same route belong to the same FEC.
• In MPLS, the most common example of FEC is: Packets whose destination IP
  addresses match the same IP route are considered to belong to the same FEC.
• An LSP is composed of an ingress LSR, an egress LSR, and a variable number of
  transit LSRs. Therefore, an LSP can be considered as an ordered set of these LSRs.
• An LSP is a unidirectional path from the start point to the end point. If
  bidirectional data communication is required, an LSP for return traffic needs to
  be established between the two ends.
• The EXP field is defined in early MPLS standards and is an experimental field.
  Actually, this field is mainly used for CoS. To avoid ambiguity, this field is
  renamed Traffic Class in RFC 5462.
• When the upper layer is the MPLS label stack, the Type field in the Ethernet
  header is 0x8847, and the Protocol field in the PPP header is 0x0281.
• The label spaces of different LSRs are independent of each other, indicating that
  each router can use the entire label space.
• If the ingress LSRs of packets belonging to the same FEC are different, the LSPs
  for forwarding the packets are different.
• An LSR processes all packets in the same FEC in the same way, regardless of
  whether the packets arrive on the same inbound interface.
• An LSP is composed of the forwarding actions of LSRs, and the label forwarding
  table determines the forwarding action. Therefore, establishing a label
  forwarding table can also be considered as establishing an LSP.
• As shown in the figure, the three packets belong to the same FEC, FEC1, because
  they have the same destination. However, as their ingress LSRs are different, the
  packets are forwarded along different LSPs (LSP1, LSP2, and LSP3, respectively).
  The labels assigned by different LSRs to the same FEC can be the same or
  different, because labels are valid only on their local LSRs.
• Control plane:
          ▪ Routing information base (RIB): stores static routes, direct routes, and
            routes generated by IP routing protocols. Routes can be selected from
             the RIB to guide packet forwarding.
• Forwarding plane
     ▫ A static LSP is meaningful only to the local node, and the local node cannot
       be aware of the entire LSP.
• Dynamic LSP:
• When an IP packet enters an MPLS domain, the ingress searches the FIB to check
  whether the tunnel ID corresponding to the destination IP address is 0x0.
     ▫ If the tunnel ID is 0x0, the ingress LSR performs IP forwarding for the
       packet.
     ▫ If the tunnel ID is not 0x0, the ingress LSR performs MPLS forwarding for
       the packet.
• A transit LSR searches for ILMs and NHLFEs to guide MPLS packet forwarding.
• The egress LSR searches the ILM table to guide MPLS packet forwarding.
• An outgoing label occupies the label space of the downstream LSR, but the label
  distribution mode used by the downstream LSR is unknown. As such, the value
  of an outgoing label ranges from 16 to 1048575.
• An incoming label occupies the label space of the current LSR. When a static LSP
  is used, the value of an incoming label ranges from 16 to 1023.
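• For illustration, a minimal static LSP configuration sketch on the ingress,
  transit, and egress LSRs (hypothetical device names, addresses, interfaces, and
  label values; MPLS is assumed to be enabled globally and on the interfaces with
  the mpls lsr-id and mpls commands):
     ▫ [R1] static-lsp ingress lsp1 destination 3.3.3.3 32 nexthop 10.1.12.2 out-label 100
     ▫ [R2] static-lsp transit lsp1 incoming-interface GigabitEthernet0/0/1 in-label 100 nexthop 10.1.23.3 out-label 200
     ▫ [R3] static-lsp egress lsp1 incoming-interface GigabitEthernet0/0/1 in-label 200
     ▫ The in-label values (100 and 200) fall within the static label range of 16 to
       1023, whereas an out-label may use any value from 16 to 1048575.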
1. AC
2. B
• LDP mentioned in this course refers to the protocol first defined in RFC 3036,
  which has since been replaced by RFC 5036.
• LDP messages are carried over UDP or TCP, with the port number being 646.
  Discovery messages, which are used to discover peers, are carried over UDP.
  Other LDP messages must be transmitted in a reliable and ordered manner.
  Therefore, LDP uses TCP to establish sessions. Session, advertisement, and
  notification messages are transmitted over TCP.
• An LDP header is 10 bytes long. It consists of three parts: Version, PDU Length,
  and LDP Identifier.
     ▫ The Version field occupies 2 bytes. It indicates the LDP version number. The
       current version number is 1.
     ▫ The PDU Length field occupies 2 bytes. It indicates the packet length in
       bytes, excluding the Version and PDU Length fields.
     ▫ The LDP Identifier field (that is, LDP ID) occupies 6 bytes. The first 4 bytes
       uniquely identify an LSR, and the last 2 bytes identify the label space of the
       LSR.
• An LDP message consists of five parts.
      ▫ The U field (unknown message bit) occupies 1 bit. When an LSR receives an
        unknown message, the LSR returns a notification message to the message
        originator if the U field is 0, but ignores the message and does not respond
        with a notification message if the U field is 1.
     ▫ Message Length occupies 2 bytes. It indicates the total length of Message
       ID, Mandatory Parameters, and Optional Parameters, in bytes.
     ▫ Message ID occupies 32 bits. It identifies a message.
     ▫ Each of the Mandatory Parameters and Optional Parameters fields has a
       variable length.
     ▫ Message Type indicates a specific message type. Currently, common
       messages defined by LDP include Notification, Hello, Initialization,
       KeepAlive, Address, Address Withdraw, Label Mapping, Label Request, Label
       Abort Request, Label Withdraw, and Label Release.
• The LDP session negotiation process can be described through the state machine.
  As shown in the figure, there are five states. They are Non-Existent, Initialized,
  OpenRec, OpenSent, and Operational.
     ▫ Non-Existent: It is the initial state of an LDP session. In this state, both LSRs
       send Hello messages to elect the active LSR. After a TCP connection
       establishment success event is received, the state changes to Initialized.
     ▫ Initialized: In this state, the active LSR sends an Initialization message to the
       passive LSR, sets the session state to OpenSent, and waits for an
       Initialization message. The passive LSR waits for the Initialization message
       sent by the active LSR. If the parameters in the received Initialization
       message are accepted, the passive LSR sends Initialization and KeepAlive
       messages, and sets the session state to OpenRec. When the active and
       passive LSRs receive any non-initialization message or the waiting period
       times out, both of them set the session state to Non-Existent.
• LDP transport addresses are used to establish TCP connections with peers.
     ▫ After two LSRs discover each other and learn each other's transport address
       through Hello messages, the LSRs attempt to perform the TCP three-way
       handshake (based on the transport addresses), and exchange LDP
       Initialization messages, Label Mapping messages, and so on. All these
       messages use the transport addresses of the two ends as source and
       destination IP addresses.
     ▫ By default, the transport address for a device on a public network is the LSR
        ID of the device, and the transport address for a device on a private
        network is the primary IP address of an interface on the device.
     ▫ The mpls ldp transport-address command can be run in the interface view
       to change a transport address.
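• As an illustration, a minimal LDP configuration sketch that also changes the
  transport address on an interface (hypothetical device name, LSR ID, and
  interface number):
     ▫ [R1] mpls lsr-id 1.1.1.1
     ▫ [R1] mpls
     ▫ [R1] mpls ldp
     ▫ [R1] interface GigabitEthernet0/0/1
     ▫ [R1-GigabitEthernet0/0/1] mpls
     ▫ [R1-GigabitEthernet0/0/1] mpls ldp
     ▫ [R1-GigabitEthernet0/0/1] mpls ldp transport-address LoopBack0
     ▫ The display mpls ldp session command can then be used to check whether
       the session reaches the Operational state.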
• LDP session states:
     ▫ NonExistent: initial state of an LDP session. In this state, the two ends send
       Hello messages to each other. After the TCP connection establishment
       success event is triggered, the session enters the Initialized state.
     ▫ OpenSent: The active LSR sends an Initialization message to the passive LSR
       and waits for a reply.
      ▫ OpenRec: LDP peers at both ends of the LDP session wait for a KeepAlive
       message from each other after the session enters the initialization state. If
       they receive each other's KeepAlive message, the LDP session enters the
       Operational state.
• Label distribution: An LSR notifies the upstream LSR of the binding between
  labels and FECs.
• When the DU label advertisement mode is used, an LSR can assign labels to all
  its peers by default. Specifically, each LSR can distribute label mappings to all its
  peers, regardless of whether the LSR is an upstream or a downstream one. If an
  LSR distributes labels only to upstream peers, it must identify its upstream and
  downstream nodes based on routing information before sending Label Mapping
  messages. An upstream node cannot send Label Mapping messages to its
  downstream node.
• In DoD mode, an LSR advertises label mappings to an upstream peer only after
  receiving Label Request messages from the upstream peer.
• The label distribution control mode works with the label advertisement mode:
     ▫ If the network shown in the figure uses the DU label advertisement mode,
       R2 and R3 can actively notify the upstream LSR of the label binding for the
       FEC 192.168.4.0/24 even if the upstream LSR does not send Label Request
       messages and R2 and R3 do not receive label binding information from the
       downstream LSR.
     ▫ If the network uses the DoD label advertisement mode, R2 and R3 can
       notify the upstream LSR of the label binding for the FEC 192.168.4.0/24
       given that R2 and R3 have received Label Request messages from the
       upstream LSR, regardless of whether R2 and R3 have received label binding
       information from the downstream LSR.
• In ordered label distribution control mode, an LSR sends a Label Mapping
  message for a FEC to its upstream node only when the LSR receives a Label
  Mapping message for the FEC from its downstream node or when the LSR itself
  is the egress of the LSP.
     ▫ If the network shown in the figure uses the DU label advertisement mode,
       an LSR sends the label binding information of the FEC 192.168.4.0/24 to its
        upstream node only after the LSR receives the label binding information of
        the FEC from its downstream node, even if the upstream node has sent
        Label Request messages. Therefore, the initiator for LSP establishment must
        be an egress LSR (R4 in this example).
     ▫ If the network uses the DoD label advertisement mode, an LSR advertises
       the label binding information of the FEC 192.168.4.0/24 to the upstream
       node only after the LSR receives Label Request messages from the
       upstream node as well as the label binding information of the FEC from the
        downstream node. Therefore, a Label Request message can be initiated by
        the ingress LSR (R1) only. After a Label Request is sent hop by hop to the
        egress LSR (R4), R4 advertises a Label Mapping message to the upstream
        LSR to establish an LSP.
• If MPLS is deployed on an IP network, an LSR uses the IP routing table to
  determine whether a label mapping is received from the next hop.
• In liberal mode, a new LSP can be quickly established when routes change,
  because all received labels are retained, which is the biggest advantage of this
  mode. The disadvantage is that unnecessary label mappings are distributed and
  maintained.
• If the next hop of a FEC changes, one of the following situations occurs:
     ▫ In liberal label retention mode, the LSR can use an existing label advertised
       by a non-next hop LSR to quickly establish an LSP. The liberal mode
       requires more memory and label space.
     ▫ In conservative label retention mode, the LSR retains the labels advertised
       by the next hop only. This mode saves memory and label space but
       consumes more time to reestablish the LSP.
     ▫ An LSR that has a limited label space usually uses the conservative mode
       and DoD mode.
• During label advertisement, R3 is the egress of the FEC 192.168.3.0/24. During
  label distribution, R3 assigns label 3 to the FEC and advertises the label binding
  information to R2.
• During data forwarding, R2, as the penultimate hop to 192.168.3.0, finds that the
  outgoing label value is 3. Then, R2 removes the label header and forwards the IP
  packet to R3. R3 only needs to query the FIB once to obtain the corresponding
  forwarding information, improving the forwarding efficiency.
• Run the label advertise { explicit-null | implicit-null | non-null } command in the
  MPLS view to configure the label to be assigned to the penultimate hop.
• For a packet that enters the MPLS domain from R1 and is destined for
  192.168.4.0/24, R1 is the ingress LSR, and R4 is the egress LSR.
• Note: By default, 32-bit host IP routes are used to trigger LSP establishment. You
  can manually trigger the establishment of an LSP with non-32-bit host IP routes.
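• For illustration, the trigger policy can be changed in the MPLS view; a hedged
  sketch follows (hypothetical device name):
     ▫ [R1] mpls
     ▫ [R1-mpls] lsp-trigger all
     ▫ The lsp-trigger { all | host | ip-prefix ip-prefix-name | none } command
       changes the default policy (host, that is, 32-bit host routes) so that LSPs are
       also established for non-32-bit routes, or only for routes matching a
       specified IP prefix list.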
• Note: If R2 fails, OSPF routes re-converge. The next hop of the route
  192.168.4.0/24 in the routing table of R1 is switched to R3. In this case, R1 uses
  the label advertised by R3 for 192.168.4.0/24.
• When R1 receives an IP packet destined for 192.168.4.1, it searches the FIB for a
  forwarding entry matching the destination IP address of the packet, and finds
  that the tunnel ID in the matching entry is not 0. As such, R1 continues to search
  for an NHLFE matching the tunnel ID, pushes a label to the IP packet, and
  forwards the packet. The outbound interface is GE 0/0/0, the next hop is R2, and
  the outgoing label is 1021. Therefore, R1 adds a label header to the packet and
  forwards the packet.
• When R2 receives a packet with label 1021, it searches for a matching ILM entry
  and an NHLFE matching the ILM entry. Then, R2 changes the label of the packet
  to 1041 and forwards the packet through the matching outbound interface.
• When R4 receives a packet with label 1041, it searches for a matching ILM entry
  and finds that the operation type is pop. R4 then performs a pop operation to
  remove the outer label from the packet. The packet then becomes a standard IP
  packet, and therefore R4 performs the standard IP forwarding on the packet.
• When R4 forwards the packet, it searches the LFIB and FIB. How can the
  forwarding efficiency be improved on the egress LSR (R4)?
• BGP routes can also be used to trigger LDP LSP establishment. This trigger policy
  is not covered in this course.
1. C
2. B
• Unless otherwise specified, MPLS VPN refers to BGP/MPLS IP VPN.
• MPLS VPN backbone networks can also be constructed by enterprises themselves,
  with a technical implementation similar to that of carriers. This course focuses
  on the scenario where enterprises purchase MPLS VPN services from carriers.
• CE: an edge device on a user network. A CE provides interfaces that are directly
  connected to a carrier network. A CE can be a router, switch, or host. CEs are
  usually unaware of VPNs and do not need to support MPLS.
• PE: an edge device on a carrier network and directly connected to a CE. On an
  MPLS network, PEs process all VPN services, and therefore PEs must have high
  performance.
• P: a backbone router on a carrier network and not directly connected to a CE. P
  devices need only to provide basic MPLS forwarding capabilities and do not
  maintain VPN information.
• The meaning of a site can be understood from the following aspects:
     ▫ A site is a group of IP systems that can communicate without using carrier
       networks.
     ▫ Sites are classified based on topological relationships between devices
       rather than the geographical locations of devices. In the preceding figure,
       the networks in provinces X and Y of company A need to communicate
       through the backbone network of the carrier. Therefore, the two networks
       are considered as two sites. If a physical private line is added between the
       CEs on the networks of provinces X and Y, the two networks can
       communicate without the need of the carrier network. In this case, the two
       networks are considered as one site.
• Relationship between sites and VPNs:
     ▫ Sites connected to the same service provider network can be classified into
       different collections based on configured policies. Only sites that belong to
       the same collection can access each other, and this collection is defined as a
       VPN.
     ▫ Devices at a site can belong to multiple VPNs. In other words, a site can
       belong to more than one VPN.
• Multiprotocol Extensions for BGP (MP-BGP): an extended BGP protocol that
  supports multiple address families. For details, see related courses.
• MPLS traffic engineering (MPLS TE): steers traffic to constrained LSPs for
  forwarding so that the traffic is transmitted along specified paths. MPLS TE fully
  uses network resources and provides bandwidth and QoS guarantee without the
  need for hardware upgrades. It minimizes network costs.
• Intranet networking is the simplest and most typical MPLS VPN networking
  scheme. The following technical implementation of MPLS VPN will be described
  based on this networking scheme.
• A VPN is a private network. Different VPNs independently manage their own
  address ranges, which are also called address spaces. Address spaces of different
  VPNs may overlap. For example, in the preceding figure, both user X and user Y
  use 192.168.1.0/24, indicating that the address spaces overlap. VPNs can use
  overlapping address spaces in the following situations:
     ▫ They share the same site, but devices at that site do not communicate with
       devices using overlapping address spaces at the other sites of the VPNs.
• For details about VRFs, see the related HCIP-Datacom-Core course.
• When configuring an RD, you need to specify only the Administrator and
  Assigned Number subfields in the RD.
• Four types of RD configuration formats are available. The following two types are
  commonly used:
• When configuring a VPN target, you need to specify only the Administrator and
  Assigned Number subfields in the VPN target. VPN targets have the same
  configuration formats as RDs.
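• As an illustration, a hedged VPN instance configuration sketch on a PE
  (hypothetical instance name, RD, VPN target, interface, and addresses):
     ▫ [PE1] ip vpn-instance VPNA
     ▫ [PE1-vpn-instance-VPNA] ipv4-family
     ▫ [PE1-vpn-instance-VPNA-af-ipv4] route-distinguisher 100:1
     ▫ [PE1-vpn-instance-VPNA-af-ipv4] vpn-target 100:1 both
     ▫ [PE1] interface GigabitEthernet0/0/1
     ▫ [PE1-GigabitEthernet0/0/1] ip binding vpn-instance VPNA
     ▫ [PE1-GigabitEthernet0/0/1] ip address 192.168.1.254 255.255.255.0
     ▫ Binding a VPN instance to an interface clears the IP address configured on
       the interface, so the IP address is configured after the binding.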
• A PE device distributes MPLS labels in either of the following ways:
     ▫ One label per route: Each route in a VRF is assigned one label. When many
       routes exist on the network, the Incoming Label Map (ILM) needs to maintain
       a large number of entries, which requires high router capacity.
     ▫ One label per instance: Each VPN instance is assigned one label. All the
       routes of a VPN instance share the same label, reducing the number of
        labels required.
• VPN route leaking: a process of matching VPNv4 routes against the VPN targets
  of local VPN instances. After a PE receives a VPNv4 route, the PE directly matches
  the route against the VPN targets of local VPN instances, without selecting the
  optimal route or checking whether a desired tunnel exists.
• Tunnel recursion: A public network tunnel is required to transmit VPN traffic from
  one PE to the other PE over the public network. After VPN route leaking, the
  route must be successfully recursed to an LSP based on the destination IPv4
  prefix before the route is added to the routing table of the corresponding VPN
  instance. This means that the next hop of the IPv4 route must match an LSP.
• By default, only the peer relationships in the BGP IPv4 unicast address family
  view are automatically enabled. In other words, after the peer as-number
  command is run in the BGP view, the system automatically configures the peer
  enable command. In other address family views, however, peering must be
  enabled manually.
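• The following hedged sketch illustrates the MP-BGP configuration on a PE
  (hypothetical AS numbers, peer addresses, and instance name):
     ▫ [PE1] bgp 100
     ▫ [PE1-bgp] peer 2.2.2.2 as-number 100
     ▫ [PE1-bgp] peer 2.2.2.2 connect-interface LoopBack0
     ▫ [PE1-bgp] ipv4-family vpnv4
     ▫ [PE1-bgp-af-vpnv4] peer 2.2.2.2 enable
     ▫ [PE1-bgp] ipv4-family vpn-instance VPNA
     ▫ [PE1-bgp-VPNA] peer 192.168.1.1 as-number 65001
     ▫ The peer 2.2.2.2 enable command must be entered manually in the VPNv4
       address family view, whereas the peer in the BGP IPv4 unicast address family
       view is enabled automatically.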
1. A
2. ABCD
• Note: Unless otherwise specified, MPLS VPN in this document indicates
  BGP/MPLS IP VPN.
• The advantages of MPLS VPN include but are not limited to the following:
     ▫ The MPLS VPN technology ensures the dedicated bandwidth of each VPN
       to meet the requirements of different users, traffic models, and QoS of
       various services.
• RFC2547 defines three inter-AS VPN solutions:
     ▫ Inter-AS VPN Option C (inter-provider backbones Option C): PEs use multi-
       hop MP-EBGP to advertise VPNv4 routes. This solution is also called multi-hop
       EBGP redistribution of labeled VPN-IPv4 routes.
• Note: This course describes only the non-inter-AS MPLS VPN deployment.
• Note: 192.168.100.1 and 192.168.200.1 are the IP addresses of the interfaces on
  CE1 and CE3, respectively, that are used to set up the BGP peer relationship with
  PE1.
• The process of advertising routes from Spoke-CE1 to Spoke-CE2 is as follows:
     ▫ The Hub-PE imports the route into the VPN_in routing table through the
       import RT attribute of the VPN instance (VPN_in), and then advertises the
       route to the Hub-CE through EBGP.
     ▫ The Hub-CE learns the route through the EBGP connection and advertises
       the route to the VPN instance (VPN_out) of the Hub-PE through another
       EBGP connection.
     ▫ The Hub-PE advertises the route with the export RT attribute of VPN_out to
       all Spoke-PEs.
     ▫ The Hub-PE imports the route to the VPN_in routing table through the
       import RT attribute of the VPN instance (VPN_in). After the BGP route is
       imported into OSPF 100, the route transmitted from Spoke-PE1 is
        advertised to the Hub-CE.
     ▫ The Hub-CE receives the route through OSPF 100. After route import is
       configured, the route is advertised to OSPF 200, and then OSPF 200
       advertises the route to the Hub-PE.
     ▫ The VPN instance (VPN_out) of the Hub-PE imports the route of OSPF 200
       multi-instance and advertises the route with the export RT attribute to all
       Spoke-PEs.
     ▫ The Hub-PE imports the route into the VPN_in routing table through the
       import RT attribute of the VPN instance (VPN_in), and then advertises the
       route to the Hub-CE through EBGP.
     ▫ The Hub-CE learns the route through the EBGP connection and advertises
       the route to the VPN instance (VPN_out) of the Hub-PE through another
       EBGP connection.
     ▫ The Hub-PE advertises the route with the export RT attribute of VPN_out to
       Spoke-PE2.
     ▫ The Hub-PE imports the route to the VPN_in routing table through the
       import RT attribute of the VPN instance (VPN_in) and advertises the route
        to the Hub-CE through OSPF 100.
     ▫ Hub-CE learns the route through OSPF 100 and advertises the route to the
       Hub-PE through OSPF 200.
     ▫ The VPN instance (VPN_out) of the Hub-PE imports the route of OSPF 200
       and advertises the route with the export RT attribute of VPN_out to all
       Spoke-PEs.
     ▫ The VPN instance on Spoke-PE2 imports the route based on the import RT.
       Spoke-PE2 advertises the route to Spoke-CE2 through EBGP.
• The VPN instance (VPN_out) on the Hub-PE advertises the route to Spoke-PE2
  and Spoke-PE1 at the same time with the export RT. The route is imported by the
  Hub-PE through an IGP (OSPF 200). Because the IGP route does not carry the
  AS_Path attribute, the AS_Path attribute is null. The AS_Path of the route
  destined for 192.168.1.0/24 from Spoke-CE1 is not null. Therefore, the route
  returned by the Hub-PE takes precedence over the route from Spoke-CE1. As a
  result, route flapping occurs.
• In actual applications, if two sites that need to communicate are in the same AS,
  each site should consider the route of the other site as an inter-area route rather
  than an AS external route.
• The domain ID can be configured using the domain-id command in the view of
  the OSPF process bound to the VRF instance.
• It is recommended that all OSPF instances related to the same VPN use the same
  domain ID or the default domain ID.
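• As an illustration, a hedged sketch of the domain ID configuration in the OSPF
  process bound to a VPN instance (hypothetical process ID, instance name, and
  domain ID):
     ▫ [PE1] ospf 100 vpn-instance VPNA
     ▫ [PE1-ospf-100] domain-id 0.0.0.1
     ▫ [PE1-ospf-100] import-route bgp
     ▫ With the same domain ID configured on the OSPF processes of all PEs in this
       VPN, the routes of a remote site are advertised to CEs as inter-area routes
       (Type 3 LSAs) rather than AS external routes.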
• The loop generation process is as follows:
     ▫ PE1 imports the route of the OSPF VPN1 process to BGP and advertises the
       route to PE2 and PE3 through MP-IBGP.
     ▫ PE3 advertises the optimal route learned from OSPF to PE1 through MP-
       IBGP.
     ▫ In this case, PE1 has two routes to the destination network segment
       192.168.1.0/24. One is learned from CE1 through OSPF, and the other is
       learned from PE3 through MP-IBGP. The following problems may occur:
• By default, the DN bit in LSAs generated by OSPF is set to 1. You can run the dn-
  bit-set disable command to disable OSPF from setting the DN bit in LSAs.
• The loop generation process is as follows:
     ▫ CE1 advertises a route destined for 192.168.1.0/24 to PE1 through EBGP.
       The AS_Path of the route is 65001.
     ▫ PE1 advertises the route to PE2 and PE3 through MP-IBGP.
     ▫ PE2 imports BGP routes to the OSPF VPN1 process and advertises a Type 5
       LSA destined for 192.168.1.0/24 to CE2.
     ▫ CE2 advertises the Type5 LSA to PE3.
     ▫ PE3 preferentially selects an OSPF route (the OSPF route has a higher
       priority than the IBGP route) and advertises an Update message to PE1
       through MP-IBGP.
     ▫ PE1 receives the MP-IBGP Update message from PE3. The MP-IBGP route
       advertised by PE3 has a higher priority than the EBGP route advertised by
       CE1 because the MP-IBGP route is an IGP (OSPF) route imported by BGP on
       PE3 and its AS_Path is null. PE1 prefers the route advertised by PE3.
     ▫ In this case, a routing loop is formed: PE3 -> CE2 -> PE2 -> PE1 -> PE3.
• Because PE1 does not preferentially select the route learned from CE1, PE1
  withdraws the route advertised to PE2. The imported BGP route is also
  withdrawn in the OSPF VPN instance process on PE2. Then, both CE2 and PE3
  withdraw the OSPF routes. The BGP route advertised by PE3 to PE1 is also
  withdrawn. On PE1, the route learned from CE1 becomes the optimal route. As a
  result, route flapping occurs.
• The generation and elimination of Type 7 LSA-related loops are similar to those
  of Type 5 LSA-related loops, and are not described here.
• The VPN route tag is not transmitted in the BGP extended community attribute.
  The VPN route tag is valid only on the PEs that receive BGP routes and generate
  OSPF LSAs.
• By default, the VPN route tag is calculated based on the AS number of BGP. If
  BGP is not configured, the default value is 0.
• You can run the route-tag command to set a VPN route tag.
• Multiple sham links of the same OSPF process can share the same endpoint
  address, but different OSPF processes cannot have two sham links with the same
  endpoint address.
• When configuring a sham link, you can specify the route cost of the sham link.
  The default value is 1.
1. BCD
2. B
• To facilitate understanding, the devices that connect the ASs are called
  ASBR-PEs, and the P devices are numbered differently. In addition, CE3 and CE4
  are not discussed.
• The numbers in this example are only for ease of understanding, and do not
  represent the actual processing sequence on the devices.
• The LDP PHP behavior is not considered in data forwarding.
• In the inter-AS VPN scenario, it is recommended that independent RRs be
  deployed to transmit routes only without forwarding traffic.
• If the P device of each AS knows the routes to the PEs of other ASs, the data
  forwarding process will be relatively simple. But if the P device does not know
  these routes, then when a PE receives VPN data from a CE, the PE will add three
  labels. The bottom label is the VPN label that is allocated by the peer PE and
  associated with VPN routes, the middle label is the label allocated by an ASBR
  and associated with the route to the peer PE, and the outer label is the label
  associated with the route to the next-hop ASBR.
• Note: For convenience, a symmetric LSP is shown in the figure above. In the
  actual working process of the control plane and data plane, however, the LSPs
  of the ASs at both ends are asymmetric.
1. C
2. C
• Common VPLS PW creation modes:
• After receiving the ARP reply, PE1 searches the MAC-VRF table and sends the
  packet through Port1.
• CE1 sends traffic to PE1 and PE2 over two active paths for load balancing.
  Because PE1 and PE2 each have established two paths to CE2, traffic can be load
  balanced between the two paths. Finally, four service data flows are sent to CE2
  over different paths.
• After receiving the Type 1 route, PE4 becomes a DF.
• After receiving the Type 1 route, PE1 and PE2 update their MAC labels and
  withdraw the MAC routes to ES2. Traffic is automatically switched to PE4.
• Note: The peer PE may first receive the Ethernet A-D per EVI route and then the
  Ethernet A-D per ES route. To prevent this problem, the peer PE forwards traffic
  only when it receives the Ethernet A-D per EVI route and Ethernet A-D per ES
  route at the same time.
     ▫ A PE discovers the ES and ESI of the local connection and advertises the
       Type 4 route carrying the ES-Import.
     ▫ The PE starts the timer. The default value of the timer is 3 seconds, within
       which ES routes can be received.
     ▫ After the timeout, the PE generates an ordered list. The list contains the IP
       addresses of all PEs and information about their connections to the ES. The
       sequence number of the list starts from 0 in ascending order. The sequence
       number is used to determine the DF.
     ▫ The PE elected as the DF forwards BUM traffic to CEs. When a link fault
       occurs, the PE withdraws its ES routes, which triggers a re-election process.
• For more details about Type 5 routes, see the RFC draft.
• In this service mode, the sub-interface, VLAN, BD, and EVI are exclusively used by
  a user to access the network, and a separate MAC forwarding table is used on
  the forwarding plane for each user. Although this mode effectively ensures
  service isolation, it consumes a large amount of EVI resources because each user
  requires one EVI.
• When EVPN peers send routes to each other, a BD tag is encapsulated into the
  Ethernet Tag ID field of Ethernet A-D route packets, MAC/IP advertisement route
  packets, and inclusive multicast route packets.
• This course describes inter-AS EVPN L3VPN.
• E-Line, E-Tree, and E-LAN are three types of Ethernet virtual connection (EVC):
  point-to-point EVC, rooted-multipoint EVC, and multipoint-to-multipoint EVC,
  respectively.
2. B
• IPv6 static routes and IPv4 static routes differ mainly in destination and next-hop
  IP addresses. IPv6 static routes use IPv6 addresses, whereas IPv4 static routes use
  IPv4 addresses.
• [Huawei] ipv6 route-static dest-ipv6-address prefix-length { interface-type
  interface-number [ nexthop-ipv6-address ] | nexthop-ipv6-address | vpn-instance
  vpn-destination-name nexthop-ipv6-address } [ preference
  preference][ permanent | inherit-cost ] [ description text ]
     ▫ preference preference: specifies a preference value for the route. The value
       is an integer ranging from 1 to 255. The default value is 60.
     ▫ inherit-cost: enables the static route to inherit the cost of the recursive
       route.
     ▫ description text: specifies a description for the static route. The value is a
       string of 1 to 80 characters and can contain spaces.
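• A hedged example based on the preceding command (hypothetical prefix and
  next-hop addresses):
     ▫ [Huawei] ipv6
     ▫ [Huawei] ipv6 route-static 2001:DB8:2:: 64 2001:DB8:12::2 preference 60
     ▫ This creates a static route to 2001:DB8:2::/64 through the next hop
       2001:DB8:12::2 with the default preference of 60.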
• In OSPFv3, the concepts of "link" and "prefix" are frequently used; the two are
  independent of each other. The terms "network" and "subnet" used in OSPFv2
  are replaced with the term "link" when OSPFv3 is discussed.
• In multi-instance, each instance is differentiated by adding a specific instance ID
  to the OSPFv3 packet header. If an instance is assigned a specific instance ID, the
  OSPFv3 packets that do not match the instance ID are discarded.
• IPv6 implements neighbor discovery and automatic configuration using link-local
  addresses. Routers running IPv6 do not forward IPv6 packets whose destination
  addresses are link-local addresses. Such packets are valid only on the local link.
     ▫ OSPFv3 assumes that each router has been assigned a link-local address on
       each link. All OSPFv3 interfaces except virtual-link interfaces use the
       associated link-local addresses as the source addresses to send OSPFv3
       packets.
     ▫ A router learns the link-local addresses of all the other routers attached to
       the same link and uses these addresses as the next-hop addresses to
       forward packets.
• Note: On a virtual link, a global unicast address or site-local address must be
  used as the source address of OSPFv3 packets.
• OSPFv3 packets have the following functions:
     ▫ Hello packet: Hello packets are sent periodically to discover, establish, and
       maintain OSPFv3 neighbor relationships.
     ▫ LSR packet: An LSR packet is used to request the required LSAs from a
       neighbor. An OSPFv3 device sends LSR packets to its neighbor only after DD
       packets have been successfully exchanged between them.
• Type: indicates the type of an OSPFv3 packet and occupies 1 byte. The following
  types are available:
     ▫ 1: Hello packet
     ▫ 2: DD packet
     ▫ 3: LSR packet
     ▫ 4: LSU packet
     ▫ 5: LSAck packet
• Packet length: indicates the total length of an OSPFv3 packet, including the
  packet header. The field occupies 2 bytes.
• Router ID: indicates the router ID of the router that originates the packet, and
  occupies 4 bytes.
• Area ID: indicates the area in which the packet is sent, and occupies 4 bytes.
• Checksum: indicates the standard 16-bit IPv6 checksum and occupies 2 bytes.
• Options: indicates the optional capabilities supported by the router and occupies
  3 bytes.
      ▫ NP: indicates whether the area to which the originating router interface
        belongs is a not-so-stubby area (NSSA). This option occupies 1 bit.
      ▫ MC: indicates whether multicast data packets can be forwarded. This option
        occupies 1 bit.
      ▫ V6: indicates whether the router or link can participate in route calculation.
        This option occupies 1 bit. If it is set to 0, the router or link does not
         participate in IPv6 route calculation.
• LS Age: indicates the time elapsed since the LSA was generated, in seconds. This
  field occupies 2 bytes. The value of this field continually increases regardless of
  whether the LSA is transmitted over a link or saved in an LSDB.
• LS Type: indicates the LSA type. This field occupies 2 bytes. The high-order three
  bits of this field identify generic properties of the LSA, whereas the remaining bits
  identify the LSA's specific function.
      ▫ The U-bit indicates how to process an unknown LSA, that is, how a router
        that does not recognize an LSA's function code should process this LSA.
            ▪ 0: The LSA is treated as if it had link-local flooding scope.
            ▪ 1: The LSA is stored and flooded as if its type had been understood.
      ▫ The S2 and S1 bits indicate the flooding scope of the LSA:
            ▪ S2 S1 = 0 0: link-local flooding scope.
            ▪ S2 S1 = 0 1: area flooding scope.
            ▪ S2 S1 = 1 0: AS flooding scope.
            ▪ S2 S1 = 1 1: reserved.
• As shown in the figure, the U-bit in the LS Type field of the OSPFv3 LSA header is
  0 by default. Except the Type 5 and Type 8 LSAs, the other types of LSAs all have
  the area flooding scope (S2 S1 = 0 1).
     ▫ Link-local flooding scope: LSAs, including link-LSAs, are flooded only on the
       local link.
     ▫ Area flooding scope: The following types of LSAs are flooded in a single
       OSPF area: router-LSA, network-LSA, inter-area-prefix-LSA, inter-area-
       router-LSA, NSSA-LSA, and intra-area-prefix-LSA.
     ▫ V: virtual link. The value 1 indicates that the router that generates the LSA
       is at one end of the virtual link.
     ▫ E: external. The value 1 indicates that the router that generates the LSA is
       an ASBR.
     ▫ B: border. The value 1 indicates that the router that generates the LSA is an
       ABR.
     ▫ Metric: indicates the cost of the route to the destination address and
       occupies 3 bytes.
           ▪ MC-bit: multicast bit. If this bit is set to 1, the prefix is used for
             multicast route calculation. Otherwise, the prefix is not used for
             multicast route calculation.
           ▪ LA-bit: local address capability bit. If this bit is set to 1, the prefix is an
             interface address of the router.
           ▪ NU-bit: no unicast capability bit. If this bit is set to 1, the prefix is not
             used for IPv6 unicast route calculation.
• Note: The prefix length of the default route is 0. An ABR can also originate an
  inter-area Type 3 LSA to advertise a default route to a stub area.
• The fields in an OSPFv3 inter-area-router-LSA are described as follows:
     ▫ Metric: indicates the cost of the route to the destination address and
       occupies 3 bytes.
• The fields in an OSPFv3 AS-external-LSA are described as follows:
     ▫ Bit E: indicates the cost type of an AS external route and occupies 1 bit.
           ▪ The value 1 indicates the cost of a Type 2 external route. This cost
             does not increase during route transmission.
           ▪ The value 0 indicates the cost of a Type 1 external route. This cost
             increases during route transmission.
     ▫ Bit F: occupies 1 bit. The value 1 indicates that the Forwarding Address field
       (optional) is included.
     ▫ Bit T: occupies 1 bit. The value 1 indicates that the External Route Tag field
       (optional) is included.
     ▫ Metric: indicates the cost of the route to the destination address and
       occupies 3 bytes.
     ▫ PrefixLength, PrefixOptions, and Address Prefix are triplets that describe a
       prefix and have the same meanings as those in an inter-area-prefix-LSA.
      ▫ Forwarding Address: an optional 128-bit IPv6 address, which occupies 16
        bytes. This field is included if bit F is 1. In this case, a data packet needs to
        be forwarded to this address before reaching its destination.
     ▫ External Route Tag: an optional flag, which occupies 4 bytes. It can be used
       for communication between ASBRs. In a typical scenario where each of two
       ASBRs imports an AS external route, the imported routes can be tagged
       differently to facilitate route filtering.
     ▫ Referenced Link State ID: occupies 4 bytes. This field is included if the
       Referenced LS Type field is not 0, indicating the link state ID of the
       referenced LSA.
• The fields in an OSPFv3 link-LSA are described as follows:
     ▫ Rtr Pri: indicates the router priority of the interface attaching the
       originating router to the link and occupies 1 byte.
     ▫ Options: indicates a collection of Options bits that the router sets in the
       network-LSA and occupies 3 bytes.
• After the network becomes stable, check the LSDB of R2. The command output
  shows information about the following types of LSAs: router-LSA (Type 1),
  network-LSA (Type 2), Link-LSA (Type 8), and intra-area-prefix-LSA (Type 9).
• The command output is described as follows:
     ▫ LS Seq Number: sequence number of the LSA. This field is carried in the LSA
       header.
• IS-IS single topology has the following disadvantages:
     ▫ To maintain the same topology, each interface must run both IS-IS (IPv4)
       and IS-IS (IPv6), which is not flexible.
     ▫ IPv4 areas cannot be used to connect different IPv6 areas. That is, IPv4
       networks cannot be used to address IPv6 network isolation.
• The IS-IS MT feature can overcome the disadvantages of IS-IS single topology.
• To support MT, IS-IS defines multiple types of TLVs, including Multi-Topology
  TLV, MT Intermediate Systems TLV, Multi-Topology Reachable IPv4 Prefixes TLV,
  and Multi-Topology Reachable IPv6 Prefixes TLV. This course focuses on the
  Multi-Topology TLV and does not elaborate on the other ones.
• Multi-Topology TLV:
     ▫ This TLV is contained only in IIH PDUs and fragment zero LSPs.
     ▫ Reserved MT IDs:
     ▫ ipv6: sets the topology type to IPv6. That is, the IPv6 capability for the IS-IS
       process is enabled in an IPv6 topology. Links on the network can be
       configured as IPv4 or IPv6 links. SPF calculation is performed separately in
       IPv4 and IPv6 topologies.
     ▫ standard: sets the topology type to standard. That is, the IPv6 capability for
       the IS-IS process is enabled in an integrated topology. A network
       administrator must ensure that all links on the network support the same
       topology type. By default, the standard type is used when the IPv6
       capability is enabled for an IS-IS process.
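• For illustration, a hedged IS-IS IPv6 multi-topology configuration sketch
  (hypothetical process ID, NET, interface, and addresses):
     ▫ [R1] isis 1
     ▫ [R1-isis-1] network-entity 49.0001.0000.0000.0001.00
     ▫ [R1-isis-1] ipv6 enable topology ipv6
     ▫ [R1] interface GigabitEthernet0/0/1
     ▫ [R1-GigabitEthernet0/0/1] ipv6 enable
     ▫ [R1-GigabitEthernet0/0/1] ipv6 address 2001:DB8:12::1 64
     ▫ [R1-GigabitEthernet0/0/1] isis enable 1
     ▫ [R1-GigabitEthernet0/0/1] isis ipv6 enable 1
     ▫ With topology ipv6, IPv4 and IPv6 SPF calculations are performed separately;
       omitting the keyword enables the standard (integrated) topology instead.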
• To support IPv6, BGP uses the multiprotocol extension attributes, such as
  MP_REACH_NLRI and MP_UNREACH_NLRI, to carry IPv6 routing information.
• Update message:
    ▫ An Update message can be used to advertise multiple routes with the same
      path attribute. These routes are stored in the NLRI attribute. An Update
      message can also carry multiple unreachable routes, which are stored in the
      Withdrawn Routes field, to instruct peers to withdraw these routes.
• Fields in the MP_REACH_NLRI attribute are described as follows:
     ▫ Length of Next Hop Network Address: indicates the length of the next-hop
       address and occupies 1 byte. Generally, the value is 16.
     ▫ Network Address of Next Hop: The length is variable and depends on the
       preceding field. Generally, the value is a global unicast address.
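• A hedged BGP4+ peer configuration sketch (hypothetical AS numbers, router ID,
  and peer address):
     ▫ [R1] bgp 100
     ▫ [R1-bgp] router-id 1.1.1.1
     ▫ [R1-bgp] peer 2001:DB8:12::2 as-number 200
     ▫ [R1-bgp] ipv6-family unicast
     ▫ [R1-bgp-af-ipv6] peer 2001:DB8:12::2 enable
     ▫ The IPv6 peer must be enabled manually in the IPv6 unicast address family
       view before routes can be exchanged.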
2. A
• Disadvantages of IPv4:
     ▫ The IPv4 address space is insufficient. IPv4 addresses are 32 bits long.
       Theoretically, 4.3 billion IPv4 addresses can be provided. However, the
       number of IPv4 addresses that are actually available cannot reach this
       number due to various address assignment reasons. Additionally, IPv4
        address resources are unevenly assigned. IPv4 addresses in the U.S. account for
       almost half of all addresses, leaving insufficient addresses for Europe and
       even fewer for the Asia-Pacific region. The shortage of IPv4 addresses limits
       further development of mobile IP and broadband technologies that require
       an increasing number of IP addresses.
     ▫ A large number of routing entries on devices need to be maintained. At
       the initial stage of IPv4 development, many discontinuous IPv4 addresses
        were assigned. As a result, routes cannot be effectively summarized. The
       increasingly large routing table consumes a lot of memory resources and
       affects forwarding efficiency. Device vendors must continually upgrade
       devices to improve route addressing and forwarding performance.
     ▫ Address autoconfiguration and readdressing cannot be easily
       performed. IPv4 addresses are only 32 bits long, and they are unevenly
       assigned. As a result, IP addresses need to be reassigned during network
       expansion or reconstruction, increasing the maintenance workload.
• Advantages of IPv6:
     ▫ Easy to deploy.
     ▫ Compatible with various applications.
     ▫ Smooth transition from IPv4 networks to IPv6 networks.
• The IPv6 header is also called the fixed header, which contains eight fields in 40
  octets. The fields are Version, Traffic Class, Flow Label, Payload Length, Next
  Header, Hop Limit, Source Address, and Destination Address.
     ▫ Version: This field indicates the version of IP and its value is 6. The length is
       4 bits.
     ▫ Traffic Class: This field indicates the class or priority of an IPv6 packet and
       its function is similar to that of the ToS field in an IPv4 header. The length
       is 8 bits.
     ▫ Flow Label: An IPv4 header does not contain this field. It is a new field. It is
       used by a source to label sequences of packets for which the label requests
       special handling by IPv6 routers. The length is 20 bits. Generally, the flow
       label, together with the source and destination IPv6 addresses, can
       determine a flow.
     ▫ Payload Length: This field indicates the length of the IPv6 payload. The
       payload refers to the extension header and upper-layer protocol data unit
       that follow the IPv6 header. This field is 16 bits long and can specify a
       maximum length of 65535 octets for the payload. If the payload size
       exceeds 65535 octets, the field is set to 0, and the Jumbo Payload option in
       the Hop-by-Hop Options header is used to express the actual payload
       length.
• Global unicast address (GUA): It is also known as an aggregatable GUA. This type
  of address is globally unique and is used by hosts that need to access the
  Internet. It is equivalent to a public IPv4 address.
• Unique local address (ULA): It is a private IPv6 address that can be used only on
  an intranet. This type of address cannot be routed on an IPv6 public network and
  therefore cannot be used to directly access a public network.
     ▫ Semantics: Different fields are defined and assigned with meanings such as
       service type and physical location, facilitating O&M and fault locating.
• Note: The VXLAN and SRv6 technologies will be described in detail in subsequent
  courses.
• Manual IPv6 over IPv4 tunnel:
     ▫ Devices at both ends of a tunnel must support the IPv4/IPv6 dual stack.
       Other devices only need to support a single protocol stack.
     ▫ Tunnel forwarding mechanism: Same as that of the manual IPv6 over IPv4
       tunnel.
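• For illustration, a hedged configuration sketch of a manual IPv6 over IPv4
  tunnel on one end (hypothetical interface numbers and addresses):
     ▫ [R1] interface Tunnel0/0/1
     ▫ [R1-Tunnel0/0/1] tunnel-protocol ipv6-ipv4
     ▫ [R1-Tunnel0/0/1] source 10.1.1.1
     ▫ [R1-Tunnel0/0/1] destination 10.1.1.2
     ▫ [R1-Tunnel0/0/1] ipv6 enable
     ▫ [R1-Tunnel0/0/1] ipv6 address 2001:DB8:FFFF::1 64
     ▫ The peer end is configured symmetrically, with the tunnel source and
       destination IPv4 addresses reversed.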
• The data forwarding process of an automatic IPv4-compatible IPv6 tunnel is as
  follows:
     ▫ When receiving an IPv6 packet, R1 searches for an IPv6 route destined for
       ::A01:102 and finds that the next hop of the route is a virtual tunnel
       interface.
     ▫ R1 encapsulates the IPv6 packet into an IPv4 packet. The source address of
       the IPv4 packet is the source IPv4 address 10.1.1.1 of the tunnel, and the
       destination IPv4 address is the last 32 bits (10.1.1.2) of the IPv4-compatible
       IPv6 address ::A01:102.
     ▫ R1 sends the resulting IPv4 packet out from its tunnel interface. Then, the
       packet is routed to the destination node R2 at 10.1.1.2 over the IPv4
       network. When receiving this packet, R2 decapsulates the packet to obtain
       the original IPv6 packet, and processes the IPv6 packet using the IPv6
       protocol stack.
• 6to4 tunnel:
           ▪ The first 48 bits (2002:a.b.c.d) are determined by the IPv4 address
            assigned to a router and cannot be changed.
     2. Upon receipt, 6PE2 changes the next hop of the IPv6 route to itself, and
        assigns an inner label to the IPv6 route. Then, 6PE2 sends the labeled IPv6
        route to its IBGP peer 6PE1.
     3. When receiving the labeled IPv6 route, 6PE1 recurses the route to a tunnel,
        and adds the route to the local forwarding table. Then, 6PE1 changes the
        next hop of the IPv6 route to itself, removes the label from the route, and
        sends the route to its EBGP peer CE1.
     1. CE1 sends an ordinary IPv6 packet to 6PE1 over an IPv6 link on the public
        network.
     2. Upon receipt of the IPv6 packet, 6PE1 looks up the destination address of
        the packet in its forwarding table, and encapsulates the packet with inner
        and outer labels. Then, 6PE1 sends the resulting IPv6 packet to 6PE2 over a
        public network tunnel.
     3. When receiving the IPv6 packet, 6PE2 removes the inner and outer labels
        and forwards the resulting IPv6 packet to CE2 based on the destination
        address over an IPv6 link.
• When 6PE routes are configured to share the same explicit null label on 6PE2,
  6PE2 advertises 6PE routes with an explicit null label to 6PE1 without applying
  for labels for the routes.
• When forwarding data to 6PE2, 6PE1 adds two labels to the data. The outer label
  is distributed by LDP pointing to 6PE2, and the inner label is an explicit null label
  distributed by MP-BGP.
• When the IPv6 data packet arrives at 6PE2, 6PE2 pops out the explicit null label
  and forwards the packet to CE2.
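• As an illustration, a hedged 6PE configuration sketch on a PE (hypothetical AS
  number and peer address; the IPv4/MPLS backbone and LDP are assumed to be
  configured already):
     ▫ [6PE1] bgp 100
     ▫ [6PE1-bgp] peer 2.2.2.2 as-number 100
     ▫ [6PE1-bgp] peer 2.2.2.2 connect-interface LoopBack0
     ▫ [6PE1-bgp] ipv6-family unicast
     ▫ [6PE1-bgp-af-ipv6] peer 2.2.2.2 enable
     ▫ [6PE1-bgp-af-ipv6] peer 2.2.2.2 label-route-capability
     ▫ The label-route-capability parameter enables the exchange of labeled IPv6
       routes with the IBGP peer, which is the essence of 6PE.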
• In 6VPE, IPv6 routing protocols run between PEs and CEs. The following IPv6
  routing protocols can be used to provide IPv6 VPN services:
▫ BGP4+
▫ OSPFv3
     1. IPv4 packet forwarding: Host 1 sends an IPv4 packet destined for Host 2 to
        R1.
     5. IPv4 packet forwarding: R2 searches its IPv4 routing table for an entry
        matching the destination address of the IPv4 packet and forwards the
         packet to Host 2.
• Scenario where IPv6 users access IPv4 servers:
     ▫ There are two NAT64 modes: static and dynamic. Static NAT64 is
       recommended when a small number of IPv6 users use fixed IP addresses,
       while dynamic NAT64 is recommended when a large number of IPv6 users
       use unfixed IP addresses.
     2. After receiving the request, the DNS server parses the request to obtain the
        IPv4 address (2.1.1.10) corresponding to the domain name, and sends a
        reply containing the IPv4 address to the user. In this scenario, the mapping
        between the domain name and IPv4 address has been predefined on the
         DNS server. If no IPv4 address corresponding to the domain name is found,
         the request packet is discarded.
     3. After receiving the DNS reply, the user sends a packet with the obtained
        IPv4 address as the destination address to the remote server.
     4. Upon receipt of the IPv4 packet, the NAT64 device translates the
        destination IPv4 address into an IPv6 address (2001:DB8::2) according to
        the preconfigured static address mapping (based on which a server
        mapping table is generated), combines the source IPv4 address and the
        preconfigured NAT64 prefix into a source IPv6 address (64:FF9B::101:101),
        and converts the IPv4 packet into an IPv6 packet. The NAT64 device then
        sends the IPv6 packet to the remote server on the IPv6 network, and
        generates a session table.
5. Upon receipt of the IPv6 packet, the server returns a reply packet.
     6. After receiving the reply packet from the IPv6 server, the NAT64 device
        converts the IPv6 packet into an IPv4 packet according to the session table,
        and sends the IPv4 packet to the IPv4 user.
• NAT64 prefix configuration:
     ▫ A NAT64 prefix must be different from the IPv6 address prefix of any
       interface on the device. Otherwise, the device considers the packets whose
       destination IPv6 addresses are on the same network segment as the
       interfaces to be NAT64 packets, and starts NAT64 processing for these
       packets.
     ▫ When multiple NAT64 prefixes are configured and dynamic NAT64 is used,
       all of these NAT64 prefixes can be used for NAT64 translation of IPv6-to-
       IPv4 traffic. On the other hand, if static NAT64 is used, the device randomly
       selects one of these prefixes.
     ▫ global: specifies the global 3-tuple NAT mode. The generated server
       mapping table does not contain security zone parameters and is not subject
       to restrictions of interzone relationships.
     ▫ local: specifies the local 3-tuple NAT mode. The generated server mapping
       table contains security zone parameters and is subject to restrictions of
       interzone relationships.
• Other configurations:
     ▫ Configure the IPv6 address, route, and DNS server for the PC. (The method
       of configuring an IPv6 address and a route for the PC varies according to
       the PC operating system. The detailed configuration procedure is not
       provided here.)
     ▫ Configure an IPv4 address for the server. (The configuration method varies
        according to the server operating system. The detailed configuration
        procedure is not provided here.)
           ▪ Set the IPv4 address of the server to 1.1.1.2/24, which is on the same
             network segment as GE1/0/2 of Firewall 1.
• IVI supports communication requests initiated by both IPv6 and IPv4 hosts.
• The following uses access from an IVI6 host to a global IPv4 host as an example:
     2. The IVI6 host sends an AAAA query request to the dual-stack IVI DNS
        server. This DNS server stores the IVI4 addresses of IVI servers and their
        corresponding IVI6 addresses. When receiving the AAAA query request, the
        IVI DNS server sends an AAAA query request to the target network. If no
        AAAA record exists, the IVI DNS server sends an A query request, converts
        the obtained A record into an AAAA record according to the IVI mapping
        rule, and returns the AAAA record to the IVI6 host.
     3. The IVI6 host sends a data packet. When receiving this data packet, the IVI
        gateway statelessly converts the packet into an IPv4 packet. During
        address translation, the IPv4 address embedded in the IVI6 address is
        extracted and used as the source address in the IPv4 header. During
        header encapsulation, the Stateless IP/ICMP Translation (SIIT) algorithm is
        used.
     4. The resulting IPv4 data packet is routed to the IPv4 network, thereby
         implementing access from the IVI6 host to the IPv4 host.
• IVI restrictions: The IPv6 addresses of hosts and servers must be planned and
  configured in compliance with the IVI format.
1. ABD
QoS Fundamentals
    Foreword
    ⚫      With the continuous development of networks, the network scale and the variety of traffic types keep
           increasing. As a result, Internet traffic increases sharply, network congestion occurs, the forwarding
           delay increases, and packets may even be lost. Service quality then deteriorates, or services even
           become unavailable. To deploy real-time and non-real-time services on the IP network, network
           congestion must be resolved. The commonly used solution is to increase the network bandwidth, but
           this is not ideal considering the network construction cost.
    ⚫      Quality of service (QoS) is introduced for this situation. With limited bandwidth, QoS uses "guaranteed"
           policies to manage network traffic and provides services of different priorities for different traffic.
    ⚫      This course describes QoS fundamentals.
    Objectives
    Contents
1. Introduction to QoS
6. Introduction to HQoS
 "Best-Effort" Traditional Network
  ⚫       When the IP network emerges, there is no QoS guarantee.
  ⚫       You only know that the packets have been sent out. Whether the packets can be received
          and when the packets can be received are unknown.
   (Figure: all packets receive undifferentiated, first-in-first-out (FIFO) treatment.)
   ⚫      QoS: QoS is designed to provide different service quality according to networking requirements.
          (Figure: services such as live streaming and video communication.)
• To support voice, video, and data services of different requirements, the network
  is required to distinguish different communication types before providing
  corresponding QoS.
      ▫ For example, real-time services such as Voice over IP (VoIP) demand shorter
        latency. A long latency for packet transmission is unacceptable. Email and
        the File Transfer Protocol (FTP) services are comparatively insensitive to the
        latency.
     ▫ The BE mode of traditional IP networks cannot identify and distinguish
       various communication types on the networks. This distinguishing capability
       is the premise for providing differentiated services. The BE mode cannot
       satisfy application requirements, so QoS is introduced.
• What is QoS?
QoS Service Models
   ⚫      QoS provides three service models: the Best-Effort (BE) model, the Integrated Services (IntServ)
          model, and the Differentiated Services (DiffServ) model.
 BE Model
  ⚫       An application can send any number of packets at any time.
  ⚫       The network then makes the best effort to transmit the packets.
   (Figure: undifferentiated FIFO treatment; no guarantee of performance in terms of delay and reliability.)
• The BE model is the simplest service model in which an application can send any
  number of packets at any time without obtaining approval or notifying the
  network.
• The network then makes the best effort to transmit the packets but provides no
  guarantee of performance in terms of delay and reliability.
• The BE model is the default service model for the Internet and applies to various
  network applications, such as the File Transfer Protocol (FTP) and email. It uses
  FIFO queues.
  IntServ Model
  ⚫    Before sending packets, an application needs to apply for specific services through signaling.
  ⚫    After receiving a resource request from an application, the network reserves resources for
       each information flow by exchanging RSVP signaling information.
   (Figure: an application such as live streaming or video communication signals "I require 1 Mbit/s
   bandwidth", and each node along the path reserves 1 Mbit/s of bandwidth. Drawback: complex
   implementation and waste of resources.)
   (Figure: in a DS domain, a DS edge node performs traffic classification and marking (1) and CoS
   mapping (2), and a DS node performs queue scheduling (3) for live streaming, video communication,
   and FTP traffic between a branch and the HQ.)
• Before sending a packet, the application does not need to notify the network to
  reserve resources for it. In the DiffServ model, the network does not need to
  maintain the status of each flow. Instead, it provides specific services based on
  precedence fields of packets (for example, the DS field in the IP header).
• The DiffServ model classifies network traffic into multiple classes for
  differentiated processing. To be specific, the DiffServ model implements traffic
  classification first and allocates different identifiers to different classes of packets.
  After a network node receives these packets, it simply identifies these identifiers
   and processes packets based on the actions corresponding to these identifiers.
• There is an analogy between the DiffServ model and train ticket service system. A
  train ticket marks the service that you book: soft sleeper, hard sleeper, hard seat,
  or no seat. You get on a train and enjoy the specific service marked in your ticket.
  On an IP network, an identifier is to a packet as a train ticket is to a passenger.
• In addition to traffic classification and marking, the DiffServ model provides the
  queuing mechanism. When network congestion occurs on a device, the device
  buffers packets in queues. The device sends the packets out of queues when
   network congestion is relieved.
 Common QoS Technologies (DiffServ Model)
   ⚫      Traffic limiting: Traffic policing and traffic shaping are used to monitor the rate of traffic
          entering the network and limit the usage of traffic and resources.
   ⚫      Congestion avoidance: It adjusts the network traffic to relieve network overload.
   ⚫      Congestion management: It adjusts the scheduling sequence of packets to meet the high QoS
          requirements of delay-sensitive services.
12 Huawei Confidential
• Rate limiting: Traffic policing and traffic shaping monitor the rate of traffic
  entering the network to limit the traffic and resource usage, providing better
  services for users.
[Figure: QoS data processing on a device — video, voice, and data packets from the inbound interface go through traffic classification, traffic policing (token bucket and CAR), re-marking, other processing, congestion avoidance (WRED), queues 0–N with scheduling (congestion management), and traffic shaping (GTS) before leaving the outbound interface.]
13 Huawei Confidential
                      ▫ Traffic policing: monitors the volume of specific data traffic that arrives at
                          network devices, and is usually applied to incoming traffic. When the traffic
                          volume exceeds the maximum value, traffic limiting or punishment
                          measures are taken to protect business interests and network resources of
                          service providers.
B. IntServ model
C. BE model
15 Huawei Confidential
1. ABC
     Section Summary
16   Huawei Confidential
     Contents
1. Introduction to QoS
6. Introduction to HQoS
17   Huawei Confidential
 QoS Data Processing
[Figure: QoS data processing — video, voice, and data packets from the inbound interface go through traffic classification, CAR, re-marking, other processing, WRED, queues 0–N, scheduling, and GTS before leaving the outbound interface.]
18 Huawei Confidential
[Figure: traffic marking — video, voice, and data packets arriving at the inbound interface are assigned different priorities through traffic classification and then placed into queues.]
19 Huawei Confidential
▪ Internal marking
     ▪ Sets the CoS and drop precedence of packets for internal processing on a device so that
       packets can be placed directly in specific queues.
[Figure: different packets carry different QoS priorities in the packet header; on the device, the priority is mapped to a service class (the CoS of the packet on the device) and a color (the drop priority of the packet on the device).]
21 Huawei Confidential
• Packets carry different types of precedence field depending on the network type.
  For example, packets carry the 802.1p value on a VLAN network, the EXP value
  on an MPLS network, and the DSCP value on an IP network. To provide
  differentiated services for different packets, the device maps the QoS priority of
  incoming packets to the scheduling precedence (also called service class) and
  drop precedence (also called color), and then performs congestion management
  based on the service class and congestion avoidance based on the color. Before
  forwarding packets out, the device maps the service class and color of the
  packets back to the QoS priority, which provides a basis for other devices to
  process the packets.
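• As a purely illustrative sketch of this two-way mapping (the table entries are example values, not a device default), the Python below maps an external DSCP to an internal (service class, color) pair at ingress and back to a DSCP at egress.

  # Ingress: external priority (here DSCP) -> (service class, color); entries are assumptions.
  DSCP_TO_INTERNAL = {
      46: ("EF", "green"),
      34: ("AF4", "green"),   # AF41
      36: ("AF4", "yellow"),  # AF42
      0:  ("BE", "green"),
  }
  # Egress: (service class, color) -> external priority (DSCP)
  INTERNAL_TO_DSCP = {v: k for k, v in DSCP_TO_INTERNAL.items()}

  def on_ingress(dscp):
      """Map the QoS priority of an incoming packet to a service class and color."""
      return DSCP_TO_INTERNAL.get(dscp, ("BE", "green"))

  def on_egress(service_class, color):
      """Map the internal priority back to a QoS priority before forwarding."""
      return INTERNAL_TO_DSCP.get((service_class, color), 0)

  print(on_ingress(34))             # ('AF4', 'green')
  print(on_egress("AF4", "green"))  # 34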
 External Priority: VLAN Packet
22 Huawei Confidential
• Eight service priorities (PRIs) are defined in the VLAN tag of the Ethernet frame
  header.
 External Priority: MPLS Packet
23 Huawei Confidential
• The EXP field in the label is used as the external priority of MPLS packets to
  differentiate service classes of data traffic.
 External Priority: IP Packet
[Figure: external priority of an IP packet — the DSCP field in the DS byte, 6 bits, value range 0–63.]
24 Huawei Confidential
• Eight IP service types are defined in the Precedence field of the ToS field in an
  IPv4 packet header.
• The ToS field in the IPv4 packet header is redefined as the Differentiated Services
  (DS) field. That is, the IP Precedence field is extended.
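• A small sketch of how the two fields relate: the IP precedence is the upper 3 bits of the ToS/DS byte, and the DSCP extends it to the upper 6 bits.

  # Extract DSCP (6 bits, 0-63) and IP precedence (3 bits, 0-7) from the ToS/DS byte.
  def dscp_of(tos_byte: int) -> int:
      return (tos_byte >> 2) & 0x3F

  def ip_precedence_of(tos_byte: int) -> int:
      return (tos_byte >> 5) & 0x07

  tos = 0xB8                                   # DS byte carrying EF traffic
  print(dscp_of(tos), ip_precedence_of(tos))   # 46 5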
Mapping Between External Priorities
   802.1p | MPLS EXP | IP precedence | DSCP range | Service class | DSCP keyword (value)
     5    |    5     |       5       |   40-47    |      EF       | EF (46)
     4    |    4     |       4       |   32-39    |      AF4      | AF41 (34), AF42 (36), AF43 (38)
     3    |    3     |       3       |   24-31    |      AF3      | AF31 (26), AF32 (28), AF33 (30)
     2    |    2     |       2       |   16-23    |      AF2      | AF21 (18), AF22 (20), AF23 (22)
     1    |    1     |       1       |    8-15    |      AF1      | AF11 (10), AF12 (12), AF13 (14)
     0    |    0     |       0       |    0-7     |      BE       | BE (0)
   25         Huawei Confidential
  Service Class
[Figure: in the uplink direction, BA classification maps the external priority (802.1p of VLAN packets, MPLS EXP of MPLS packets) to an internal service class and color; the service class (CS7, CS6, EF, AF4–AF1, BE) determines the queue that a packet enters.]
26 Huawei Confidential
• Service classes refer to the internal priorities of packets. Eight service class values
  are available: class selector 7 (CS7), CS6, expedited forwarding (EF), assured
  forwarding 4 (AF4), AF3, AF2, AF1, and best-effort (BE). Service classes determine
  the types of queues to which packets belong.
         ▫ If queues with eight service classes all use priority queuing (PQ) scheduling,
           queues are displayed in descending order of priority: CS7 > CS6 > EF > AF4
           > AF3 > AF2 > AF1 > BE.
         ▫ If the queues of eight service classes all use WFQ scheduling, their priorities
           are the same.
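• As a minimal illustration of the PQ case above (not a device implementation), the sketch below always serves the highest-priority non-empty queue, which is also why lower-priority queues can starve under PQ.

  from collections import deque

  # Strict-priority (PQ) scheduling over the eight service classes.
  PQ_ORDER = ["CS7", "CS6", "EF", "AF4", "AF3", "AF2", "AF1", "BE"]
  queues = {sc: deque() for sc in PQ_ORDER}

  def enqueue(service_class, packet):
      queues[service_class].append(packet)

  def dequeue():
      """Serve the highest-priority non-empty queue."""
      for sc in PQ_ORDER:
          if queues[sc]:
              return sc, queues[sc].popleft()
      return None

  enqueue("BE", "bulk-1")
  enqueue("EF", "voice-1")
  print(dequeue())  # ('EF', 'voice-1') -- EF is served before BE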
 Color
[Figure: in the uplink direction, BA classification maps the external priority (802.1p of VLAN packets, MPLS EXP of MPLS packets) to a service class and a color (for example, red); the color indicates the drop priority of the packet.]
27 Huawei Confidential
• Color, referring to the drop priority of packets on a device, determines the order
  in which packets in one queue are dropped when traffic congestion occurs.
• As defined by the Internet Engineering Task Force (IETF) for the single-rate and two-rate three
  color markers, the color of a packet can be green, yellow, or red.
• Drop priorities are compared based on the configured parameters. For example,
  if a maximum of 50% of the buffer is configured to store packets colored green,
  whereas a maximum of 100% of the buffer is configured to store packets colored
  red, the drop priority of packets colored green is higher than that of packets
  colored red.
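• The comparison above can be expressed in a couple of lines; the buffer size and per-color limits below simply restate the example in these notes (50% of the buffer for green, 100% for red) and are not recommended values.

  # Drop decision based on per-color buffer limits (example thresholds from the notes).
  BUFFER_SIZE = 1000  # packets (hypothetical)
  COLOR_LIMIT = {"green": 0.5 * BUFFER_SIZE, "red": 1.0 * BUFFER_SIZE}

  def should_drop(queue_depth: int, color: str) -> bool:
      """A packet is dropped once the queue depth reaches its color's limit."""
      return queue_depth >= COLOR_LIMIT[color]

  print(should_drop(600, "green"))  # True  -- with these limits, green is dropped first
  print(should_drop(600, "red"))    # False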
 Mapping
[Figure: in the uplink direction, the device maps external priorities (802.1p of VLAN packets, MPLS EXP of MPLS packets, DSCP of IP packets) to the internal service class and color; in the downlink direction, it maps the service class and color back to the external priorities (802.1p, MPLS EXP, DSCP). That is, mapping from external priorities to internal priorities on ingress, and from internal priorities to external priorities on egress.]
28 Huawei Confidential
• A device maps the QoS priority to the service class and color for incoming
  packets and maps the service class and color back to the QoS priority for
  outgoing packets.
 Multi-field Classification
[Figure: at the DS edge node, multi-field classification gives real-time services such as voice and video services the highest priority; live streaming and video communication traffic then crosses the DS domain.]
29 Huawei Confidential
[Figure: a data flow is matched against traffic matching rules (rule 1 OR rule 2), and the corresponding traffic modification rules are applied in sequence.]
    30   Huawei Confidential
 Traffic Classification Process
[Figure: video communication and FTP traffic between the branch and the HQ crosses the DS edge nodes and DS nodes of the DS domain; real-time services such as voice and video services are given the highest priority.]
31 Huawei Confidential
• Implementation: The DS edge node obtains service traffic such as voice and video
  traffic through MF classification and maps the traffic to the corresponding
  priorities. It processes the remaining traffic through BA classification.
Configuring MF Classification
[Figure: DS edge node and DS node in a DS domain.]
system-view
  traffic classifier [classifier-name]     //Create a traffic classifier.
    if-match [acl | vlan-id | …. ]     //Match traffic based on traffic characteristics.
    32    Huawei Confidential
Checking the MF Classification Configuration
⚫    After MF classification is configured, you can run the following commands to check the configuration.
         system-view
           display traffic classifier user-defined [ classifier-name ] //Check the traffic classifier configuration.
           display traffic behavior [ system-defined | user-defined ] [ behavior-name ] //Check the traffic behavior configuration.
           display traffic policy user-defined [ policy-name ] classifier [classifier-name ] //Check the traffic policy configuration.
           display traffic-policy applied-record [ policy-name ] //Check the record of the specified traffic policy.
    33      Huawei Confidential
(Optional) Modifying the BA Classification Configuration
[Figure: DS edge node and DS node in a DS domain.]
•    Based on the priority mapping table, BA classification maps data with the specific QoS field to
     the internal priority.
•    The priority mapping table can be modified as required. The roadmap is as follows:
⚫    Specify the packet priority trusted on an interface.
     system-view
       interface [interface-type interface-num]       //Enter the interface view.
         trust [8021p | dscp]     //Specify the priority to be trusted.
⚫    Configure a priority mapping table.
     system-view
       qos map-table [ dot1p-dot1p | dot1p-dscp | dot1p-lp | dscp-dot1p | dscp-dscp | dscp-lp ]  //Enter the priority mapping table view.
         input [input-value1] output [output-value]      //Configure mappings in the priority mapping table.
 34   Huawei Confidential
Checking the Priority Mapping Configuration
⚫    After the priority mapping configuration is modified, you can run the following commands to check the
     configuration.
         system-view
           display qos map-table [ dot1p-dot1p | dot1p-dscp | dot1p-lp | dscp-dot1p | dscp-dscp | dscp-lp ]
         //Check the mapping between priorities.
    35      Huawei Confidential
        Quiz
B. False
        2. (Multiple-answer question) Which of the following parameters are used to mark the QoS
            priority of data packets?(   )
            A. EXP
B. 802.1p
C. DSCP
D. IP precedence
36 Huawei Confidential
1. A
2. ABCD
     Section Summary
     ⚫   The DiffServ model must mark packets for differentiating them. Generally, MF
         classification is used to mark incoming traffic on edge devices of a DS domain, and
         BA classification is used to mark incoming traffic on devices inside the DS domain.
     ⚫   Tags can be added to multiple types of data packet headers.
            The PRI field (802.1p priority) in the VLAN header is used to mark the QoS priority.
            The EXP field in the MPLS header is used to mark the QoS priority.
            The ToS field (DSCP/IP precedence) in the IP header is used to mark the QoS priority.
37   Huawei Confidential
     Contents
1. Introduction to QoS
6. Introduction to HQoS
38   Huawei Confidential
[Figure: position of traffic limiting in the QoS data processing flow — traffic policing (CAR) acts on classified incoming traffic, and traffic shaping (GTS) acts on outgoing traffic after queue scheduling.]
39 Huawei Confidential
• This course describes two rate limiting technologies: traffic policing and traffic
  shaping.
• Traffic policing: If the traffic rate of a connection exceeds the specifications on an
  interface, traffic policing allows the interface to drop excess packets or re-mark
  the packet priority to protect network resources and protect carriers' profits. An
  example of this process is restricting the rate of HTTP packets to 50% of the
  network bandwidth.
• Traffic shaping: allows the traffic rate to match that on the downstream device.
  When traffic is transmitted from a high-speed link to a low-speed link or a traffic
  burst occurs, the inbound interface of the low-speed link is prone to severe data
  loss. To prevent this problem, traffic shaping must be configured on the
  outbound interface of the device connecting to the high-speed link.
[Figure: in the QoS data processing flow, both traffic policing (CAR) and traffic shaping (GTS) use a token bucket to measure traffic.]
40 Huawei Confidential
• Both traffic policing and traffic shaping use the token bucket technology.
                       ▫ Token bucket: A token bucket is used to check whether traffic meets packet
                         forwarding conditions.
 Single-Rate-Single-Bucket Mechanism
   •    Committed Information Rate (CIR):
            indicates the rate at which tokens are put into bucket C, in kbit/s.
   •    Committed burst size (CBS):
            indicates the maximum volume of burst traffic that bucket C allows before the rate of some
             traffic exceeds the CIR, that is, the capacity of bucket C. The value is expressed in bytes.
   •    The single-rate-single-bucket mechanism does not allow burst traffic. Only committed traffic
        is allowed.
   [Figure: tokens are put into bucket C at the CIR; when the bucket is full (capacity CBS), excess tokens overflow and are discarded. The initial number of tokens is Tc = CBS. For an arriving packet of size B: if B < Tc, the packet is marked green and forwarded by default, and Tc = Tc - B; otherwise, the packet is marked red and discarded by default, and Tc remains unchanged.]
41 Huawei Confidential
• When a packet arrives, the device compares the packet with the number of
  tokens in the bucket. If there are sufficient tokens, the packet is forwarded (one
  token is associated with 1-bit forwarding permission). If there are not enough
  tokens, the packet is discarded or buffered.
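• A runnable sketch of this single-bucket check (the CIR and CBS values are illustrative): tokens are refilled at the CIR up to the CBS, a packet is marked green and forwarded only if enough tokens are available, and otherwise it is marked red.

  import time

  class SingleRateSingleBucket:
      """Single token bucket: refilled at the CIR, capped at the CBS (bytes)."""

      def __init__(self, cir_kbps: int, cbs_bytes: int):
          self.rate = cir_kbps * 1000 / 8      # token fill rate in bytes per second
          self.cbs = cbs_bytes
          self.tc = cbs_bytes                  # initial tokens: Tc = CBS
          self.last = time.monotonic()

      def _refill(self):
          now = time.monotonic()
          self.tc = min(self.cbs, self.tc + (now - self.last) * self.rate)
          self.last = now

      def police(self, packet_bytes: int) -> str:
          """Return 'green' (forward) or 'red' (discard) for a packet of size B."""
          self._refill()
          if packet_bytes <= self.tc:
              self.tc -= packet_bytes
              return "green"
          return "red"                         # Tc remains unchanged

  bucket = SingleRateSingleBucket(cir_kbps=1000, cbs_bytes=10000)
  print([bucket.police(1500) for _ in range(8)])  # the trailing packets turn red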
 Single-Rate-Two-Bucket Mechanism
   •    CIR:
            Indicates the rate at which tokens are put into bucket C, in kbit/s.
   •    CBS:
            Indicates the maximum volume of burst traffic that bucket C allows before the rate of some
             traffic exceeds the CIR, that is, the capacity of bucket C. The value is expressed in bytes.
   [Figure: tokens are put into bucket C at the CIR; tokens overflowing bucket C are put into bucket E. The capacities of buckets C and E are the CBS and EBS. Initial numbers of tokens: bucket C Tc = CBS, bucket E Te = EBS.]
42 Huawei Confidential
• When a packet arrives, the device compares the packet with the number of
  tokens in the bucket. If there are sufficient tokens, the packet is forwarded (one
  token is associated with 1-bit forwarding permission). If there are not enough
  tokens, the packet is discarded or buffered.
            ▫ If B is less than or equal to Tc, the packet is marked green and Tc decreases
              by B.
            ▫ If B is greater than Tc and less than or equal to Te, the packet is marked
              yellow and Te decreases by B.
            ▫ If B is greater than Te, the packet is marked red, and Tc and Te remain
              unchanged.
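• The same marking logic for the single-rate-two-bucket case can be sketched as follows; token refill is omitted for brevity, and the initial counts Tc = CBS and Te = EBS are taken as given.

  class SingleRateTwoBucket:
      """Color a packet of size B against buckets C and E (refill omitted)."""

      def __init__(self, cbs_bytes: int, ebs_bytes: int):
          self.tc = cbs_bytes   # tokens in bucket C
          self.te = ebs_bytes   # tokens in bucket E (filled by bucket C overflow)

      def mark(self, b: int) -> str:
          if b <= self.tc:      # enough committed tokens
              self.tc -= b
              return "green"
          if b <= self.te:      # burst absorbed by bucket E
              self.te -= b
              return "yellow"
          return "red"          # Tc and Te remain unchanged

  m = SingleRateTwoBucket(cbs_bytes=3000, ebs_bytes=3000)
  print([m.mark(1500) for _ in range(5)])  # green, green, yellow, yellow, red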
 Two-Rate-Two-Bucket Mechanism
   •    Peak Information Rate (PIR):
            Indicates the rate at which tokens are put into bucket P, that is, the maximum traffic rate
             that bucket P allows. The PIR is greater than the CIR. The value is expressed in kbit/s.
   •    Peak burst size (PBS):
            Indicates the capacity of bucket P, that is, the maximum volume of burst traffic that
             bucket P allows. The PBS is greater than the CBS. The value is expressed in bytes.
   •    CIR:
            Indicates the rate at which tokens are put into bucket C, in kbit/s.
   •    CBS:
            Indicates the maximum volume of burst traffic that bucket C allows before the rate of some
             traffic exceeds the CIR, that is, the capacity of bucket C. The value is expressed in bytes.
   •    The two-rate-two-bucket mechanism allows long-term burst traffic.
   [Figure: tokens are put into bucket P (capacity PBS) at the PIR and into bucket C (capacity CBS) at the CIR; excess tokens overflow and are discarded. Initial numbers of tokens: Tp = PBS, Tc = CBS. For an arriving packet of size B: if B > Tp, the packet is marked red and discarded by default (Tc and Tp remain unchanged); if Tp > B > Tc, the packet is marked yellow and forwarded by default (Tp = Tp - B); otherwise, the packet is marked green and forwarded by default (Tc = Tc - B).]
43 Huawei Confidential
• The two rate three color marker (trTCM) algorithm focuses on the traffic burst
  rate and checks whether the traffic rate conforms to the specifications. Therefore,
  traffic is measured based on bucket P and then bucket C.
        ▫ If B is greater than Tp, the packet is marked red and Tc and Tp remain
          unchanged.
        ▫ If B is greater than Tc and less than or equal to Tp, the packet is marked
          yellow and Tp decreases by B.
        ▫ If B is less than or equal to Tc, the packet is marked green, and Tp and Tc
          decrease by B.
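• A matching sketch of the two-rate-two-bucket check described in these notes (bucket P is checked first, then bucket C); refill at the PIR and CIR is omitted, and the initial counts Tp = PBS and Tc = CBS are taken as given.

  class TwoRateTwoBucket:
      """Color a packet of size B against buckets P and C (refill omitted)."""

      def __init__(self, pbs_bytes: int, cbs_bytes: int):
          self.tp = pbs_bytes   # tokens in bucket P (filled at the PIR)
          self.tc = cbs_bytes   # tokens in bucket C (filled at the CIR)

      def mark(self, b: int) -> str:
          if b > self.tp:       # exceeds even the peak allowance
              return "red"      # Tp and Tc remain unchanged
          if b > self.tc:       # within the peak but above the committed allowance
              self.tp -= b
              return "yellow"
          self.tp -= b          # within the committed allowance
          self.tc -= b
          return "green"

  m = TwoRateTwoBucket(pbs_bytes=6000, cbs_bytes=3000)
  print([m.mark(1500) for _ in range(5)])  # green, green, yellow, yellow, red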
44 Huawei Confidential
• In the figure:
         ▫ An edge network device connects a wide area network (WAN) and a local
           area network (LAN). The LAN bandwidth (100 Mbit/s) is higher than the
           WAN bandwidth (2 Mbit/s).
         ▫ When a LAN user attempts to send a large amount of data to a WAN, the
           edge network device is prone to traffic congestion. Traffic policing can be
           configured on the edge network device to restrict the traffic rate,
           preventing traffic congestion.
         ▫ Traffic policing drops excess traffic exceeding the specifications or re-marks such traffic
           with a lower priority.
 CAR
  ⚫    CAR uses token buckets to measure traffic and determines whether a packet conforms to the specification.
   [Figure: arriving packets go through traffic classification; packets that do not match rules are forwarded at the original rate (traffic policing is not required), while packets that match rules are measured against the token bucket and then forwarded, re-marked, or discarded depending on whether they are compliant.]
   •   Token bucket modes:
       1. Single-rate-single-bucket
       2. Single-rate-two-bucket
       3. Two-rate-two-bucket
   •   The device marks the packet red, yellow, or green based on the metering result using the token bucket.
       1. Green indicates that the packets comply with the specifications and are directly forwarded.
       2. Yellow indicates that temporary burst traffic is allowed although it does not comply with
          specifications. After the traffic is re-marked, the priority is reduced and the traffic is
          forwarded in BE mode.
       3. Red indicates that the packet rate is high and does not comply with the specifications.
          Therefore, the packets are discarded.
45 Huawei Confidential
• Traffic policing uses CAR to control traffic. CAR uses token buckets to measure
  traffic and determines whether a packet conforms to the specification.
           ▫ Rate limiting: Only packets allocated enough tokens are allowed to pass so
             that the traffic rate is restricted.
• CAR process:
           ▫ When a packet arrives, the device matches the packet against matching
             rules. If the packet matches a rule, the device uses token buckets to meter
             the traffic rate.
           ▫ The device marks the packet red, yellow, or green based on the metering
             result using the token bucket. Red indicates that the traffic rate exceeds the
             specifications. Yellow indicates that the traffic rate exceeds the
             specifications but is within an allowed range. Green indicates that the
              traffic rate conforms to the specifications.
           ▫ The device drops packets marked red, re-marks and forwards packets
               marked yellow, and forwards packets marked green.
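• The action taken on each color in the last step can be sketched as follows; re-marking yellow packets down to BE (DSCP 0) is an assumed example of lowering the priority, not a fixed rule.

  def car_action(color: str, packet: dict):
      """Apply the CAR result: forward green, re-mark and forward yellow, drop red."""
      if color == "green":
          return "forward", packet
      if color == "yellow":
          return "forward", dict(packet, dscp=0)   # assumed: lower the priority to BE
      return "drop", None                          # red

  print(car_action("yellow", {"dscp": 34}))        # ('forward', {'dscp': 0})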
47 Huawei Confidential
• Voice, video, and data services are transmitted on an enterprise network. When a
  large amount of traffic enters the network, congestion may occur due to
  insufficient bandwidth. Different guaranteed bandwidth must be provided for the
  voice, video, and data services in descending order of priority. In this situation,
  traffic policing can be configured to provide the highest guaranteed bandwidth
  for voice packets and lowest guaranteed bandwidth for data packets. This
  configuration ensures preferential transmission of voice packets during
  congestion.
[Figure: DS edge node and DS node in a DS domain.]
•    Typically, traffic policing is performed in the inbound direction of a device. Traffic policing can
     be deployed on the terminal side or in the inbound direction of an egress device as required.
     Traffic policing can be configured based on interfaces or MQC.
•    The configuration roadmap of interface-based traffic policing is as follows:
     ▫   Set the maximum bandwidth for traffic policing on an interface, select the traffic to be
         policed, and adjust the behavior to be taken on the excess traffic.
system-view
  interface [interface-type interface-num]         //Enter the interface view.
    qos car [ inbound | outbound ] [ acl acl-number | destination-ip-address | source-ip-address ] cir [cir-value] [ pir pir-value ] [ cbs cbs-value pbs pbs-value ] //Configure traffic policing for specific traffic in the inbound or outbound direction of an interface. The CIR must be configured. The CIR indicates the maximum committed rate of traffic policing. If the PIR is not configured, it is equal to the CIR. In this case, the traffic rate cannot be higher than the CIR.
    48    Huawei Confidential
    49    Huawei Confidential
⚫    After MQC-based traffic policing is configured, you can run the following commands to check
     the configuration.
         system-view
           display traffic classifier user-defined [ classifier-name ] //Check the traffic classifier configuration.
           display traffic behavior [ system-defined | user-defined ] [ behavior-name ] //Check the traffic behavior configuration.
           display traffic policy user-defined [ policy-name ] classifier [classifier-name ] //Check the traffic policy configuration.
           display traffic-policy applied-record [ policy-name ] //Check the record of the specified traffic policy.
    50      Huawei Confidential
51 Huawei Confidential
         ▫ When packets are sent at a high speed, they are cached and then evenly
           sent through the token bucket.
         ▫ Traffic shaping consumes memory resources for buffering excess traffic and introduces
           delay and jitter.
[Figure: queue-based traffic shaping — when packets leave a queue through scheduling, packets that do not require shaping are forwarded directly; packets that require shaping are measured against a token bucket. Compliant packets leaving the queues are still forwarded; when packets in a queue are transmitted at a rate exceeding the specifications, the queue is marked unscheduled and will be scheduled when the bandwidth is available.]
52 Huawei Confidential
• When packets leave queues, the packets that do not need to be shaped are
  forwarded. The packets that need to be shaped are measured against token
  buckets.
        ▫ If the packet rate conforms to the rate limit, the packet is marked green
          and forwarded.
        ▫ If the rate of a data packet exceeds the threshold, the data packet is still
          forwarded. In this case, the status of the queue where the data packet is
          located is changed to unscheduled, and the queue is scheduled when the
          token bucket is filled with new tokens. After the queue is marked
          unscheduled, more packets can be put into the queue, but excess packets
            over the queue capacity are dropped. Therefore, traffic shaping allows
            traffic to be sent at an even rate but does not provide zero-packet-loss
            guarantee.
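• A simplified sketch of this buffer-and-pace behavior (the rate, burst size, and queue length are illustrative, not a device implementation): packets wait in a bounded queue and leave only when the token bucket, refilled at the CIR, has enough tokens; packets arriving at a full queue are dropped.

  import time
  from collections import deque

  class Shaper:
      """Buffer-and-pace shaping: hold packets until tokens (CIR) allow sending."""

      def __init__(self, cir_kbps: int, cbs_bytes: int, queue_len: int):
          self.rate = cir_kbps * 1000 / 8       # bytes per second
          self.cbs = cbs_bytes
          self.tokens = cbs_bytes
          self.last = time.monotonic()
          self.queue = deque()
          self.queue_len = queue_len

      def enqueue(self, packet_bytes: int) -> bool:
          """Return False if the queue is full and the packet must be dropped."""
          if len(self.queue) >= self.queue_len:
              return False                      # shaping is not loss-free
          self.queue.append(packet_bytes)
          return True

      def send_ready(self):
          """Release the queued packets for which there are enough tokens."""
          now = time.monotonic()
          self.tokens = min(self.cbs, self.tokens + (now - self.last) * self.rate)
          self.last = now
          sent = []
          while self.queue and self.queue[0] <= self.tokens:
              self.tokens -= self.queue[0]
              sent.append(self.queue.popleft())
          return sent

  s = Shaper(cir_kbps=2000, cbs_bytes=4000, queue_len=64)
  for _ in range(5):
      s.enqueue(1500)
  print(s.send_ready())   # the first packets leave now; the rest wait for tokens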
[Figure: interface-based traffic shaping — packets leaving all queues on an interface are scheduled and measured together against one token bucket before being sent out of the interface.]
53 Huawei Confidential
• When packets leave queues, all queues are measured together against token
  buckets.
        ▫ If the packet rate conforms to the rate limit, the packet is marked green
          and forwarded.
        ▫ If the packet rate exceeds the threshold (that is, tokens in the token bucket
          are insufficient), the packet is marked red. In this case, the interface stops
          scheduling and continues to schedule the packets when there are sufficient
          tokens.
⚫    If all branches connect to the Internet at the same time, a large amount of web traffic sent from the headquarters to the Internet causes network
     congestion. As a result, some web traffic is discarded. As shown in the figure, to prevent web traffic loss, traffic shaping can be configured before traffic
     sent from enterprise branches enters the enterprise headquarters.
[Figure: Branch 1 and Branch 2 connect across an ISP network to the HQ, which connects to the Internet; traffic shaping is configured in the outbound direction of an interface along the traffic direction toward the HQ.]
                   • Traffic shaping is generally used in the outbound direction of an interface and is mainly used to limit the traffic rate. It
                      is recommended for packet loss-sensitive traffic (such as Internet access and service download).
    54    Huawei Confidential
Configuring Interface-based Traffic Shaping
[Figure: DS edge node and DS node in a DS domain.]
•    Traffic shaping can be configured only in the outbound direction of a device. It falls into
     interface-based, queue-based, and MQC-based traffic shaping.
⚫    Configure interface-based traffic shaping.
     system-view
       interface [interface-type interface-num]      //Enter the interface view.
         qos gts cir [cir-value] [ cbs cbs-value ]  //Configure traffic shaping in the outbound direction of an interface. The CIR indicates the maximum traffic shaping rate. You can configure the CBS as required to control the size of the token bucket. The CIR must be configured.
    55    Huawei Confidential
Configuring Queue-based Traffic Shaping
[Figure: DS edge node and DS node in a DS domain.]
•    To shape packets in each queue on an interface, configure a queue profile and apply it to the
     interface.
•    You can set different traffic shaping parameters for queues with different priorities to provide
     differentiated services. The configuration roadmap is as follows:
     ▫   Create a queue profile.
⚫    Create a queue profile and configure queue shaping.
     system-view
       interface [interface-type interface-num]      //Enter the interface view.
         qos queue-profile [queue-profile-name] //Create a queue profile.
           queue [start-queue-index] to [end-queue-index ] gts cir [cir-value] [ cbs cbs-value ]   //Configure traffic shaping for a specified queue in the outbound direction and set the CIR.
⚫    Apply the queue profile to an interface.
     system-view
       interface [interface-type interface-num] //Enter the interface view.
         qos queue-profile [queue-profile-name]   //Apply the queue profile to the interface.
    56    Huawei Confidential
Configuring MQC-based Traffic Shaping
[Figure: DS edge node and DS node in a DS domain.]
•    MQC-based traffic shaping uses traffic classifiers to implement differentiated services.
•    The configuration roadmap is as follows:
     ▫   Configure a traffic classifier to match traffic.
     ▫   Configure a traffic behavior to define an action for packets.
     ▫   Bind the traffic classifier and traffic behavior to a traffic policy.
     ▫   Apply the traffic policy to an interface in the outbound direction.
system-view
  traffic classifier [classifier-name]     //Create a traffic classifier.
    if-match [acl | vlan-id | …. ]    //Match traffic based on traffic characteristics.
system-view
  traffic behavior [behavior-name]           //Create a traffic behavior.
    gts cir [cir-value] | pct [pct-value] //Configure traffic shaping based on the maximum traffic rate or the percentage of the occupied interface bandwidth.
system-view
  traffic policy [policy-name] //Create a traffic policy.
    classifier [classifier-name] behavior [behavior-name]   //Bind the traffic classifier to the traffic behavior.
system-view
  interface [interface-type interface-num]       //Enter the interface view.
    traffic-policy [policy-name] [inbound | outbound]        //Apply the traffic policy to the interface in the outbound direction.
    57    Huawei Confidential
Checking the Traffic Shaping Configuration
⚫    After queue-based traffic shaping is configured, you can run the following commands to check the
     configuration.
         system-view
           display qos queue-profile [ queue-profile-name ] //Check the queue profile configuration.
⚫    After MQC-based traffic shaping is configured, you can run the following commands to check the
     configuration.
         system-view
           display traffic classifier user-defined [ classifier-name ] //Check the traffic classifier configuration.
           display traffic behavior [ system-defined | user-defined ] [ behavior-name ] //Check the traffic behavior configuration.
           display traffic policy user-defined [ policy-name ] classifier [classifier-name ] //Check the traffic policy configuration.
           display traffic-policy applied-record [ policy-name ] //Check the record of the specified traffic policy.
    58      Huawei Confidential
        Quiz
        1. (True or false) Traffic shaping caches excess traffic by default, and traffic policing discards
            excess traffic by default.(    )
           A. True
B. False
        2. (Multiple-answer question) Which of the following token bucket modes are used to measure
            traffic?(         )
            A. Single-rate-single-bucket
B. Three-rate-two-bucket
C. Single-rate-two-bucket
D. Two-rate-two-bucket
59 Huawei Confidential
1. A
2. ACD
     Section Summary
     ⚫      There are two traffic limiting technologies: traffic policing and traffic shaping.
     ⚫      Traffic policing discards excess traffic by default. It can be deployed in inbound and outbound
            directions of a device.
     ⚫      Traffic shaping caches excess traffic by default. It can be deployed only in the outbound direction of a
            device.
     ⚫      The device uses token buckets to measure traffic. There are three modes of token buckets:
               The single-rate-single-bucket mechanism can be used together with traffic policing and traffic shaping.
               The single-rate-two-bucket mechanism can be used only with traffic policing, and is mainly used in scenarios where
                burst traffic occurs occasionally.
                The two-rate-two-bucket mechanism can be used only with traffic policing, and is mainly used in scenarios with long-term
                burst traffic.
60       Huawei Confidential
     Contents
1. Introduction to QoS
6. Introduction to HQoS
61   Huawei Confidential
 QoS Data Processing
[Figure: QoS data processing flow — video, voice, and data packets are classified, policed (token bucket and CAR), re-marked, processed further, handled by WRED (congestion avoidance), and placed into queues 0–N for scheduling before leaving the outbound interface; the callout asks "What is congestion?"]
62 Huawei Confidential
[Figure: causes of congestion — bandwidth mismatch (traffic from a 100 Mbit/s link entering a 10 Mbit/s link) and traffic aggregation (multiple 10 Mbit/s data flows converging on one link) create congestion points.]
63 Huawei Confidential
• Traffic congestion occurs when multiple users compete for the same resources
  (such as the bandwidth and buffer) on the shared network.
        ▫ For example, a user on a LAN sends data to a user on another LAN through
          a WAN. The WAN bandwidth is lower than the LAN bandwidth. Therefore,
          data cannot be transmitted at the same rate on the WAN as that on the
          LAN. Traffic congestion occurs on the router connecting the LAN and WAN.
64 Huawei Confidential
• Impact of congestion:
• Solutions:
        ▫ The solutions need to make full use of network resources on the premise of
          meeting users' requirements for service quality. Congestion management
          and congestion avoidance are commonly used to relieve traffic congestion.
[Figure: congestion avoidance applies drop policies to queues 1–N.]
Drop policies:
1.   Tail drop: traditional processing
2.   Random Early Detection (RED)
3.   Weighted Random Early Detection (WRED)
    65       Huawei Confidential
 Policy 1: Tail Drop
  ⚫    When the length of a queue reaches the maximum value, the device enabled with tail drop discards all
       new packets buffered at the tail of the queue.
66 Huawei Confidential
• Due to the limited length of each queue, when a queue is full, the traditional
  processing method discards all the packets sent to the queue until the congestion
  is relieved. This processing method is called tail drop.
 Disadvantage 1: Global TCP Synchronization
  ⚫    When the length of a queue reaches the maximum value, the device enabled with tail drop discards all
       new packets buffered at the tail of the queue.
Problem: Global TCP synchronization
[Figure: traffic over time — the traffic of the TCP connections repeatedly climbs toward the maximum value, is cut back when tail drop occurs, and ramps up again in step.]
•   Process:
    1.   TCP starts.
    2.   Traffic is too heavy. As a result, the queue is full and tail drop occurs.
    3.   The TCP ACK packet returned by the server is discarded due to congestion. Therefore, the
         sender does not receive the TCP ACK packet and considers that the network is congested. In
         this case, the TCP sliding window size is reduced, and the overall traffic is also reduced.
    4.   At this time, network congestion is eliminated, and the sender can receive the TCP ACK
         packet. Therefore, the sender considers that the network is not congested, and enters the
         TCP slow start process. This process is repeated.
67 Huawei Confidential
• As shown in the figure, three colors indicate three TCP connections.
           ▫ In tail drop mechanism, all newly arrived packets are dropped when
             congestion occurs, causing all TCP sessions to simultaneously enter the slow
             start state and the packet transmission to slow down.
[Figure: the second problem of tail drop — undifferentiated drop.]
• Tail drop cannot differentiate services; it discards traffic of all services in the same way.
 Policy 2: RED
  ⚫    Random early detection (RED) randomly discards data packets.
[Figure: RED drop probability curve — no drop below the lower threshold, random drop with an increasing probability between the thresholds, and tail drop (100% drop probability) above the upper threshold.]
   •    Process:
        1.   When the queue length is less than the lower threshold, no packets are discarded.
• RED defines upper and lower thresholds for the length of each queue:
            ▫ When the queue length is less than the lower threshold, no packets are
              discarded.
            ▫ When the queue length is between the lower and upper thresholds, newly
              arriving packets are discarded randomly, with a drop probability that
              increases as the queue length grows.
            ▫ When the queue length is greater than the upper threshold, all newly
              arriving packets are discarded.
[Figure: traffic over time with RED, relative to the maximum queue length; global TCP synchronization may still occur.]
   •    Symptom: Global TCP synchronization may still occur, but the link usage is greatly increased.
   •    Disadvantage: RED cannot distinguish traffic.
• RED is used to avoid the global TCP synchronization that occurs with tail drop. It
  does this by randomly discarding packets so that the transmission speed of
  multiple TCP connections is not reduced simultaneously, and the TCP sliding
  window sizes of different connections are not adjusted at the same time. This
  results in more stable rates of TCP traffic and other network traffic.
 Policy 3: WRED
  ⚫    Weighted Random Early Detection (WRED) sets different drop policies for data packets or
       queues with different priorities to discard different types of traffic.
[Figure: WRED drop probability curves over the actual queue length, with a configurable maximum drop probability. IP precedence is used as an example: traffic with IP precedence 0 is randomly dropped between thresholds 20 and 40, and traffic with IP precedence 2 between thresholds 35 and 40.]
   •    Example:
        1.   The lower threshold is 20 and the upper threshold is 40 for the traffic whose IP precedence is 0.
        2.   The lower threshold is 35 and the upper threshold is 40 for the traffic whose IP precedence is 2.
             The traffic whose IP precedence is 2 is therefore discarded later than the traffic whose IP
             precedence is 0.
   •    Advantages:
        1.   The TCP sliding window sizes of different connections are not adjusted simultaneously, which
             avoids global TCP synchronization.
        2.   Different traffic is discarded based on weights.
• Color:
        ▫ The color of packets determines the order in which packets are dropped in
          a congested queue.
• Application:
        ▫ It is recommended that the WRED lower thresholds start from 50% of the
          queue length and vary with the drop priority: the lowest drop probability and
          the highest lower and upper thresholds for green packets; a medium drop
          probability and medium lower and upper thresholds for yellow packets; the
          highest drop probability and the lowest lower and upper thresholds for red
          packets.
        ▫ When traffic congestion aggravates, red packets are dropped first because
          they have the smallest lower threshold and the highest drop probability. As
          the queue length increases, yellow packets are dropped, and green packets
          are dropped last. If the queue length reaches the upper threshold for
          red/yellow/green packets, the corresponding packets start to be tail dropped.
 Curve of the WRED Drop Probability
[Figure: WRED drop probability curves by color — the red, yellow, and green lower and upper thresholds increase in that order along the actual queue length axis, up to the maximum queue length.]
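• The color-based recommendations above can be approximated with the dscp-based WRED
  commands shown on the following configuration slides. A minimal sketch under stated
  assumptions: the drop profile name and threshold values are illustrative, and green, yellow,
  and red packets are represented here by the AF21, AF22, and AF23 DSCP values (18, 20, and 22).

     system-view
       drop-profile dp-color      //Hypothetical drop profile name.
         wred dscp      //WRED based on DSCP priorities.
         dscp 18 low-limit 80 high-limit 100 discard-percentage 10      //"Green" (AF21): highest thresholds, lowest drop probability.
         dscp 20 low-limit 65 high-limit 85 discard-percentage 20      //"Yellow" (AF22): medium thresholds and drop probability.
         dscp 22 low-limit 50 high-limit 70 discard-percentage 40      //"Red" (AF23): lowest thresholds, highest drop probability.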
 Application of Congestion Avoidance
[Figure: video and data flows travel from the LAN across the WAN; congestion avoidance is configured in the outbound direction of the interface on the edge device between the LAN and WAN.]
• Example:
        ▫ Users in different LANs may upload data to the same server, so data
          exchanged between users and the server passes the WAN. Because WAN
          bandwidth is lower than LAN bandwidth, congestion may occur on the
          edge device between the WAN and LANs. Congestion avoidance can be
          configured on the edge device to discard low-priority packets such as data
          packets, reducing network overload and ensuring forwarding of high-
          priority services.
Configuring Queue-based WRED
•    The device supports WRED based on DSCP priorities or IP priorities. The
     configuration roadmap is as follows:
     ▫   Configure a drop profile.
     ▫   Configure WRED parameters.
     ▫   Reference the drop profile to a queue profile.
     ▫   Apply the queue profile to the outbound direction of the interface.

     system-view
       drop-profile [drop-profile-name]      //Create a drop profile.
         wred [dscp | ip-precedence]      //Configure a WRED drop profile based on DSCP or IP priorities.
         dscp [dscp-value] low-limit [low-limit-percentage] high-limit [high-limit-percentage] discard-percentage [discard-percentage]      //Configure WRED parameters based on DSCP priorities.
         ip-precedence [ip-precedence-value] low-limit [low-limit-percentage] high-limit [high-limit-percentage] discard-percentage [discard-percentage]      //(Optional) Configure WRED parameters based on IP priorities.
       qos queue-profile [queue-profile-name]      //Enter the queue profile view.
         queue [queue-index] drop-profile [drop-profile-name]      //Bind the drop profile to the specified queue in the queue profile.
       interface [interface-type interface-num]      //Enter the interface view.
         qos queue-profile [queue-profile-name]      //Apply the queue profile to the interface.
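• For example, a minimal worked configuration following this roadmap (the profile names,
  queue index, and interface are assumptions; the thresholds reuse the IP precedence values
  from the WRED example earlier in this section):

     system-view
       drop-profile dp-wred      //Hypothetical drop profile name.
         wred ip-precedence      //WRED based on IP precedence.
         ip-precedence 0 low-limit 20 high-limit 40 discard-percentage 30      //IP precedence 0 is dropped earlier.
         ip-precedence 2 low-limit 35 high-limit 40 discard-percentage 10      //IP precedence 2 is dropped later.
         quit
       qos queue-profile qp-wred      //Hypothetical queue profile name.
         queue 2 drop-profile dp-wred      //Bind the drop profile to queue 2 (illustrative queue index).
         quit
       interface GigabitEthernet0/0/1      //Assumed WAN-side interface.
         qos queue-profile qp-wred      //Apply the queue profile in the outbound direction of the interface.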
Configuring MQC to Implement Congestion Avoidance (1)
•    After a drop profile is bound to a traffic behavior, associate the traffic behavior
     with the corresponding traffic classifier in the traffic policy and apply the traffic
     policy to an interface to implement congestion avoidance for traffic matching the
     traffic classifier.

     system-view
       drop-profile [drop-profile-name]      //Create a drop profile.
         wred [dscp | ip-precedence]      //Configure a WRED drop profile based on DSCP or IP priorities.
         dscp [dscp-value] low-limit [low-limit-percentage] high-limit [high-limit-percentage] discard-percentage [discard-percentage]      //Configure WRED parameters based on DSCP priorities.
         ip-precedence [ip-precedence-value] low-limit [low-limit-percentage] high-limit [high-limit-percentage] discard-percentage [discard-percentage]      //(Optional) Configure WRED parameters based on IP priorities.
Configuring MQC to Implement Congestion Avoidance (2)
•    Configure a traffic classifier to match the traffic for which congestion avoidance
     is to be implemented.

     system-view
       traffic classifier [classifier-name]      //Create a traffic classifier.
         if-match [acl | vlan-id | … ]      //Match traffic based on traffic characteristics.
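• A minimal end-to-end sketch of this MQC-based approach (the ACL number, profile and
  policy names, DSCP value, bandwidth percentage, and interface are assumptions; the AF
  queue parameters are configured together with the drop profile, as in the traffic behavior
  commands shown later for HQoS):

     system-view
       acl 3001      //Hypothetical advanced ACL matching the data service.
         rule 5 permit ip destination 192.168.10.0 0.0.0.255
         quit
       drop-profile dp-data      //Drop profile created as on the previous slides.
         wred dscp
         dscp 10 low-limit 50 high-limit 70 discard-percentage 20      //DSCP 10 (AF11); values are illustrative.
         quit
       traffic classifier c-data      //Match the data traffic.
         if-match acl 3001
         quit
       traffic behavior b-data
         queue af bandwidth pct 30      //AF queue with 30% of the interface bandwidth (illustrative).
         drop-profile dp-data      //Bind the drop profile to the traffic behavior.
         quit
       traffic policy p-ca
         classifier c-data behavior b-data
         quit
       interface GigabitEthernet0/0/1      //Assumed WAN-side interface.
         traffic-policy p-ca outbound      //Apply the traffic policy in the outbound direction.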
Checking the Congestion Avoidance Configuration
⚫    Checking the queue-based congestion avoidance configuration
         system-view
           interface [interface-type interface-num]
             display this //Check the queue profile bound to the interface.
           qos queue-profile [queue-profile-name]
             display this //Check the drop profile bound to the queue profile.
           display drop-profile [ drop-profile-name ] //Check the drop profile configuration.
       Quiz
B. RED
C. MRED
D. WRED
1. ABD
     Section Summary
     Contents
1. Introduction to QoS
6. Introduction to HQoS
 QoS Data Processing
[Figure: QoS processing order on a device. On the inbound interface, voice, video, and data packets undergo traffic classification, traffic policing (CAR using a token bucket), re-marking, and other processing. On the outbound interface, packets undergo congestion avoidance (WRED), are placed into queues 0 to N for congestion management (scheduling), and finally undergo traffic shaping (GTS).]
[Figure: congestion management — packets are placed into queues 1 to N and leave the queues according to the scheduling mechanism.]
• The queue scheduling algorithm determines the order in which packets leave a
  queue and the relationships between queues.
• Queuing technology: Packets sent from one interface are placed into multiple
  queues that are identified by different priorities. The packets are then sent based
  on the priorities. Different queue scheduling mechanisms are designed for
  different situations and lead to varying results.
 What Is a Queue?
  ⚫    The queuing technology orders packets in the buffer.
• What is a queue?
           ▫ The queuing technology orders packets in the buffer. When the packet rate
             exceeds the interface bandwidth or the bandwidth configured for packets,
             the packets are buffered in queues and wait to be forwarded.
           ▫ Each interface on the NE20E or NE40E stores eight downlink queues, which
             are called CQs or port queues. The eight queues are BE, AF1, AF2, AF3, AF4,
             EF, CS6, and CS7.
 Queue Scheduling Algorithms
  ⚫    Congestion management uses the queuing technology.
• Queuing technology places packets sent from one interface into multiple queues
  with different priorities. These packets are then sent based on the priorities.
  Different queue scheduling mechanisms are designed for different situations and
  lead to varying results.
 FIFO
  ⚫    The FIFO mechanism is used to transfer packets in a queue. Resources used to forward
       packets are allocated based on the arrival order of packets.
[Figure: FIFO — packets 1, 2, and 3 enter the queue in their arrival order and leave the queue in the same order.]
• FIFO allows the packets that come earlier to enter the queue first. On the exit of
  a queue, FIFO allows the packets to leave the queue in the same order as that in
  which the packets enter the queue.
• Characteristics:
[Figure: SP scheduling — incoming packets are classified into high-, medium-, and low-priority queues (the medium-priority queue holds packets 3, 4, and 5); at the exit, packets are sent in strict descending order of queue priority, producing the output order 2, 6, 3, 4, 5, 1.]
• SP: Packets in queues with a low priority can be scheduled only after all packets
  in queues with a higher priority are scheduled.
• As shown in the figure, three queues with a high, medium, and low priorities
  respectively are configured with SP scheduling. The number indicates the order in
  which packets arrive.
• When packets leave queues, the device forwards the packets in descending order
  of priority. Packets in the higher-priority queue are forwarded preferentially. If
  packets in the higher-priority queue come in between packets in the lower-
  priority queue that is being scheduled, the packets in the high-priority queue are
  still scheduled preferentially. This implementation ensures that packets in the
  higher-priority queue are always forwarded preferentially. As long as there are
  packets in the high-priority queue, no other queue will be served.
• Characteristics:
[Figure: WFQ scheduling — incoming packets are classified into queues with configured weights (for example, a medium-priority queue and a low-priority queue each with 25% of the bandwidth); packets leave the queues in proportion to the queue weights and are reassembled into the outgoing packet stream.]
                        ▪ Packets in different queues are scheduled fairly, and the delay
                          differences between flows are small.
                        ▪ Flows with smaller weights are allocated less bandwidth, and flows
                          with larger weights are allocated more bandwidth.
• PQ queue
    ▫ SP scheduling applies to PQ queues. Packets in high-priority queues are
       scheduled preferentially. Therefore, services that are sensitive to delays
       (such as VoIP) can be configured with high priorities.
           ▫ Generally, services that are sensitive to delays are put into PQ queues.
• WFQ queue
           ▫ WFQ queues are scheduled based on weights. The WFQ scheduling
             algorithm can be used to allocate the remaining bandwidth based on
               weights.
• LPQ queue
    ▫ LPQ is a queue scheduling mechanism that is implemented on a high-speed
       interface (such as an Ethernet interface). LPQ is not supported on a low-
       speed interface (such as a serial interface or MP-group interface).
                           Queue 1
                                             SP                                          Start
               PQ                ……
                                         scheduling
              queue
                           Queue m
                                                                                       Is the PQ         No   Perform a round of
                                                                                     queue empty?               PQ scheduling
                                                                   Destination
                           Queue 1
                                                                    interface
                                                                                            Yes
                                            WFQ           SP
               WFQ               …
                                         scheduling   scheduling
              queue                                                                                      No
                                                                                      Is the WFQ              Perform a round of
                               Queue i                                               queue empty?              WFQ scheduling
Yes
92 Huawei Confidential
        ▫ PQ, WFQ, and LPQ queues are scheduled in sequence using SP scheduling.
        ▫ Packets in WFQ queues are scheduled only when no packets are buffered in
          PQ queues. Bandwidth is preferentially allocated to PQ queues to guarantee
          the PIR of packets in PQ queues.
        ▫ Packets in LPQ queues are scheduled only after all packets in WFQ queues
          are sent.
• Scheduling result:
        ▫ When the PIR of all WFQ queues is guaranteed, the remaining bandwidth is
          allocated to LPQ queues.
 Application of Congestion Management
[Figure: video and other flows travel from the LAN across the WAN; congestion management is configured in the outbound direction of the interface on the edge device.]
• Example:
[Figure: QoS processing order (same as the earlier "QoS Data Processing" figure) — traffic classification, traffic policing (CAR using a token bucket), re-marking, and other processing on the inbound side; congestion avoidance (WRED), congestion management (queues 0 to N with scheduling), and traffic shaping (GTS) on the outbound side.]
Configuring Queue-based Congestion Management
•    WAN interfaces support three scheduling modes: PQ, WFQ, and PQ+WFQ. The
     configuration roadmap is as follows:
     ▫   Create a queue profile.
     ▫   Configure scheduling modes for the queues in the queue profile.
     ▫   Apply the queue profile to the interface.

     system-view
       qos queue-profile [queue-profile-name]      //Create a queue profile.
         schedule pq [queue-index] | wfq [queue-index]      //Configure scheduling modes for queues on a WAN interface.
       interface [interface-type interface-num]      //Enter the interface view.
         qos queue-profile [queue-profile-name]      //Apply the queue profile to the interface.
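• For example, a minimal worked configuration (the profile name, queue indexes, and
  interface are assumptions):

     system-view
       qos queue-profile qp-sched      //Hypothetical queue profile name.
         schedule pq 6 to 7 wfq 0 to 5      //Queues 6-7 use PQ and queues 0-5 use WFQ (PQ+WFQ mode).
         quit
       interface GigabitEthernet0/0/1      //Assumed WAN-side interface.
         qos queue-profile qp-sched      //Apply the queue profile to the interface.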
 Configuring MQC to Implement Congestion Management (1)
•    MQC provides three types of queues: AF, EF, and LLQ.

     system-view
       traffic classifier [classifier-name]      //Create a traffic classifier.
         if-match [acl | vlan-id | … ]      //Match traffic based on traffic characteristics.
• AF queue: AF queues ensure that service traffic is forwarded when the traffic rate
  does not exceed the minimum bandwidth.
• EF/LLQ queue: After packets matching certain rules enter EF or LLQ queues, they
  are scheduled in SP mode. Packets in other queues are scheduled only after all
  the packets in EF or LLQ queues are scheduled. In addition, EF queues can use
  the available bandwidth in AF or BE queues. The latency of LLQ queues is lower
  than that of common EF queues.
• The total bandwidth used by AF queues and EF queues cannot exceed 100% of
  the interface bandwidth.
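• A minimal sketch combining these queue types with MQC (the ACL numbers, names,
  bandwidth values, and interface are assumptions):

     system-view
       traffic classifier c-voice      //Hypothetical classifier for voice traffic.
         if-match acl 3002      //Assumes ACL 3002 matches voice traffic.
         quit
       traffic behavior b-voice
         queue ef bandwidth 2000      //EF queue with 2000 kbit/s, scheduled in SP mode.
         quit
       traffic classifier c-data
         if-match acl 3003      //Assumes ACL 3003 matches data traffic.
         quit
       traffic behavior b-data
         queue af bandwidth pct 30      //AF queue guaranteed 30% of the interface bandwidth.
         quit
       traffic policy p-cm
         classifier c-voice behavior b-voice
         classifier c-data behavior b-data
         quit
       interface GigabitEthernet0/0/1      //Assumed WAN-side interface.
         traffic-policy p-cm outbound      //Apply the traffic policy in the outbound direction.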
        Quiz
        1. (Single-answer question) How many queues are there on an interface?(     )
            A. 6
            B. 7
            C. 8
            D. 9
1. C
2. ABD
      Section Summary
      ⚫   After a data packet enters a queue, the device sends the data packet
          according to the queue scheduling mechanism.
      ⚫   Common queue scheduling technologies include FIFO, PQ, and WFQ.
      ⚫   PQ scheduling is performed before WFQ scheduling and FIFO. Queues
          scheduled in WFQ mode can transmit data only when queues scheduled in
          PQ mode have no data to transmit. The queue scheduled in FIFO mode can
          transmit data only when queues scheduled in PQ and WFQ mode have no
          data to transmit.
1. Introduction to QoS
6. Introduction to HQoS
• In home broadband scenarios, different families may rent different network bandwidths and
  network services (for example, 14 households renting different bandwidths and services).
  Therefore, traditional QoS cannot manage these families in a refined manner.
[Figure: HQoS hierarchy — per-service Level 3 flow queues (video, gaming, Internet access, and other traffic) of each tenant are scheduled into Level 2 subscriber queues on the sub-interface or tunnel interface, which are in turn scheduled into the Level 1 port queue on the physical interface.]
• Flow queue
            ▫ Services of a user are classified into flow queues based on service type or
              priority, so that each type of service can be scheduled separately.
• Subscriber queue
            ▫ Services from a user are placed into a subscriber queue. HQoS allows all
              services in the subscriber queue to share the bandwidth.
• Port queue
            ▫ A port queue corresponds to a physical interface and schedules the traffic
              of all subscribers on the interface.
[Figure: HQoS scheduling model — Level 3 flow queues (PQ/WFQ, with packets discarded based on drop policies) are scheduled into Level 2 subscriber queues (PQ/WFQ, with drop policies), which are scheduled into the Level 1 port queue (RR). Deployment example: HQoS is deployed at the egress; the HSI flow queues of Family B and Family C use WFQ scheduling at Level 3, the Level 2 subscriber queues limit the total bandwidth of Family B (20 Mbit/s) and of Family C (30 Mbit/s), and Level 1 limits the total bandwidth of the building (60 Mbit/s).]
•    Child traffic policies are used to differentiate services. You can configure multiple
     child traffic policies based on services when configuring HQoS.
•    The configuration of HQoS child traffic policies is the same as that of common
     MQC. The configuration roadmap is as follows:
     ▫    Configure a traffic classifier where traffic is matched based on service
          characteristics.
     ▫    Configure a traffic behavior where the queue scheduling mode and queue
          bandwidth are defined.

     system-view
       traffic behavior [behavior-name]      //Create a traffic behavior.
         queue [af | ef | llq] bandwidth [bandwidth | pct percentage]      //Configure AF, EF, or LLQ queue parameters in the traffic behavior.
         drop-profile [drop-profile-name]      //Bind the created drop profile to the traffic behavior.
     system-view
       traffic policy [policy-name]      //Create a traffic policy.
         classifier [classifier-name] behavior [behavior-name]      //Bind the traffic classifier to the traffic behavior.

•    A parent traffic policy is used to differentiate users. When configuring HQoS, you
     can bind multiple child traffic policies to a parent traffic policy.
•    The configuration roadmap is as follows:
     ▫    Configure a traffic classifier to match traffic based on user characteristics.
     ▫    Configure a traffic behavior that needs to invoke a child traffic policy.
     ▫    Bind the traffic classifier and traffic behavior to a traffic policy.

     system-view
       traffic behavior [behavior-name]      //Create a traffic behavior.
         queue [af | ef | llq] bandwidth [bandwidth | pct percentage]      //(Optional) Configure AF, EF, or LLQ queue parameters in the traffic behavior.
         traffic-policy [policy-name]      //Bind the child traffic policy to the traffic behavior.
     system-view
       traffic policy [policy-name]      //Create a parent traffic policy.
         classifier [classifier-name] behavior [behavior-name]      //Bind the traffic classifier to the traffic behavior.
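• A minimal sketch that nests a child policy in a parent policy (the ACL number, VLAN ID,
  names, bandwidth values, and sub-interface are assumptions):

     system-view
       traffic classifier c-hsi      //Child level: match the HSI service.
         if-match acl 3010      //Assumes ACL 3010 matches HSI traffic.
         quit
       traffic behavior b-hsi
         queue af bandwidth pct 40      //Guarantee 40% of the user's bandwidth for HSI (illustrative).
         quit
       traffic policy child-policy      //Child traffic policy: differentiates services.
         classifier c-hsi behavior b-hsi
         quit
       traffic classifier c-familyB      //Parent level: match the user.
         if-match vlan-id 100      //Assumes Family B traffic carries VLAN 100.
         quit
       traffic behavior b-familyB
         queue af bandwidth 20000      //(Optional) Queue parameters for the user, for example 20 Mbit/s.
         traffic-policy child-policy      //Nest the child traffic policy.
         quit
       traffic policy parent-policy      //Parent traffic policy: differentiates users.
         classifier c-familyB behavior b-familyB
         quit
       interface GigabitEthernet0/0/1.100      //Assumed sub-interface toward Family B.
         traffic-policy parent-policy outbound      //Apply the parent traffic policy in the outbound direction.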
      system-view
        display traffic classifier user-defined [ classifier-name ] //Check the traffic classifier configuration.
        display traffic behavior [ system-defined | user-defined ] [ behavior-name ] //Check the traffic behavior configuration.
        display traffic policy user-defined [ policy-name ] classifier [classifier-name ] //Check the traffic policy configuration.
        display traffic-policy applied-record [ policy-name ] //Check the record of the specified traffic policy.
B. False
B. Subscriber queue
C. Data queue
D. Port queue
1. B
2. ABD
      Section Summary
     ▫ Draw up the project budget based on the project objective, project scope,
       and work content.
• Temperature and humidity easily affect the proper running of devices. Standard
  equipment rooms should be equipped with thermometers and hygrometers, and
  the temperature and humidity should be checked and recorded on a daily basis.
• The cleanness and neatness of the equipment room also affect the proper
  running of the equipment.
      ▫ Tidiness refers to the proper layout of devices and cables. Devices must be
        installed and cables must be routed according to installation and
        deployment requirements. However, during network operation, temporary
        adjustments, such as temporary jumper tests, are often made. After such
        activities continue for a period of time, the equipment room becomes
        disordered. The purpose of checking the equipment environment is to find
        and rectify these problems in time.
• The preceding check items may vary according to devices. For details, see the
  product documentation of each type of device.
• Software version running on a device:
     ▫ If a device is newly added, the software version may be different from the
       existing software version. Some devices may be upgraded or downgraded
       due to other reasons. Especially on a large-scale network, the same type of
       device may run different versions. In this case, verify that different versions
       can meet the same network function requirements.
• Startup information:
• License information:
     ▫ License rules vary according to devices. The licenses of some devices have
       validity periods.
• You can configure information output rules as needed to control the output of
  various types and levels of information along information channels in different
  output directions.
<R1>
YY-MM-DD10:14:21.751.1-08:00 R1 RM/6/RMDEBUG:
2. B
• Mapping between the preceding fault symptoms and categories varies according
  to scenarios.
• If network troubleshooting is carried out in an unstructured manner, steps are
  performed repeatedly, leading to low efficiency even if a solution to the fault is
  eventually found.
     ▫ The user description may be ambiguous, and the reported fault may not be
       the actual faulty point. In this situation, experienced engineers have to
       confirm the fault.
• A temporary network environment may need to be built for fault evaluation.
2. False
• R3 and SW5 are connected through Layer 3 sub-interfaces.
• This section describes common troubleshooting methods and tools, providing
  guidance for network maintenance personnel. The processing sequence in actual
  scenarios can be different from that in the example.
• This section uses the Windows 10 OS as an example to describe how to check the
  physical connection status of a PC.
      ▫ The indicator of the interface in the VLAN where the loop occurs blinks
        frequently.
• The destination MAC address of data frames sent from PC1 to a gateway is still
  00 00 5e 00 01 03.
• After the link between SW1 and SW3 is disconnected, SW1 cannot forward
  packets to SW2, because SW1 does not have the MAC address entry of 00 00 5e
  00 01 03.
• The debugging information on R1 shows that the OSPF router ID carried in the
  Hello packets sent from 10.0.12.2 is the same as the OSPF router ID on R1.
• On R3, the command output shows that the route to 192.168.56.0/24 has been
  imported into the BGP routing table.
• Possible causes for the failure to establish an IS-IS neighbor relationship are as
  follows:
     ▫ Area IDs do not match on both ends. (The inconsistency adversely affects
       only level-1 neighbor relationships.)
     ▫ IS-IS levels do not match on both ends. (Note that on Huawei devices if the
       system level differs from the interface circuit level, the system level takes
        effect.)
▫ The link between the DHCP relay agent and server becomes faulty.
2. B
3. ABCD
• The optical distribution frame (ODF) is mainly used on backbone networks,
  metropolitan area networks (MANs), and optical fiber and cable networks. It
  connects, terminates, distributes, splits, and schedules backbone optical cables.
• Time arrangement preparation:
     ▫ Negotiate the time arrangement with the customer and obtain customer's
       approval.
     ▫ Reserve some time for major operations to avoid engineering accidents due
       to timeout.
• Type B service: service that has low requirements on the latency but occupies
  much bandwidth. These services are carried over IPsec VPNs.
• Static return routes are manually specified for the headquarters, and NQA is used
  to switch services to the standby path upon faults. This case focuses on the
  branch network and does not involve the headquarters network.
1. We can set up a local pilot office and simulate the customer's network to verify
   the feasibility of the entire migration solution.
2. The configuration of the live network needs to be backed up. To verify the
   network status before and after the migration, collect dynamic data of the live
   network, including the port status, traffic, status of each routing protocol,
   number of routes, STP status, and ARP/MAC address entries of each port.
• DCNs do not have a fixed zone division mode. Different industries and enterprises
  have different zone division modes.
• The zone division in the slide uses the financial industry as an example:
• Operating Expense (OPEX): the sum of the maintenance cost, marketing expense,
  labor cost, and depreciation expense during the enterprise operations.
• In the campus network scenario, iMaster NCE-Campus is used as the iMaster NCE
  controller.
• In the data center scenario, iMaster NCE-Fabric is used as the iMaster NCE
  controller.
• In this scenario, iMaster NCE-IP is used as the iMaster NCE controller.
1. AB
• CIO: Chief Information Officer
• 1st-generation campus network:
      ▫ In the early 1980s, IEEE released the IEEE 802.3 standard, signaling the birth
        of Ethernet technology. By using twisted pair connections, Ethernet was more
        cost-effective and easier to implement than previous networking
        technologies. Consequently, Ethernet quickly became the mainstream
        technology for campus networks. During the early days, campus networks
        used hubs as access devices. A hub was a shared-medium device that
        worked at the physical layer. It was limited in the number of users it could
        support for concurrent access. If many users connected to a hub
        simultaneously, network performance degraded severely due to the
        expanded collision domain. In the late 1980s, Ethernet switches emerged.
        Because their working scheme was more advantageous than that of hubs,
        Ethernet switches quickly replaced hubs to become standard components of
        campus networks.
• Huawei CloudCampus Solution integrates service configuration and management
  models across LAN and WAN. It achieves LAN-WAN convergence by not only
  configuring and managing LAN services, but also managing WAN interconnection
  services.
• In addition to interface-based VLAN assignment, you can also use the following
  methods to assign VLANs:
     ▫ STP compares four parameters: root bridge ID, root path cost (RPC), bridge
       ID (BID), and port ID (PID). A smaller value indicates a higher priority. All
       these parameters are BPDU fields.
          ▪ Root bridge election: The device with the smallest root bridge ID is
            elected as the root bridge.
          ▪ Root port election: A device compares the RPC, peer BID, peer PID,
            and local PID of its ports in sequence. The port with the smallest
            value is elected as the root port.
           ▪ Designated port election: A device compares the RPC, local BID, and
             local PID of its ports in sequence. The port with the smallest value is
             elected as the designated port.
          ▪ After the root port and designated port are determined, all the non-
            root ports and non-designated ports on the switch will be blocked.
• PBR is used in agile campus service orchestration, multi-egress, and anti-DDoS
  off-path deployment scenarios.
• The interface configured with port security can convert the learned MAC
  addresses into secure MAC addresses, preventing devices with other MAC
  addresses from accessing the network through the interface.
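• For example, a minimal port security sketch on an access interface (the interface and values
  are assumptions; exact command forms may vary by product and version):

     system-view
       interface GigabitEthernet0/0/5      //Assumed user-facing access interface.
         port-security enable      //Enable port security and convert learned MAC addresses into secure MAC addresses.
         port-security max-mac-num 2      //Allow at most two secure MAC addresses (illustrative).
         port-security protect-action restrict      //Discard packets from other MAC addresses and report an alarm.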
• The DHCP snooping-enabled device forwards DHCP Request messages of users
  (DHCP clients) to an authorized DHCP server through the trusted interface, and
  then generates DHCP snooping binding entries based on the DHCP ACK
  messages received from the DHCP server. When receiving DHCP messages from
  users through the DHCP snooping-enabled interfaces, the device checks the
  messages against the binding table, thereby preventing attacks initiated by
  unauthorized users. In addition, DHCP snooping supports multiple security
  features, such as limiting the rate for sending DHCP messages.
• An MITM attacker establishes independent connections with two parties that
  intend to communicate and relays messages between them. The two parties
  consider that they are directly communicating with each other over a private
  connection, but the entire conversation is in fact controlled by the attacker. In an
  MITM attack, the attacker can intercept all packets exchanged between the two
  parties and insert new ones.
• To defend against MITM attacks, configure DAI on a switch.
     ▫ DAI defends against MITM attacks using a DHCP snooping binding table.
       When the switch receives an ARP packet, it compares the source IP address,
       source MAC address, VLAN ID, and interface number of the ARP packet
       with those in DHCP snooping binding entries. If the ARP packet matches a
       binding entry, the switch considers the ARP packet valid and allows the
       packet to pass through. If the ARP packet does not match any binding
       entry, the switch considers the ARP packet invalid and discards the packet.
     ▫ DAI is available only when DHCP snooping is configured. A DHCP snooping-
       enabled switch automatically generates DHCP snooping binding entries
       when DHCP users go online. If a user is configured with a static IP address,
       you need to manually configure a static binding entry for the user.
     ▫ When an attacker connected to the DAI-enabled switch sends bogus ARP
       packets, the switch detects the attack based on the binding entries and
       discards the bogus ARP packets. If the packet discarding alarm function is
       also enabled on the DAI-enabled switch, the switch will generate an alarm
       when the number of ARP packets discarded due to failure to match any
       binding entry exceeds the alarm threshold.
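• A minimal sketch of DHCP snooping with DAI on an access switch (the VLAN and interface
  numbers are assumptions; exact command forms may vary by product and version):

     system-view
       dhcp enable      //Enable DHCP globally.
       dhcp snooping enable      //Enable DHCP snooping globally.
       vlan 10      //Assumed user VLAN.
         dhcp snooping enable      //Enable DHCP snooping in the user VLAN.
         quit
       interface GigabitEthernet0/0/24      //Assumed uplink toward the authorized DHCP server.
         dhcp snooping trusted      //Configure the server-facing interface as trusted.
         quit
       interface GigabitEthernet0/0/5      //Assumed user-facing interface.
         arp anti-attack check user-bind enable      //DAI: check ARP packets against DHCP snooping binding entries.
         arp anti-attack check user-bind alarm enable      //(Optional) Generate an alarm when discarded ARP packets exceed the threshold.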
• As networks continue to increase in scale, a growing number of attackers are
  forging source IP addresses to initiate network attacks (IP address spoofing
  attacks). Some attackers forge IP addresses of authorized users to obtain
  network access rights and access networks. As a result, authorized users are
  unable to access networks or sensitive information may be intercepted.
     ▫ IPSG only checks the IP packets from hosts. It does not check non-IP
       packets such as ARP packets.
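• A minimal IPSG sketch on the same user-facing interface (values are assumptions; the static
  binding entry shows how a host with a static IP address can be covered):

     system-view
       user-bind static ip-address 192.168.10.50 mac-address 00e0-fc12-3456 interface GigabitEthernet0/0/5 vlan 10      //Hypothetical static binding entry.
       interface GigabitEthernet0/0/5      //Assumed user-facing interface.
         ip source check user-bind enable      //IPSG: check IP packets against binding entries.
         ip source check user-bind alarm enable      //(Optional) Generate an alarm for packets discarded by IPSG.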
• In addition to using NAC to authenticate access users and control their rights, a
  campus network also needs to authenticate and control rights of administrators
  (also called login users) who can log in to devices through FTP, HTTP, SSH,
  Telnet, or console ports.
• DHCP   dynamically configures and uniformly manages IP addresses of hosts.
  DHCP   is defined in RFC 2131 and uses the client/server communication mode. A
  DHCP   client requests configuration information from a DHCP server, and the
  DHCP   server returns the configuration information allocated to the DHCP client.
     ▫ Instead of statically specifying an IP address for a host, DHCP enables a
       host to obtain an IP address dynamically.
     ▫ DHCP can allocate other configuration parameters, such as the startup
       configuration file of a client, so that the client can obtain all the required
       configuration information by using only one message.
     ▫ DHCP supports dynamic and static IP address allocation. A network
       administrator can select different address allocation modes for hosts as
       required.
          ▪ Dynamic allocation: DHCP allocates an IP address with a limited
            validity period (known as a lease) to a client. This mechanism applies
            to scenarios where hosts temporarily access the network or the
            number of idle IP addresses is less than the total number of hosts that
            do not require permanent connections.
          ▪ Static allocation: DHCP allocates fixed IP addresses to clients.
            Compared with manual IP address configuration, DHCP static
            allocation prevents manual configuration errors and enables unified
            maintenance and management.
• DHCP has the following benefits:
     ▫ Reduced client configuration and maintenance cost.
     ▫ Centralized management of limited IP addresses.
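• For example, a minimal sketch of a device acting as a DHCP server with a global address
  pool (the pool name, addresses, lease, and interface are assumptions):

     system-view
       dhcp enable      //Enable DHCP globally.
       ip pool pool-lan      //Hypothetical global address pool.
         network 192.168.10.0 mask 255.255.255.0      //Addresses that can be allocated.
         gateway-list 192.168.10.1      //Default gateway advertised to clients.
         dns-list 192.168.10.253      //DNS server advertised to clients.
         lease day 1      //Address lease of one day (dynamic allocation).
         static-bind ip-address 192.168.10.100 mac-address 00e0-fc12-3456      //(Optional) Static allocation for a specific client.
         quit
       interface GigabitEthernet0/0/1      //Assumed gateway interface of the client subnet.
         ip address 192.168.10.1 255.255.255.0
         dhcp select global      //Allocate addresses from the global address pool.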
• The plug-and-play of a device is implemented by establishing a NETCONF session
  between the device and the controller, so that the controller can deliver
  configurations to the device.
• LLDP is a standard Layer 2 topology discovery protocol defined in IEEE 802.1ab.
  LLDP collects local device information including the management IP address,
  device ID, and port ID and advertises the information to neighboring devices.
  Neighboring devices save the received information in their management
  information bases (MIBs). The NMS can query required information in MIBs to
  determine link status.
2. A
• After servers are virtualized, services are encapsulated in VMs. VMs can be live
  migrated to any host in a cluster. One of the features of live migration is that the
  network status does not change. As a result, the IP addresses of service VMs may
  be in different network locations. Therefore, a large Layer 2 network is required
  to solve this problem.
• VXLAN was designed to meet the requirements of DCNs. On traditional enterprise campus
  networks, VXLAN is used to construct virtual networks rather than to solve urgent
  problems.
• VXLAN solves the following problems on traditional networks:
      ▫ The limit on VM quantity imposed by the entry specifications of devices:
           ▪ VXLAN encapsulates original data packets sent from VMs in the same
             domain into UDP packets, with the IP and MAC addresses used on the
             physical network in outer headers. Devices on the VXLAN network are
             aware of only the encapsulated parameters but not the inner data.
           ▪ Except VXLAN edge devices, other devices on the network do not need
             to identify MAC addresses of VMs. This reduces the burden of learning
             MAC addresses and improves device performance.
      ▫ The limited network isolation capabilities of traditional networks:
           ▪ VXLAN uses a VXLAN Network Identifier (VNI) field similar to the
             VLAN ID field to identify users. The VNI field has 24 bits and can
             identify up to 16 million VXLAN segments, effectively isolating a large
             number of tenants.
      ▫ The limited VM migration scope on traditional networks:
           ▪ VMs using IP addresses in the same network segment are in a Layer 2
             domain logically, even if they are on different physical Layer 2
             networks. VXLAN technology constructs a virtual large Layer 2
             network over a Layer 3 network.
• Underlay network: a physical network, which serves as the basic layer of the
  upper-layer logical network.
• Overlay network: a logical network established on the underlay network using
  tunneling technologies.
• The virtualization technology is introduced to create multiple virtual networks
  (VNs) on a physical network on a campus network. Different VNs are used for
  different services, such as OA, videoconferencing, and security protection.
• The preceding packet format is the standard VXLAN packet format. Huawei
  CloudEngine S series switches customize the reserved fields based on the
  standard format.
• A pair of VTEP IP addresses identifies a VXLAN tunnel.
• The source VTEP encapsulates packets and sends the encapsulated packets to the
  destination VTEP through the VXLAN tunnel. After receiving the encapsulated
  packets, the destination VTEP decapsulates the packets.
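• A minimal sketch of one end of such a VXLAN tunnel with static ingress replication, using
  the VTEP addresses from this example (the bridge domain ID, VNI, access sub-interface, and
  VLAN are assumptions; command forms follow CloudEngine-style switches and may vary by
  product and version):

     system-view
       bridge-domain 10      //Hypothetical bridge domain.
         vxlan vni 1000      //Map VNI 1000 to the bridge domain.
         quit
       interface 10GE1/0/1.1 mode l2      //Hypothetical Layer 2 sub-interface for access.
         encapsulation dot1q vid 10      //Accept frames tagged with VLAN 10.
         bridge-domain 10      //Add the sub-interface to the bridge domain.
         quit
       interface Nve1      //NVE interface providing the VTEP function.
         source 1.1.1.1      //Local VTEP address (SW1 in this example).
         vni 1000 head-end peer-list 2.2.2.2      //Remote VTEP (SW2); statically build the ingress replication list.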
• Distributed gateways:
      1. PC1 sends an ARP Request packet to request the MAC address of PC2, and
         the packet reaches SW1 through the access interface.
     2. After receiving the packet, SW1 determines the BD ID, destination VXLAN
        tunnel, and VNI of the traffic based on VAP information. In addition, SW1
        learns the MAC address of PC1 and records the BD ID and the interface
        that receives the packet in the corresponding MAC address entry.
     3. SW1 performs VXLAN encapsulation for the ARP Request packet and
        forwards the encapsulated packet based on the ingress replication list.
     4. After receiving the VXLAN packet, SW2 decapsulates the packet to obtain
        the original data frame. In addition, SW2 learns the MAC address of PC1
        and records the BD ID and the VTEP address of SW1 in the corresponding
        MAC address entry.
     5. SW2 floods the ARP packet in the local BD. PC2 then receives the packet
        and learns the ARP information of PC1.
     6. PC2 sends a unicast ARP Reply packet.
     7. SW2 has the MAC address entry of PC1; therefore, SW2 unicasts the packet
        and learns the source MAC address of PC2 in the MAC address entry.
     8. SW2 encapsulates the ARP Reply packet with a VXLAN header and sends it
        to the remote VTEP at 1.1.1.1.
     9. After SW1 receives the VXLAN packet, it decapsulates the packet and
        records the source MAC address of PC2 in the MAC address table. The
        outbound interface is the remote VTEP.
• By doing this, PC1 and PC2 learn ARP entries of each other, and SW1 and SW2
  learn MAC addresses of each other.
• PC1 wants to communicate with Server2. When finding that Server2 is on a
  different subnet, PC1 sends the packet to the gateway.
• PC1 sends a data packet to Server2. The destination MAC address of the data
  packet is 00AB-09FF-1111 (gateway MAC address). After receiving the data
  packet, SW1 searches the Layer 2 forwarding table and finds that the outbound
  interface is a remote VTEP (Layer 3 gateway). Therefore, SW1 adds a VXLAN
  header (VNI = 1000) to the data packet. Then the packet is sent to SW3.
• After receiving the packet, SW3 decapsulates the VXLAN packet and finds that
  the destination MAC address of the internal original data packet is 00AB-09FF-
  1111, which is the MAC address of its own interface VBDIF10. Then SW3 needs to
  search the Layer 3 forwarding table.
• SW3 searches the routing table and finds that the destination IP address
  192.168.2.1 matches the direct route generated by VBDIF 20. SW3 then searches
  the ARP table for the destination MAC address of the packet and searches the
  MAC address table for the outbound interface of the packet. On SW3, the
  outbound interface for the MAC address corresponding to 192.168.2.1 is the
  remote VTEP at 2.2.2.2. SW3 encapsulates the packet into a VXLAN packet and
  sends it to SW2.
• After receiving the packet, SW2 decapsulates the VXLAN packet and finds that
  the destination MAC address is not the MAC address of any local interface. SW2
  then searches the Layer 2 forwarding table and forwards the packet through the
  local interface based on the MAC address table.
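• A minimal sketch of the centralized Layer 3 gateway (SW3) assumed above, with
  VBDIF 10 and VBDIF 20 serving as the gateways of the two subnets (the VNI for
  BD 20 and the interface IP addresses are assumptions; the 192.168.2.x subnet
  matches the example above):
     ▫ [SW3] bridge-domain 10
     ▫ [SW3-bd10] vxlan vni 1000
     ▫ [SW3] interface vbdif 10
     ▫ [SW3-Vbdif10] ip address 192.168.1.254 255.255.255.0
     ▫ [SW3] bridge-domain 20
     ▫ [SW3-bd20] vxlan vni 2000
     ▫ [SW3] interface vbdif 20
     ▫ [SW3-Vbdif20] ip address 192.168.2.254 255.255.255.0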
     1. SW1 provides Layer 2 sub-interface access, and SW2 uses the VLAN binding
        mode.
     2. SW1 obtains the MAC address of PC1 and creates an entry in the MAC
        address table to record the MAC address, BD ID, and inbound interface.
     3. SW1 generates a BGP EVPN Type 2 (MAC) route based on this entry and
        sends the route to SW2. The route carries the RT value of the local EVPN
        instance. In the MAC route, the MAC address of PC1 is stored in the MAC
        Address field and the L2VNI is stored in the MPLS Label1 field (see the
        configuration sketch after this list).
     4. After receiving the BGP EVPN route from SW1, SW2 checks the RT (similar
        to the RT concept in MPLS VPN) carried in the route. If the RT is the same
        as the import RT of the local EVPN instance, SW2 accepts the route.
        Otherwise, SW2 discards the route. After accepting the route, SW2 obtains
        the MAC address of PC1 and the mapping between the BD ID and the
        VTEP IP address (next hop network address in MP_REACH_NLRI) of SW1,
        and generates the MAC address entry of PC1 in the local MAC address
        table. Based on the next hop, the outbound interface of the MAC address
        entry recurses to the VXLAN tunnel destined for SW1.
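• A minimal configuration sketch behind this process on SW1 (the AS number,
  RD/RT values, interface numbers, and peer address are assumptions; SW2 is
  configured symmetrically, using the l2 binding vlan command in the BD view
  instead of a Layer 2 sub-interface):
     ▫ Access side (Layer 2 sub-interface on SW1):
           ▪ [SW1] interface 10ge 1/0/1.10 mode l2
           ▪ [SW1-10GE1/0/1.10] encapsulation dot1q vid 10
           ▪ [SW1-10GE1/0/1.10] bridge-domain 10
     ▫ EVPN instance in the BD:
           ▪ [SW1] bridge-domain 10
           ▪ [SW1-bd10] vxlan vni 1000
           ▪ [SW1-bd10] evpn
           ▪ [SW1-bd10-evpn] route-distinguisher 10:1
           ▪ [SW1-bd10-evpn] vpn-target 1:1 export-extcommunity
           ▪ [SW1-bd10-evpn] vpn-target 1:1 import-extcommunity
     ▫ VXLAN tunnel and BGP EVPN peering:
           ▪ [SW1] interface nve 1
           ▪ [SW1-Nve1] source 1.1.1.1
           ▪ [SW1-Nve1] vni 1000 head-end peer-list protocol bgp
           ▪ [SW1] bgp 65001
           ▪ [SW1-bgp] peer 2.2.2.2 as-number 65001
           ▪ [SW1-bgp] peer 2.2.2.2 connect-interface LoopBack0
           ▪ [SW1-bgp] l2vpn-family evpn
           ▪ [SW1-bgp-af-evpn] peer 2.2.2.2 enable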
• A MAC/IP route can carry both the MAC and IP addresses of a host, and
  therefore can be used to advertise ARP entries between VTEPs. The MAC Address
  and MAC Address Length fields identify the MAC address of the host, whereas
  the IP Address and IP Address Length fields identify the IP address of the host.
  This type of MAC/IP route is called the ARP route. ARP advertisement applies to
  the following scenarios:
     ▫ ARP broadcast suppression. After a Layer 3 gateway learns the ARP entries
       of a host, it generates host information that contains the host IP and MAC
       addresses, Layer 2 VNI, and gateway's VTEP IP address. The Layer 3
       gateway then transmits an ARP route carrying the host information to a
       Layer 2 gateway. When the Layer 2 gateway receives an ARP request, it
       checks whether it has the host information corresponding to the
       destination IP address of the packet. If such host information exists, the
       Layer 2 gateway replaces the broadcast MAC address in the ARP request
       with the destination unicast MAC address and unicasts the packet. This
       implementation suppresses ARP broadcast packets.
     ▫ VM migration in distributed gateway scenarios. After a VM migrates from
       one gateway to another, the new gateway learns the ARP entry of the VM
       (after the VM sends gratuitous ARP packets) and generates host
       information that contains the host IP and MAC addresses, Layer 2 VNI, and
       gateway's VTEP IP address. The new gateway then transmits an ARP route
       carrying the host information to the original gateway. After the original
       gateway receives the ARP route, it detects a VM location change and
       triggers ARP probe. If ARP probe fails, the original gateway withdraws the
       ARP and host routes of the VM.
• ARP advertisement is mainly used in the centralized VXLAN gateway+BGP EVPN
  scenario. In BGP EVPN, ARP advertisement and IRB advertisement to a peer are
  mutually exclusive; only one of them can be configured. Generally, ARP
  advertisement is selected in the centralized VXLAN gateway+BGP EVPN scenario,
  whereas IRB routes are advertised in the distributed VXLAN gateway+BGP EVPN
  scenario.
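• For example, the choice is made per peer in the BGP EVPN address family view
  (the peer address is an assumption), and only one of the two options takes
  effect for a peer:
     ▫ Centralized gateway scenario: [Huawei-bgp-af-evpn] peer 2.2.2.2 advertise arp
     ▫ Distributed gateway scenario: [Huawei-bgp-af-evpn] peer 2.2.2.2 advertise irb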
• In distributed gateway networking, VTEPs function as both Layer 2 and Layer 3
  gateways. In this networking, inter-subnet communication is implemented in
  multiple modes. According to the processing mode of ingress VTEPs, inter-subnet
  communication can be implemented through asymmetric and symmetric
  Integrated Routing and Bridging (IRB).
     ▫ Based on the local IP address, local mask, and peer IP address, PC1 finds
       that the destination device PC2 is on a different network segment from
       itself. Therefore, PC1 determines Layer 3 communication and sends the
       traffic destined for PC2 to the gateway. In the data packet sent by PC1, the
       source MAC address is MAC1 and the destination MAC address is MAC2.
     ▫ After receiving the packet sent from PC1 to PC2, the switch decapsulates
       the packet and finds that the destination MAC address is the MAC address
       of VLANIF 10. Therefore, the switch considers that the packet is sent to
        itself and sends the packet to the routing module for further processing.
     ▫ The routing module parses the packet and finds that the destination IP
       address is 192.168.20.2, which is not the IP address of the local interface.
       Therefore, the routing module needs to forward the packet at Layer 3.
        When the routing module searches the routing table, it matches the direct
        route generated by VLANIF 20 against the packet.
• During asymmetric IRB, VTEPs do not transmit host IP routes between each
  other. That is, VTEP1 and VTEP2 do not transmit the 32-bit host route (generated
  through an ARP entry) of the connected PC. Therefore, VTEP1 searches the
  routing table in step 2, and matches the packet against the direct route
  generated by VBDIF 10.
• In step 5, VTEP2 decapsulates the VXLAN packet and finds that the destination
  MAC address is not the MAC address of the local VBDIF interface corresponding
  to the BD. Therefore, VTEP2 searches the Layer 2 forwarding table for the MAC
  address entry of the corresponding BD based on the VNI carried in the packet
  and then forwards the packet at Layer 2.
• On Huawei devices, Symmetric IRB is used.
• In a BGP EVPN scenario, if you want to control the sending and receiving of EVPN
  routes based on the RT value of the IP VPN instance, run the vpn-target evpn
  command to configure the RT value. In this case, the export RT (ERT) is carried in
  EVPN routes sent to the remote BGP EVPN peer, and the import RT (IRT) is
  matched against the RT carried in received EVPN routes to determine which EVPN
  routes can be added to the routing table of the local VPN instance address family.
• After receiving the BGP Update message, VTEP2 checks the RT value (20:1)
  carried in the BGP Update message and compares it with the IRT in the local
  EVPN instance and the IRT (EVPN) in the IP VPN instance. VTEP2 finds that the
  IRT of the EVPN instance bound to BD 20 and IRT of the IP VPN instance bound
  to VBDIF 20 are the same, adds the EVPN routes to the EVPN routing table of BD
  20, and adds the IP routes contained in the EVPN routes to the routing table of
  the IP VPN instance bound to VBDIF 20.
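• A minimal sketch of this RT design on a distributed gateway (the VPN instance
  name, RD, L3VNI, and interface IP address are illustrative; the RT value 20:1
  follows the example above):
     ▫ [Huawei] ip vpn-instance vpn1
     ▫ [Huawei-vpn-instance-vpn1] ipv4-family
     ▫ [Huawei-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
     ▫ [Huawei-vpn-instance-vpn1-af-ipv4] vpn-target 20:1 export-extcommunity evpn
     ▫ [Huawei-vpn-instance-vpn1-af-ipv4] vpn-target 20:1 import-extcommunity evpn
     ▫ [Huawei-vpn-instance-vpn1-af-ipv4] quit
     ▫ [Huawei-vpn-instance-vpn1] vxlan vni 1000
     ▫ [Huawei] interface vbdif 20
     ▫ [Huawei-Vbdif20] ip binding vpn-instance vpn1
     ▫ [Huawei-Vbdif20] ip address 192.168.20.1 255.255.255.0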
• In symmetric IRB mode, VTEPs transmit 32-bit host routes generated using ARP
  entries. Therefore, VTEP1 matches the 32-bit host routes transmitted from VTEP2
  during route lookup. Even if VTEP1 has the direct route generated by VBDIF 10, it
  still forwards packets based on 32-bit host routes according to the longest match
  rule.
• In step 4, VTEP2 decapsulates the VXLAN packet and finds that the destination
  MAC address of the inner frame is its own router MAC address (MAC B). VTEP2
  therefore forwards the packet based on the routing table: it finds the
  corresponding IP VPN instance based on VNI 1000, searches the routing table of
  that IP VPN instance, matches the direct route generated by VBDIF 10, searches
  the local MAC address table, and sends the packet to PC2.
• The Provider Multicast Service Interface (PMSI) attribute is an optional transitive
  BGP attribute. In VXLAN scenarios, the Tunnel Type field has a fixed value of 6
  (ingress replication), and the attribute carries the sender's VTEP IP address and
  L2VNI.
• Similar to Type 2 IRB routes, Type 5 routes carry the router MAC address of the
  VTEP through the EVPN router's MAC extended community attribute during
  route transmission. In addition, Type 5 routes carry only the L3VNI. Therefore, the
  forwarding process is also called IRB forwarding.
• ARP broadcast suppression is an effective method to relieve the burden of a
  gateway in processing ARP packets. When receiving an ARP Request packet, the
  gateway searches the ARP broadcast suppression table for the mapping between
  the IP address and MAC address of the destination device. If the ARP Request
  packet matches an entry in the table, the gateway replaces the broadcast MAC
  address in the ARP Request packet with the MAC address of the destination
  device. Then, the gateway sends the ARP Request packet through the interface
  corresponding to the destination MAC address.
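• A minimal sketch, assuming a BGP EVPN VXLAN network where the Layer 3
  gateway collects host information on its VBDIF interface and the Layer 2 gateway
  suppresses ARP broadcasts in BD 10 (the BD ID and interface number are
  illustrative):
     ▫ Layer 3 gateway: [Huawei-Vbdif10] arp collect host enable
     ▫ Layer 2 gateway: [Huawei-bd10] arp broadcast-suppress enable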
• An ARP route carries the following valid information: host MAC address, host IP
  address, and L2VNI. An IRB route carries the following valid information: host
  MAC address, host IP address, L2VNI, and L3VNI. As a result, an IRB route
  includes an ARP route and can be used to advertise both the host IP route and
  host ARP entry.
• On a VXLAN network, a bridge domain (BD) is a Layer 2 broadcast domain. After
  a VTEP receives BUM packets, it broadcasts the packets in the BD. To reduce
  broadcast traffic, a network administrator usually configures access-side isolation
  or port isolation to isolate access users in a BD. However, as services become
  more diverse and keep increasing, users have growing needs for intra-BD
  communication. To allow isolated users in a BD to communicate, configure local
  proxy ARP on the VBDIF interface of the BD.
• Generally, the same MAC address is configured for VBDIF interfaces with the
  same interface number on different VTEPs. After the distributed gateway function
  is enabled, VBDIF interfaces with the same IP address and MAC address do not
  report ARP conflicts. In addition, when hosts and VMs are migrated to different
  VTEPs, the gateway does not need to resolve ARP entries again.
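• A minimal sketch of such a VBDIF gateway interface, repeated with the same IP
  address and MAC address on each VTEP (the addresses are assumptions):
     ▫ [Huawei] interface vbdif 10
     ▫ [Huawei-Vbdif10] ip address 192.168.10.1 255.255.255.0
     ▫ [Huawei-Vbdif10] mac-address 0000-5e00-0101
     ▫ [Huawei-Vbdif10] arp distribute-gateway enable
     ▫ [Huawei-Vbdif10] arp collect host enable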
• The MAC Mobility extended attribute is used to announce the location change of
  a host or VM when the host or VM is migrated from one VTEP to another VTEP.
• VXLAN allows a virtual Layer 2 or Layer 3 network (overlay network) to be built
  over a physical network (underlay network). The overlay network transmits
  packets between different sites through Layer 3 forwarding paths provided by the
  underlay network.
2. D
• Currently, the intranets of most campus networks are faced with the following
  security issues:
     ▫ Access control for terminals is not implemented. A user can access all
       network resources as long as the user successfully connects to the network.
• A terminal agent (also known as client software) is usually installed on a user
  terminal. It works with an admission server to implement user identity
  authentication, terminal security check, system repair and upgrade, and terminal
  behavior monitoring and audit.
• Admission devices, which can be switches, routers, APs, or other network devices,
  provide the following functions:
     ▫ User identity verification.
     ▫ User authentication. Admission devices can implement the commonly used
       802.1X authentication, MAC address authentication, and Portal
       authentication by working with the client software and admission server.
     ▫ User permission control.
• Admission servers include the security control server, security management
  server, virus signature database server, and patch server.
     ▫ A security control server authenticates users, performs security audit,
       executes security policies, and works with admission devices to deliver user
       permissions.
     ▫ A security management server manages user information (including adding,
       deleting, or modifying user permissions and departments), and defines and
       manages security policies.
     ▫ A virus signature database server controls automatic update of the virus
       signature database in antivirus software on terminals.
     ▫ A patch server controls patch installation and update for terminals'
       operating systems and applications.
• User identity authentication request: A terminal sends the user credential to an
  admission device.
• User identity authentication: The admission device sends the user credential to
  the admission server for authentication.
• User identity verification: The admission server stores user identity information
  and provides user management functions. After receiving the user credential, the
  admission server determines whether the terminal identity is valid and delivers
  the verification result and corresponding policy to the admission device.
• User policy authorization: The admission device executes the policy based on the
  authorization result received from the admission server. For example, the
  admission device permits or denies access from the terminal. The admission
  device can also perform more complex policy-based control on the terminal, for
  example, increasing or decreasing the forwarding priority or limiting the network
  access rate.
• The basic NAC process is as follows:
      1. Before authentication, users can access the network with only the pre-
         authentication domain permission, including access to the access control
         server, DHCP server, and DNS server.
     3. Unauthorized users and users who have not completed authentication are
         allowed to access resources only in the pre-authentication domain or
         isolation domain.
• The EAP packets transmitted between the client and access device are
  encapsulated in EAPoL format and transmitted across the LAN.
• Users can determine the authentication mode between the access device and
  authentication server based on the client support and network security
  requirements.
     ▫ EAP termination mode: The access device terminates EAP packets and
       encapsulates them into RADIUS packets. The authentication server then
       uses the standard RADIUS protocol to implement authentication,
       authorization, and accounting.
     ▫ EAP relay mode: The access device directly encapsulates the received EAP
       packets into EAP over RADIUS (EAPoR) packets, and then transmits these
       packets over a complex network to the authentication server.
• EAPoL defines EAP encapsulation on IEEE 802 (such as 802.3 and 802.11)
  networks. EAPoL only transmits EAP packets between 802.1X clients and access
  devices, and does not implement authentication.
     ▫ The EAP relay mode simplifies the processing on the access device and
       supports various authentication methods. However, the authentication
       server must support EAP and have high processing capability.
    ▫ The major difference between PAP and CHAP is that passwords in CHAP
      authentication are transmitted in cipher text, whereas passwords in PAP
      authentication are transmitted in plain text. In this aspect, CHAP provides
       higher security and is recommended.
• EAP relay authentication process:
     1. When a user needs to access an external network, the user starts the
        802.1X client, enters the applied and registered user name and password,
        and initiates a connection request. The client then sends an authentication
        request packet (EAPoL-Start) to the access device to start the
        authentication process.
     2. After receiving the authentication request packet, the access device returns
        an EAP-Request/Identity packet, requesting the client to send the
        previously entered user name.
     3. In response to the request sent by the access device, the client sends an
        EAP-Response/Identity packet containing the user name to the access
        device.
      4. The access device encapsulates the EAP-Response/Identity packet into a
         RADIUS Access-Request packet and forwards it to the RADIUS server.
      5. After receiving the user name forwarded by the access device, the RADIUS
         server searches the user name table in the local database for the
         corresponding password, encrypts the password with a randomly
         generated MD5 challenge, and sends a RADIUS Access-Challenge packet
         containing the MD5 challenge to the access device.
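• A minimal sketch of 802.1X access with a RADIUS server on an access device in
  unified NAC mode (the server address, shared key, profile names, and interface
  number are assumptions):
     ▫ [Huawei] radius-server template rd1
     ▫ [Huawei-radius-rd1] radius-server authentication 10.1.1.1 1812
     ▫ [Huawei-radius-rd1] radius-server shared-key cipher Huawei@123
     ▫ [Huawei] aaa
     ▫ [Huawei-aaa] authentication-scheme auth1
     ▫ [Huawei-aaa-authen-auth1] authentication-mode radius
     ▫ [Huawei-aaa] domain default
     ▫ [Huawei-aaa-domain-default] authentication-scheme auth1
     ▫ [Huawei-aaa-domain-default] radius-server rd1
     ▫ [Huawei] dot1x-access-profile name d1
     ▫ [Huawei] authentication-profile name p1
     ▫ [Huawei-authentication-profile-p1] dot1x-access-profile d1
     ▫ [Huawei] interface gigabitethernet 0/0/1
     ▫ [Huawei-GigabitEthernet0/0/1] authentication-profile p1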
• Dumb terminal: Compared with other terminals, dumb terminals have limited
  functions and simple interaction modes. In this document, dumb terminals refer
  to terminals on which authentication information such as user names and
  passwords cannot be entered.
• By default, a MAC address without hyphens (-) is used as the user name and
  password for MAC address authentication, for example, 0005e0112233.
• Passwords of MAC address authentication users can be processed using PAP or
  CHAP. The following MAC address authentication process uses PAP as an
  example:
     1. When a terminal accesses the network, the access device detects and
        learns the MAC address of the terminal, triggering MAC address
        authentication.
     2. The access device generates a random value (MD5 challenge), arranges
        the user MAC address, password, and random value in sequence, encrypts
        them using the MD5 algorithm, encapsulates the encryption results into a
        RADIUS authentication request packet, and sends the packet to the
        RADIUS server.
     3. The RADIUS server arranges the user MAC address, password saved in the
        local database, and received random value in sequence, and uses the
        random value to encrypt them using the MD5 algorithm. If the encrypted
        password is the same as that received from the access device, the RADIUS
        server sends an authentication accept packet to the access device,
        indicating that MAC address authentication is successful and the terminal
        is allowed to access the network.
• Different from PAP, CHAP arranges the CHAP ID, the user MAC address, and the
  random value in sequence and encrypts them using the MD5 algorithm.
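• A minimal sketch of MAC address authentication on an access interface, reusing
  the RADIUS and AAA configuration from the 802.1X sketch above (the profile
  names and interface number are assumptions):
     ▫ [Huawei] mac-access-profile name m1
     ▫ [Huawei] authentication-profile name p2
     ▫ [Huawei-authentication-profile-p2] mac-access-profile m1
     ▫ [Huawei] interface gigabitethernet 0/0/2
     ▫ [Huawei-GigabitEthernet0/0/2] authentication-profile p2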
• Client: In most cases, a client is a host where an HTTP/HTTPS-capable browser is
  installed.
• Access device: a network device such as a switch or router, which provides the
  following functions:
     ▫ Interacts with the Portal server and authentication server to implement user
       authentication, authorization, and accounting.
• Portal server: a server system that receives authentication requests from clients,
  provides Portal services and authentication pages, and exchanges client
  authentication information with access devices.
     ▫ When Layer 2 authentication is used, the device can learn users' MAC
       addresses and identify the users based on their MAC addresses and IP
       addresses. Layer 2 authentication provides a simple authentication process
       while ensuring high security. However, users must be in the same network
       segment as the access device, causing inflexible networking.
     ▫ When Layer 3 authentication is used, the device cannot obtain the MAC
       address of a client, so it identifies the user based only on the client IP
       address. Layer 3 authentication allows for flexible networking and
       facilitates remote control. However, users can only be identified based on
       their IP addresses, leading to poor security.
     7. After receiving the Portal authentication request, the Portal server sends a
        Portal challenge request packet to the access device. This step is performed
        only when CHAP authentication is used between the Portal server and
        access device. If PAP authentication is used, steps 7 and 8 are not
        performed.
     8. The access device sends a Portal challenge response packet to the Portal
        server.
     9. The Portal server encapsulates the entered user name and password into a
        Portal authentication request packet and sends the packet to the access
        device.
     10. The access device and RADIUS server exchange user information to
         authenticate the user, including:
          ▪ The access device encapsulates the entered user name and password
            into a RADIUS authentication request packet and sends the packet to
            the RADIUS server.
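• A minimal sketch of the access device interworking with an external Portal
  server as in the preceding flow (the server name, IP address, port, URL, shared
  key, and profile names are assumptions):
     ▫ [Huawei] web-auth-server abc
     ▫ [Huawei-web-auth-server-abc] server-ip 10.1.1.2
     ▫ [Huawei-web-auth-server-abc] port 50200
     ▫ [Huawei-web-auth-server-abc] shared-key cipher Huawei@123
     ▫ [Huawei-web-auth-server-abc] url http://10.1.1.2:8080/portal
     ▫ [Huawei] portal-access-profile name web1
     ▫ [Huawei-portal-access-profile-web1] web-auth-server abc direct
     ▫ [Huawei] authentication-profile name p3
     ▫ [Huawei-authentication-profile-p3] portal-access-profile web1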
• HTTPS is a secure HTTP and also known as HyperText Transfer Protocol over
  Transport Layer Security (HTTP over TLS) or HyperText Transfer Protocol over
  Secure Socket Layer (HTTP over SSL). HTTPS uses HTTP for communication and
  SSL/TLS for data encryption.
• A URL is a concise representation of the location and access method of a
  resource that can be obtained from the Internet. It is the address of a standard
  resource on the Internet. Each file on the Internet has a unique URL. The URL
  contains information about the location of the file and how a browser should
  process the file.
• When HTTP/HTTPS-based Portal authentication is used, the authentication
  process is as follows:
     1. The Portal server instructs the client to send a Portal authentication
        request to the access device.
     2. The client sends a Portal authentication request to the access device.
     3. After receiving the Portal authentication request, the access device parses
        the packet according to parameter names to obtain parameters such as
        the user name and password, and then sends the obtained user name and
        password to the RADIUS server for authentication. The process is similar to
        the Portal-based Portal authentication.
     4. The access device returns the Portal authentication result to the client and
        adds the user to the local online user list.
• As shown in the figure, an HTTP request is sent using the GET method:
  https://Portal.example.com/login?userName=test&password=Huawei@123. You
  can see that the user name and password are in plain text and are separated
  from the URL by a question mark (?).
• After passing Portal authentication, terminals may be disconnected from the
  wireless network when they move from one wireless signal coverage area to
  another or when the wireless signal is unstable. In this case, users need to enter
  their user names and passwords for identity authentication every time the
  terminals go online, leading to poor network access experience. MAC address-
  prioritized Portal authentication is used to resolve this problem. Generally, there
  is no need to enable MAC address-prioritized Portal authentication on wired
  networks that provide stable signals.
• When the RADIUS server is used, the authentication accept packet also contains
  user authorization information because RADIUS authorization is combined with
  authentication.
• The RADIUS server can assign an authorized ACL to a user in either of the
  following modes:
     ▫ Static ACL assignment: The RADIUS server uses the standard RADIUS
       attribute Filter-Id to assign an ACL ID to the user. In this mode, the ACL
       and corresponding rules are configured on the access device in advance.
     ▫ Dynamic ACL assignment: The RADIUS server uses the Huawei extended
       RADIUS attribute HW-Data-Filter to assign an ACL ID and corresponding
       rules to the user. In this mode, the ACL ID and ACL rules are configured on
       the RADIUS server.
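• For example, in the static assignment mode, the ACL referenced by Filter-Id is
  predefined on the access device (the ACL number and rules are illustrative):
     ▫ [Huawei] acl 3001
     ▫ [Huawei-acl-adv-3001] rule 5 permit ip destination 10.10.10.0 0.0.0.255
     ▫ [Huawei-acl-adv-3001] rule 10 deny ip
     ▫ The RADIUS server then returns Filter-Id 3001 in the Access-Accept packet
       so that the access device applies this ACL to the user's traffic.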
• When an authentication-free rule is configured using an ACL, the ACL number is
  in the range from 6000 to 6031.
• The NAC escape mechanism grants specified network access rights to users when
  the authentication server is Down or to users who fail the authentication or are
  in pre-connection state. The escape solutions vary according to the
  authentication modes. Some escape solutions are shared by all authentication
  modes, while some are supported only in specific authentication modes. For
  details, see "NAC Escape Mechanism" in the product documentation.
• Note:
    ▫ MAC address authentication supports only user logout control by the access
      device and server.
• VAP profile: Configure WLAN parameters in a VAP profile, and bind the VAP
  profile to an AP group or AP. Then, a VAP is generated on the AP to provide
  wireless access services for STAs.
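• A minimal sketch on an AC (the profile names, SSID, service VLAN, and AP group
  are assumptions):
     ▫ [AC] wlan
     ▫ [AC-wlan-view] ssid-profile name office
     ▫ [AC-wlan-ssid-prof-office] ssid Office-WiFi
     ▫ [AC-wlan-view] vap-profile name vap1
     ▫ [AC-wlan-vap-prof-vap1] ssid-profile office
     ▫ [AC-wlan-vap-prof-vap1] service-vlan vlan-id 100
     ▫ [AC-wlan-view] ap-group name group1
     ▫ [AC-wlan-ap-group-group1] vap-profile vap1 wlan 1 radio all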
• After a static user is configured, the device preferentially uses the user name and
  password of the static user to authenticate the user when detecting that the user
  information matches the parameters such as the IP address range and domain
  name configured for the static user. If the authentication fails, the device
  performs 802.1X, MAC address, or Portal authentication on the user.
• You can run the static-user username macaddress format command to specify
  the MAC address of a terminal as the user name and password for
  authentication, as well as the user name format. This command has a higher
  priority than the static-user username format-include and static-user
  password cipher password commands.
     1. The control device and access device establish a CAPWAP tunnel with each
        other.
     2. When detecting the access of a new user, the access device creates a user
        association entry to record basic information such as the user and access
        interface.
      3. The access device sends a user association request to the control device.
     4. The control device creates a user association entry to save the mapping
        between the user and access device, and returns a user association
        response to notify the access device of successful association.
      6. The control device deletes the user association entry. When the
         authentication succeeds, the control device generates a complete user
         entry and sends a user authorization request to the access device to
         deliver the network access policy of the user.
     7. The access device updates the user association entry, grants the specified
        network access rights to the user, and sends a user authorization response
        to the control device.
• Policy association:
      ▫ By default, access devices can connect to a control device only after passing
        authentication. The control device authenticates access devices using a
        blacklist and a whitelist: blacklisted access devices cannot connect to the
        control device, whereas whitelisted access devices can. Access devices in
        neither list are not authenticated, so you need to manually specify the
        allowed access devices. You can also configure none authentication for
        access devices; with this configuration, an access device can connect to the
        control device regardless of whether it is in the blacklist or whitelist.
     ▫ For details about how to configure this function, see the product
       documentation.
1. D
2. ACD
• With the construction and promotion of wireless networks, the boundaries of
  enterprise campus networks are disappearing, and office locations of enterprise
  employees become more flexible.
     ▫ The authentication point and policy enforcement point are two device roles.
       Based on the administrator's configuration and device capabilities, a
       physical device can play either or both of the two roles.
• Security group:
     ▫ An administrator can add network objects that have the same access
       requirements to the same security group, and configure a policy for this
       security group. In this way, these network objects obtain the same
       permissions as configured for the security group. For example, an
       administrator can define the following security groups: R&D group (a
       collection of individual hosts), printer group (a collection of printers), and
       database server group (a collection of server IP addresses and ports).
       Compared with the solution in which access control policies are deployed
       for each user, the security group–based access control solution greatly
       reduces the administrator's workload.
• Classification of security groups:
     ▫ A security group can be both a dynamic security group and a static security
       group. That is, it is bound to both multiple authorization rules to represent
       dynamic users and multiple IP addresses or IP network segments to
       represent static resources. The differences are as follows:
• Note: When resource groups are used, a policy enforcement point generates a
  policy by IP address, instead of based on each resource group. As such, the device
  may have a large number of policies.
• When multiple policies are configured to control access from a source security
  group to multiple destination groups, an administrator needs to configure
  priorities of the policies to determine the sequence in which policies are matched.
  For example, if the destination groups are resource groups with overlapping IP
  addresses, the administrator can set a high priority for a policy so that the policy
  can be matched preferentially.
• For unknown users:
     ▫ If a policy enforcement device does not find any security group
       corresponding to an IP address, it considers that the IP address belongs to
       the default security group named unknown, and enforces the matching
       security group policy (default policy: permit).
• The following uses the traffic from the sales group to the server group as an
  example to describe policy matching in the policy control matrix:
     ▫ The device (policy enforcement point) first searches for the policy of
       controlling access from the sales group to the server group. If no such inter-
       group policy is found in the policy control matrix, the device continues
       matching policies.
     ▫ The device then searches for the policy of controlling access from the sales
       group to the any group. If no such inter-group policy is found in the policy
       control matrix, the device continues matching policies.
     ▫ Finally, the device matches traffic with the policy of controlling access from
       the any group to the any group. By default, this policy exists in the policy
       control matrix and defines the permit action. That is, traffic is permitted by
       default if no policy is matched.
• A policy enforcement point can obtain IP-security group entries in either of the
  following ways:
     ▫ The policy enforcement point obtains IP-security group entries during user
       authentication when it is located on the same device as the authentication
       point.
• When planning security group policies, pay attention to the direction of policies.
  Generally, packets are transmitted in both directions between two terminals.
     ▫ For Huawei switches, traffic from switch A to switch B and traffic from
       switch B to switch A match different policies. Whether traffic is permitted or
       denied depends on the source and destination groups of the traffic. If the
       permit action is configured for the A-to-B traffic and the deny action for
       the B-to-A traffic, all packets sent from switch A to switch B are allowed to
       pass through, but the packets sent from switch B to switch A are discarded,
       regardless of which device initiates the request. If no matching policy is
       found, a switch performs the default action — permit.
2. B
• By integrating the user authentication, user management, and policy association
  functions, the CloudCampus Solution provides unified authentication and access
  policy control for both wired and wireless users. Administrators can have a
  consistent user management experience, with simplified O&M for wired and
  wireless networks.
• CPE: Customer Premises Equipment
• Perpetual license + SnS: The perpetual license is sold together with SnS services,
  such as software patches, software upgrades (including new features of new
  releases), and remote support. In the perpetual license + SnS mode, a customer
  needs to pay SnS fee for a certain period of time, in addition to purchasing the
  license upon the first purchase. If the customer does not renew the SnS annual
  fee after it expires, the customer can only use functions provided in the license
  for the current version and cannot use the service functions covered in the SnS
  annual fee.
• Term Based License (TBL) mode: This mode differs from the perpetual license +
  SnS mode in that the licenses purchased by customers have limited validity
  periods. If a customer does not renew the subscription after the license expires,
  the customer can no longer use the software product.
• SnS: refers to Subscription and Support. It consists of two parts: software support
  and software subscription. The complete software charging mode consists of the
  annual software SnS fee and software license fee.
• For more information about the WLAN, such as WLAN planning, SSID planning,
  and radio calibration, see the Small- and Medium-Sized Cloud-Managed Campus
  Network Design or HCIX-WLAN series courses.
• Network layer
• This slide presents the virtualized campus network architecture. The underlay is
  the physical network layer, and the overlay is the virtual network layer
  constructed on top of the underlay using the Virtual Extensible LAN (VXLAN)
  technology.
• On a fabric network, a VXLAN tunnel endpoint (VTEP) can function as either a
  border or edge node:
• Policy association:
2. User identity authentication: The admission device sends the identity credential
   to the admission server for identity authentication.
3. User identity verification: The admission server stores user identity information
   and manages users. After receiving the identity credential of the terminal, the
   admission server verifies the identity of the terminal, determines whether the
   terminal identity is valid, and delivers the verification result and policy to the
   admission device.
• User policy management and permission control are performed based on security
  groups.
• On large- and medium-sized campus networks, access terminals include smart
  terminals (such as PCs and mobile phones) and dumb terminals (such as IP
  phones, printers, and IP cameras). Currently, terminal management on campus
  networks faces the following challenges:
     ▫ The network management system (NMS) can only display the IP and MAC
       addresses of access terminals, but cannot identify the specific terminal type.
        As a result, the NMS cannot provide refined management for network
        terminals.
• Modular design:
• Redundancy design:
• Symmetric design:
     1. Determine the number of access switch ports based on the network scale.
        Typically, one port corresponds to one terminal or one network access
        point (for example, AP).
          ▪ Assign VLANs by logical area. For example, VLANs 100 to 199 are
            used in the core network zone, VLANs 200 to 999 are used in the
            server zone, and VLANs 2000 to 3499 are used on the access network.
          ▪ Assign VLANs by service type. For example, VLANs 200 to 299 are
            used in the web server zone, VLANs 300 to 399 are used in the app
            server zone, and VLANs 400 to 499 are used in the database server
            zone.
     ▫ If users are sensitive to the voice latency, the voice service must be
       preferentially guaranteed. It is recommended that the voice VLAN be
       planned for the voice service. Huawei switches can automatically identify
       voice data, transmit voice data in the voice VLAN, and perform QoS
       guarantee. When network congestion occurs, voice data can be
       preferentially transmitted.
• IP address planning complies with the following principles:
• Note: Core switches obtain basic configurations such as IP addresses using the
  CLI. Once they establish management channels with iMaster NCE-Campus,
  iMaster NCE-Campus will automatically deliver services to them.
• The DHCP server pushes PnP VLAN information to its downstream devices
  through LLDP. Note that:
     ▫ If NETCONF is not enabled on the core switch (DHCP server), the core
       switch cannot be onboarded on iMaster NCE-Campus. In this case, the
       administrator needs to manually configure the PnP VLAN on the core
       switch. Then the switch to be deployed can negotiate with the core switch
        through LLDP to obtain the configured PnP VLAN.
• Note: The PnP VLAN and management VLAN of a switch can be the same or
  different.
     ▫ Area: In the single-area scenario, all devices belong to Area 0. In the multi-
       area scenario, border nodes belong to Area 0, and each edge node and its
        connected border node belong to an area.
     ▫ Network type: You can set the OSPF network type to broadcast, P2MP, or
       P2P.
     ▫ Encryption: You can set the encryption mode between adjacent devices to
       HMAC-SHA256, MD5, or none.
• Before enabling the automatic routing domain configuration function, you need
  to plan network resources required for underlay network automation.
• Before creating virtual networks (VNs), you need to configure global resources,
  including the VLANs, VXLAN Network Identifiers (VNIs), and bridge domains
  (BDs). When creating a VN, iMaster NCE-Campus automatically allocates
  resources from the resource pools.
     ▫ BD: BDs are used to create Layer 2 broadcast domains in a VN. Typically,
       BDs have a one-to-one mapping relationship with service VLANs. Therefore,
       a sufficient number of BD resources need to be planned to match the
       number of service VLANs. The default BD range is 1-4095.
     ▫ VNI: VNIs are similar to VLAN IDs and are used to identify VXLAN segments.
       The full VNI space ranges from 1 to 16777215; the default VNI resource pool
       range is 1-4095.
• In large- and medium-sized campus networks, the virtualization solution is
  classified into the centralized gateway solution and distributed gateway solution
  based on the user gateway location. You can select a gateway solution when
  creating a fabric on iMaster NCE-Campus.
• The centralized gateway solution supports only one border node, whereas the
  distributed gateway solution supports multiple border nodes.
• An Ethernet Network Processor (ENP) card is embedded with the Huawei-
  developed ENP. The card can function as a common LPU to provide data access
  and switching services and also as a WLAN AC to provide wireless access control
  functions. In this way, the card achieves wired and wireless convergence.
• When the campus intranet needs to communicate with an external network, for
  example, the Internet, data center, or another branch, traffic must pass through
  the border node.
• There are three interconnection modes between the fabric network and egress
  device:
     ▫ L3 shared egress:
           ▪ The external gateway connects to and accesses external networks via
             VLANIF or VBDIF interfaces. VNs can access the public network or
             private network specified by another site through the shared VRF
             egress, and service traffic can be diverted to the firewall through the
             shared VRF egress. When configuring a multi-border fabric, you can
             configure multiple core devices in one external network.
           ▪ The L3 shared egress mode is applicable to the scenario where the
             firewall does not need to perform security check on VNs, there are
             low requirements on security control policies between VNs, and traffic
             of all VNs is transmitted in the same security zone.
           ▪ To enable communication between VNs and external networks, you
             must configure return routes to service subnets on the firewall. As a
             result, service subnets of different VNs can communicate with each
             other on the firewall. To isolate VNs on the firewall, configure policies
             based on service network segments in the VNs.
           ▪ As shown in the figure, a shared VRF is created on the border node,
             the shared L3 egress is bound to the VRF, and routes are configured
             to enable the communication with external networks.
• Multiple network service resources can be created, and a single network service
  resource can be configured with multiple access addresses.
• Policy design:
     ▫ VNs are isolated by default. To enable mutual access between user groups
       within a VN, additional VN interworking configuration is required.
• VN access design:
          ▪ If two VNs belong to the same security zone and have low security
            control requirements, devices on the two VNs can directly
            communicate with each other through a border node. In addition,
            permission control can be implemented based on the free mobility
            policy. To implement communication between VNs, the border node
            needs to import their respective network segment routes that are
            reachable to each other.
          ▪ If two VNs belong to different security zones and have high security
            control requirements, it is recommended that devices on the two VNs
            communicate through an external gateway (a firewall) and that a
            security zone policy be configured on the firewall for permission
            control.
• Each VN corresponds to a VRF.
• Traffic from a terminal enters the VN based on the VLAN to which the terminal
  belongs.
• If 802.1X or MAC address authentication (Layer 2 authentication technologies) is
  used, the authentication point must be on the same network segment as the user
  host. It is recommended that the access device function as the authentication
  point.
• Take the policy direction into account when planning inter-group policies.
  Typically, packets are transmitted in two directions between two terminals.
▫ Tunnel forwarding:
▫ Direct forwarding:
     ▫ Authentication point: edge node for wired users; border node (native AC)
       for wireless users
▫ Forwarding model:
           ▪ Wired traffic: Traffic enters VNs through the edge node, and free
             mobility policies are enforced on the edge node.
▫ User gateway: edge node for wired users; border node for wireless users
     ▫ Authentication point: edge node for wired users; border node (native AC)
       for wireless users
▫ Forwarding model:
           ▪ Wired traffic: Traffic enters VNs through the edge node, and free
             mobility policies are enforced on the edge node.
     ▫ Authentication point: edge node for wired users; standalone AC for wireless
       users
▫ Forwarding model:
           ▪ Wired traffic: Traffic enters VNs through the edge node, and free
             mobility policies are enforced on the edge node.
           ▪ Wireless traffic: Free mobility policies are enforced on the border node
             (the border node needs to subscribe to IP-security group entries). The
             tunnel forwarding mode is recommended. Traffic enters VNs through
             the border node (traffic is forwarded from the standalone AC to the
             border node and then enters VNs).
     ▫ User gateway: edge node for wired users; border node for wireless users
       when tunnel forwarding mode is used
     ▫ Authentication point: edge node for wired users; AC for wireless users
• To ensure reliability, routers and firewalls are usually deployed in redundancy
  mode. It is recommended that devices be deployed in redundancy mode at the
  egress of a large- or medium-sized campus network.
            ▪ For non-Ethernet links, such as E1, CE1, and CPOS links, select routers
              as egress devices.
▫ SD-WAN requirements:
• Return route:
     ▫ Note: No return route needs to be configured for the return traffic in the
       public system. After a return packet matches the session table in the public
       system, the packet is directly forwarded to vsys1 for processing.
• Routes for the traffic from intranet users to the Internet:
• Return routes:
     ▫ Note: No return route needs to be configured for the return traffic in the
       public system. After a return packet matches the session table in the public
       system, the packet is directly forwarded to vsys1 for processing.
• On a virtualized campus network, when the user gateways are located inside the
  fabric, each Layer 3 egress interface for connecting the fabric to an external
  network corresponds to a Layer 3 logical interface on the firewall. Each logical
  interface can be bound to a security zone. If the user gateways are located
  outside a fabric, you need to bind the gateways to security zones based on the
  security policies of these gateways.
• Most security policies are implemented based on security zones. Each security
  zone identifies a network, and a firewall connects networks. Firewalls use security
  zones to divide networks and mark the routes of packets. When packets travel
  between security zones, security check is triggered and corresponding security
  policies are enforced. Security zones are isolated by default.
• As shown in the figure, after security policies are configured, VNs on the intranet
  can communicate with each other, and the external networks can access servers
  in the DMZ. In addition, different security protection policies can be applied to
  traffic in different security zones.
• On a traditional campus network, the intranet is often considered secure, and
  threats mainly come from external networks. Firewalls are often deployed to
  ensure security on campus borders. As security challenges increase, border
  defense at the egress alone cannot meet requirements. The security model needs
  to shift from passive to proactive defense, and the security scope needs to be
  expanded from external networks to the intranet to solve security problems at
  the source (terminals), improving the enterprise-wide information security level.
• Security measures:
     ▫ Enable traffic suppression and storm control (a configuration sketch
       covering traffic suppression and DHCP snooping follows this list).
           ▪ Control broadcast, multicast, and unknown unicast packets to prevent
             broadcast storms. Traffic suppression limits such traffic to a configured
             threshold, whereas storm control blocks the traffic or shuts down the
             interface when the threshold is exceeded.
     ▫ Enable DHCP snooping and configure uplink interfaces as trusted interfaces.
          ▪ DHCP snooping defends against bogus DHCP server attacks, DHCP
            server DoS attacks, bogus DHCP packet attacks, and other DHCP
            attacks. DHCP snooping allows administrators to configure trusted
            and untrusted interfaces, so DHCP clients can obtain IP addresses
            from authorized DHCP servers. A trusted interface forwards the DHCP
            packets it receives, whereas an untrusted interface discards the DHCP
            ACK packets and DHCP Offer packets received from a DHCP server.
          ▪ An interface directly or indirectly connected to the DHCP server
            trusted by the administrator needs to be configured as a trusted
            interface, and other interfaces are configured as untrusted interfaces.
            This ensures that DHCP clients obtain IP addresses only from
            authorized DHCP servers and prevents bogus DHCP servers from
            assigning IP addresses to DHCP clients.
     ▫ Enable IPSG and DAI.
          ▪ IPSG prevents unauthorized hosts from using IP addresses of
            authorized hosts or specified IP addresses to access or attack the
            network.
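• A minimal sketch of the traffic suppression and DHCP snooping measures above
  on an access switch (the interface numbers, VLAN, and suppression thresholds,
  expressed as a percentage of interface bandwidth, are assumptions):
     ▫ [Huawei] dhcp enable
     ▫ [Huawei] dhcp snooping enable
     ▫ [Huawei] vlan 10
     ▫ [Huawei-vlan10] dhcp snooping enable
     ▫ Uplink interface toward the authorized DHCP server:
           ▪ [Huawei-GigabitEthernet0/0/24] dhcp snooping trusted
     ▫ User-facing access interface:
           ▪ [Huawei-GigabitEthernet0/0/1] broadcast-suppression 10
           ▪ [Huawei-GigabitEthernet0/0/1] multicast-suppression 10
           ▪ [Huawei-GigabitEthernet0/0/1] unicast-suppression 10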
• Wireless air interface security design:
     ▫ The Wireless Intrusion Detection System (WIDS) can detect rogue and
       interfering APs, bridges, and STAs, as well as ad-hoc devices.
     ▫ Attack detection: The WIDS and WIPS can also detect attacks such as flood
       attacks, weak initialization vector (IV) attacks, spoofing attacks, brute force
       WPA/WPA2/WAPI pre-shared key (PSK) cracking, and brute force WEP
       shared key cracking in a timely manner. The two systems then record logs,
       statistics, and alarms to notify network administrators of such attacks. The
       WLAN device adds devices that initiate flood attacks and brute force key
       cracking attacks to the dynamic blacklist and rejects packets from such
       devices within the aging time of the dynamic blacklist.
     ▫ PMF: Management frames on a WLAN are not encrypted, which may cause
       security problems. The PMF standard is released by the Wi-Fi Alliance based
       on IEEE 802.11w. It aims to apply security measures defined in WPA2 to
       unicast and multicast management action frames to improve network
        trustworthiness.
• Service category examples:
     ▫ Voice signaling: signaling protocols such as SIP, H.323, H.248, and Media
       Gateway Control Protocol (MGCP).
• Service category examples:
     ▫ Large-volume data services: for example, FTP, database backup, and file
       dump.
     ▫ Bandwidth control on the egress device: Egress devices are also located in
       the DiffServ domain and are configured to trust DSCP or 802.1p priorities of
        packets and implement QoS policies. Due to egress bandwidth limits, you
        need to consider differences when setting bandwidth parameters for WAN
        interfaces of egress devices. Additionally, QoS policies of egress devices vary
        according to the enterprise WAN construction mode.
1. ADF
• Managed service provider (MSP): delivers and manages network-based services,
  applications, and devices. The serviced objects include enterprises, residential
  areas, and other service providers.
• Perpetual license + SnS: The perpetual license is sold together with SnS services,
  such as software patches, software upgrades (including new features of new
  releases), and remote support. In the perpetual license + SnS mode, a customer
  needs to pay SnS fee for a certain period of time, in addition to purchasing the
  license upon the first purchase. If the customer does not renew the SnS annual
  fee after it expires, the customer can only use functions provided in the license
  for the current version and cannot use the service functions covered in the SnS
  annual fee.
• Term Based License (TBL) mode: This mode differs from the perpetual license +
  SnS mode in that the licenses purchased by customers have limited validity
  periods. If a customer does not renew the subscription after the license expires,
  the customer can no longer use the software product.
• SnS: refers to Subscription and Support. It consists of two parts: software support
  and software subscription. The complete software charging mode consists of the
  annual software SnS fee and software license fee.
     ▫ Edge: is a physical network device. Access user traffic enters the fabric from
       an edge node. Generally, VXLAN-capable access or aggregation switches
       function as edge nodes.
• Note: In this lab, the native AC is deployed on the border node to manage APs.
  The border node also serves as the DHCP server to allocate IP addresses to APs.
• Policy Control Matrix:
     ▫ In the policy control matrix, only the communication that is allowed should
       be permitted and other communication should be denied.
• VXLAN-based large- and medium-sized virtualized campus networks have
  complex services. Therefore, the deployment process is complex. The deployment
  process provided in this slide is the general process for your reference.
• The following part of this course focuses on key operations in the deployment
  process.
• When creating sites, pay attention to the following points:
           ▪ Create sites one by one: You can create sites one by one when a small
             number of sites need to be added.
           ▪ Create sites in a batch: You can create sites in a batch when a large
             number of sites need to be added.
▫ Solution:
     ▫ VLAN: Configure a service VLAN pool when you need to configure VLANs
       for connecting to external networks, VLANs for connecting to network
       service resources, CAPWAP management VLAN, and terminal access VLANs.
• iMaster NCE-Campus allows you to specify a BGP route reflector (RR) and
  automatically deploy BGP EVPN configurations on all edge nodes in a fabric.
     ▫ RR cluster ID: specifies the cluster ID of an RR. If there are multiple RRs in a
       fabric, for example, if two RRs are configured on a dual-border network,
       you need to configure a cluster ID for the RRs to prevent BGP routing loops.
        The value is an integer ranging from 1 to 4294967295 or an IPv4 address.
• Role: specifies the role of a device in the fabric, including the border node, edge
  node, and extended node. By default, the role of a device is an extended node.
• Route reflector: In a fabric, border devices are typically used as route reflectors,
  which simplify full-mesh connections required by IBGP and reduce network and
  CPU loads.
• Automatic routing domain configuration: After this function is enabled, the
  underlay network is automatically configured. You can specify sites for automatic
  routing domain configuration and specify OSPF route parameters. Currently, the
  following parameters are supported:
      ▫ Network type: You can set the OSPF network type to broadcast, P2MP, or
        P2P.
     ▫ Encryption: You can set the encryption mode between adjacent devices to
       HMAC-SHA256, MD5, or None.
• Routing information:
     ▫ After you click Apply, iMaster NCE-Campus creates a static route to the
       external network for the border device. (The static route is delivered to the
       border device only when the external network is invoked by the virtual
       network.)
• Server configuration:
     ▫ Interconnection port: Select the port used by the border node to connect
       to the network service resource.
     ▫ Interconnection IPv4 address: Select the IP address of the port used by the
       border node to connect to the network service resource.
• Wired access: Users on the OA virtual network need to access the network
  through ACC_1 and ACC_2 and they all need to be authenticated. Therefore, in
  the wired access configuration, you need to select the interfaces on which
  authentication has been enabled on the two switches.
• Wireless access: Select the border device in the wireless access configuration. The
  border device is a switch that provides the native AC, through which it manages
  APs and provides the wireless access service.
• When multiple policies are configured to control access from a source security
  group to multiple destination groups, the sequence in which these policies are
  matched can be determined based on the policy priority. For example, if the
  destination groups are resource groups, in which case the destination IP
  addresses may be the same, you need to manually adjust the policy priorities to
  ensure that a specific policy is matched first.
• For example, when creating an account named kris (RD user), deselect Change
  password upon next login. As this user belongs to the RD user group, which
  does not require Portal authentication, deselect Portal in the Available login
  mode area.
• After configuring authorization results, you need to bind the results to created
  sites.
     ▫ Create sites one by one: This mode applies when a small number of sites
       need to be added.
     ▫ Create sites in a batch: This mode applies when a large number of sites
       need to be added.
2. ACD
• In this slide, CloudCampus Network refers to Huawei's CloudCampus Cloud-
  Managed Network Solution for Small- and Medium-Sized Campus Networks.
• Huawei's CloudCampus Solution applies to three deployment scenarios: Huawei
  public cloud, MSP-owned cloud, and on-premises. The on-premises scenario
  applies to large- and medium-sized campus networks. The Huawei public cloud
  and MSP-owned cloud scenarios apply to small- and medium-sized campus
  networks. This document focuses on the Huawei public cloud scenario, which is a
  cloud management scenario.
• There are three layers in the architecture of Huawei CloudCampus Solution for
  small- and medium-sized campus networks: multi-tenant network, cloud
  management platform, and value-added SaaS platform.
• Unless otherwise specified, the Huawei public cloud management mode is used
  as an example in this document.
• With certain IT capabilities, a tenant administrator can deploy and maintain a
  campus network. This scenario is called tenant-managed construction and
  maintenance. The tenant administrator is the main implementer, and the MSP
  administrator only provides simple deployment assistance. The tenant
  administrator can apply to the MSP for the managed construction and
  maintenance services. After being authorized, the MSP constructs and maintains
  the campus network for the tenant. This scenario is called MSP-managed
  construction and maintenance, in which the MSP administrator is the main
  implementer.
• The following lists the differences between the deployment processes in the
  tenant-managed construction and maintenance and MSP-managed construction
  and maintenance scenarios:
     ▫ The tenant administrator logs in to iMaster NCE using their own account
       and deploys services.
• For small- and medium-sized campus networks, the co-termination licensing
  model is recommended, facilitating device management and operations. If the
  license validity periods of devices of different types need to be precisely
  controlled, use the non-co-termination licensing model.
• Hub-spoke: Generally, the enterprise headquarters or DC functions as a hub site.
  Each branch site of an enterprise can communicate with the hub site and can
  communicate with other branch sites through the hub site. This model is
  applicable to scenarios where traffic between all branch sites of an enterprise
  must pass through the headquarters for centralized security monitoring.
• Full-mesh: All sites of an enterprise can communicate with each other. If traffic
  needs to be transmitted between the headquarters and branches or between
  branches, data is directly exchanged without traversing an intermediate node.
  This model is applicable to scenarios where all sites of an enterprise need to
  directly access each other. This model eliminates the delay caused by traffic
  transmission through the headquarters.
     ▫ In addition to selecting devices with high reliability, you can further improve
       the reliability of the devices by using dual power supplies or redundant
         components (e.g. boards).
     ▫ Two devices are deployed in the egress zone of the campus network to
       work in active/standby mode. Currently, in a two-node system, firewalls
       support only the hot standby mirroring mode. ARs support dual-CPE
       networking, and the two devices and two egress links can work
       concurrently.
• An IPsec VPN is a type of static VPN, in which IPsec tunnels are established
  between devices at different sites to create VPN tunnels. Traffic is diverted to the
  VPN tunnels based on the configured static routes so that services between sites
  can be accessed through the VPN tunnels.
• An EVPN is a type of dynamic VPN that can establish tunnels between sites and
  dynamically advertise routes on demand. EVPN establishes GRE tunnels between
  sites to establish VPN tunnels and supports IPsec encryption on GRE tunnels to
  ensure tunnel encryption security. In addition, the EVPN solution offers
  application- and policy-based intelligent traffic steering, allowing high-quality
  links to be selected based on applications and policies for data transmission.
• The single-layer network model is also called the flat network model. In this
  model, WAN sites of an enterprise can be directly connected or connected
  through one or more hub sites. Typically, this model is used by small- and
  medium-sized enterprises as well as large enterprises with fewer than 100 sites.
  The single-layer network model can be further classified into hub-spoke, full-
  mesh, and partial-mesh.
     ▫ This tool can generate network planning files and allows users to export the
       files. Users can import network planning results on the tenant management
       page of iMaster NCE to display AP locations and help tenants install APs.
• Portal authentication can be used on open networks (for example, networks for
  external user access). PSK or PPSK authentication can be used on semi-open
  networks (such as guest room networks of hotels). 802.1X authentication can be
  used on secure networks (such as office networks). MAC address authentication
  can be used for dumb terminals such as printers.
• For an SSID that is not intended for end users, for example, the SSID planned for
  printers and scanners, you can hide this SSID to prevent it from being detected by
  end users.
• Definition of Layer 2 roaming: When a STA moves between APs, the STA
  smoothly switches from the original AP to a new AP. This process is called
  roaming. The SSID, service VLAN, and gateway of the STA remain unchanged
  before and after roaming.
• When the number of APs exceeds the management specifications of the leader
  AP, you can create multiple management VLANs to manage the network. You
  are advised to plan management VLANs by floor or by physically continuous
  coverage area to ensure the continuity of the calibration regions.
• Port isolation cannot be configured on the uplink access switch of the cloud AP.
  (The calibration group is negotiated through wired-side broadcasting.)
• During scheduled radio calibration, you can enable intelligent radio calibration
  and use the analyzer to analyze historical data of the WLAN and predict
  interference sources on the network. During network optimization, APs can avoid
  possible interference sources on the network in advance to improve the quality of
  the entire WLAN.
2. B
• Customer benefits: The transformation improves efficiency by using algorithms.
  With scenario-based continuous learning and expert experience, intelligent O&M
  frees O&M personnel from complex alarms and noise, making O&M more
  automated and intelligent.
• ProtoBuf: Protocol Buffers
• SaaS: software as a service
• IaaS: infrastructure as a service
• RSSI: Received Signal Strength Indicator
• KQI: key quality indicator
• Wireless Network Health Evaluation Model:
     ▫ Access Experience
     ▫ Roaming Experience
     ▫ Throughput Experience
     1. APs scan all channels in real time and collect statistics such as co-channel
        interference, non-Wi-Fi interference, and the normal usage ratio of each
        channel.
     ▫ The Metric Overview area displays the average packet loss rate, average
       jitter, disorder rate, packet rate, and byte rate of the session.
     ▫ If the device role is correctly set on the resource side and LLDP is enabled,
       the Analysis and Demarcation area displays the full link topology of the
        session from the initiator to the responder. You can view devices such as
        APs, switches, and ACs that the session passes through. When a device is
        faulty, the device is marked in red and displayed as a poor-quality device.
     ▫ Application quality and air interface: You can click a device to view the
       performance metrics of the device and its interface or air interface in the
       session.
           ▪ The device metrics include the MOS value, packet loss rate, maximum
             number of consecutively lost packets, jitter, disorder rate, and
              deterioration time ratio.
     ▫ Connection failure but not a fault: Due to the instability of wireless client
       access (for example, when a client moves or passes through a coverage
       hole), user authentication failures recur in some time segments but do not
       affect user experience. The issue is resolved when the user automatically
       reconnects to the network.
• Use Case Benefits:
     ▫ High interference issues easily occur on Wi-Fi networks. These issues affect
       user experience and cause problems such as long network delay and
       congested network access. As a result, user experience on the Wi-Fi network
        deteriorates. Using correlation analysis and big data analytics,
        CampusInsight can detect high interference issues on the network, identify
        the most probable causes, and provide rectification suggestions.
• Key Technologies:
     ▫ To improve network reliability, redundant devices and links are usually used
       on an Ethernet switching network. However, due to network adjustment,
       configuration modification, upgrade, and cutover, data or protocol packets
       are often forwarded in a ring, which inevitably leads to loops.
     ▫ Topology-based loop path display: Restores the loop path of a Layer 2 loop
       based on the switch neighbor relationship.
• Use Case Benefits:
     2. Guide customers to test port cables to determine whether error packets are
        caused by cable aging or internal crosstalk.
• Use Case Benefits:
     ▫ When the port alternates between Up and Down states, the negotiated rate
       of the port changes multiple times. As a result, the port repeatedly
       performs rate negotiation, causing intermittent disconnection.
     ▫ CampusInsight continuously monitors the Down event of each port and can
       accurately detect any intermittent port disconnection on the network in a
        timely manner, providing expert suggestions.
• Simulation feedback: CampusInsight evaluates the radio score and the number of
  APs waiting for calibration based on the radio and neighbor information of APs,
  displays the calibration simulation effect through the AI algorithm, and provides
  channel adjustment suggestions. The APs with this function must be deployed on
  the floor.
• Intelligent radio calibration: Historical big data is analyzed using the AI algorithm.
  Network devices periodically request big data and the analytics results based on
  the calibration policy to implement intelligent radio calibration.
• Use Case Benefits:
     ▫ This version supports only RSSI-based wireless location; other location
       methods are not supported.
     ▫ Displays the people distribution heat map based on the specified time
       period.
• WAN
     ▫ WANs provide wider coverage than LANs and metropolitan area networks
       (MANs). The communication subnet of a WAN mainly uses the packet
       switching technology. The communication subnet of a WAN can use the
       public packet switching network, satellite communication network, and
       wireless packet switching network to interconnect the LANs or computer
       systems in different areas for resource sharing.
• WDM uses multiple lasers to transmit multiple beams of lasers with different
  wavelengths over a single optical fiber. The transmission bandwidth of WDM
  devices is high, and the live-network bandwidth can reach up to 8 Tbit/s.
• There are three types of inter-AS MPLS L3VPN solutions: Option A, Option B, and
  Option C.
• Option A applies to small inter-AS MPLS L3VPNs. Option B applies to midsize and
  large inter-AS MPLS L3VPNs. Option C applies to large or super-large inter-AS
  MPLS L3VPNs.
• By service usage:
     ▫ GRE
     ▫ DSVPN
     ▫ DSVPN IPsec
• GRE is a Layer 3 tunneling technology. A GRE tunnel is a virtual P2P connection
  that transmits encapsulated data packets.
• The two ends of a GRE tunnel are tunnel interfaces which encapsulate and
  decapsulate data packets. The tunnel interface that sends encapsulated packets is
  called the tunnel source interface, and the one that receives these packets on the
  peer end is called the tunnel destination interface.
     ▫ The IP protocol module checks the destination address in the packet header
       to determine how to forward this packet. If the packet is destined for the
       other end of the GRE tunnel, the IP protocol module sends the packet to
       the tunnel interface.
     ▫ After receiving the packet, the tunnel interface encapsulates the packet
       using GRE and delivers the packet to the IP protocol module.
• GRE encapsulates multicast data to allow data to be transmitted through GRE
  tunnels. Currently, IPsec can encrypt only unicast data. If multicast data, such as
  routing protocol, voice, and video data, needs to be transmitted over IPsec
  tunnels, a GRE tunnel can be established to encapsulate multicast data, and then
  IPsec encrypts the encapsulated packets. In this way, multicast data is encrypted
  and transmitted in the IPsec tunnel.
• GRE over IPsec combines advantages of both GRE and IPsec. It enables a network
  to support multiple upper-layer protocols and multicast packets, as well as
  packet encryption, identity authentication, and data integrity check.
• GRE over IPsec encapsulates packets first with GRE and then with IPsec. The
  IPsec encapsulation can use either of the following modes:
     ▫ Tunnel mode
     ▫ Transport mode
• DSVPN resolves the following defects of GRE over IPsec:
     ▫ If spokes use dynamically assigned addresses, static P2P GRE tunnels are
       difficult to deploy because the tunnel destination addresses keep changing.
• SSL VPN is a VPN remote access technology based on SSL. Mobile users (referred
  to as remote users in SSL VPN) can use SSL VPN to securely and conveniently
  access enterprise intranets and intranet resources, improving work efficiency.
• Before SSL VPN was developed, VPN technologies such as IPsec and L2TP were
  used to enable remote user access. However, these VPN technologies have the
  following disadvantages:
     ▫ TLPs are interfaces on the edge nodes of the network and provide the
       following functions:
     ▫ DCPs are edge nodes on the network and provide the following functions:
     ▫ The bandwidth used by the FTP service on the backbone network can be
        limited, and a higher priority can be assigned to database access.
• VoIP: voice over IP service
     ▫ In FEC, both FEC redundant packets and data packets are sent to the
       receiver. If an error is found, the receiver directly restores the lost data
       packets by using the FEC redundant packets. Error correction is thus
       performed at the receiver without retransmission, which is why this mode is
       called forward error correction (FEC).
2. A
• Description of fields in a GRE header:
     ▫ C: Checksum verification bit. The value 1 indicates that the Checksum field
       is inserted into the GRE header; the value 0 indicates that the GRE header
       does not contain the Checksum field.
     ▫ K: Key bit. The value 1 indicates that the Key field is inserted into the GRE
       header; the value 0 indicates that the GRE header does not contain the Key
       field.
     ▫ Recursion: Number of layers of GRE encapsulation. The value of this field is
       increased by 1 each time a GRE encapsulation is completed. If the number of
       encapsulation layers is greater than 3, the packet is discarded. This field
       prevents packets from being encapsulated endlessly.
     ▫ Flags: Reserved field. The value must be 0.
     ▫ Version: Version. The value must be 0.
     ▫ Protocol Type: Type of the passenger protocol. A common passenger protocol
       is IPv4, with the value 0x0800. The protocol number of Ethernet over GRE is
       0x6558.
     ▫ Checksum: Checksum of the GRE header and the payload.
     ▫ Key: Key used to authenticate the packet at the receive end.
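• For illustration, the following Python sketch packs and parses a minimal GRE
  header with the C and K bits laid out as described above. The helper names and
  the sample key value are our own, not from the slide or any product API.
  import struct

  GRE_PROTO_IPV4 = 0x0800          # passenger protocol: IPv4

  def build_gre_header(key=None, checksum_present=False):
      """Build a minimal GRE header as raw bytes (Recursion/Flags/Version all 0)."""
      flags = (0x8000 if checksum_present else 0) | (0x2000 if key is not None else 0)
      header = struct.pack("!HH", flags, GRE_PROTO_IPV4)
      if checksum_present:
          header += struct.pack("!HH", 0, 0)   # Checksum + reserved, computed over header and payload
      if key is not None:
          header += struct.pack("!I", key)     # Key used to match traffic at the receive end
      return header

  def parse_gre_flags(header):
      flags, proto = struct.unpack("!HH", header[:4])
      return {"C": bool(flags & 0x8000), "K": bool(flags & 0x2000),
              "Version": flags & 0x0007, "Protocol Type": hex(proto)}

  print(parse_gre_flags(build_gre_header(key=0x1234)))
  # {'C': False, 'K': True, 'Version': 0, 'Protocol Type': '0x800'}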
• Keepalive detection functions as follows:
  ▫ After being enabled on the source end of a GRE tunnel, the source end starts a
    timer to periodically send and count keepalive messages. The number of sent
    keepalive messages increases by one each time a keepalive message is sent.
  ▫ The destination end sends a response message to the source end each time it
    receives a keepalive message from the source end.
  ▫ If the source end receives a response before the counter reaches the preset
    value (the retry count), it considers the remote end reachable. If it receives no
    response before the counter reaches the retry count, it considers the peer end
    unreachable, resets the counter, and terminates the tunnel connection. Even
    then, the source interface keeps sending keepalive messages to the remote
    interface. When the remote interface becomes Up again, the source interface
    becomes Up and re-establishes the tunnel with the remote interface.
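• A minimal Python sketch of the keepalive counter logic described above (the
  retry count is an illustrative value, not a device default):
  class GreKeepalive:
      def __init__(self, retry_count=3):
          self.retry_count = retry_count
          self.unanswered = 0            # keepalives sent without a response
          self.peer_reachable = True

      def on_send_keepalive(self):
          self.unanswered += 1
          if self.unanswered >= self.retry_count:
              # No response before the counter reached the retry count:
              # consider the peer unreachable, reset the counter, tear down the tunnel.
              self.peer_reachable = False
              self.unanswered = 0

      def on_receive_response(self):
          # A response arrived before the counter reached the retry count.
          self.peer_reachable = True
          self.unanswered = 0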
• You can enable or disable checksum verification on both ends of a tunnel in
  actual applications. If checksum verification is enabled on the local end and
  disabled on the remote end, the local end does not check checksum values of
  received packets, but checks checksum values of packets to be sent. If checksum
  verification is disabled on the local end and enabled on the remote end, the local
  end checks checksum values of received packets, but does not check checksum
  values of packets to be sent.
• This field identifies traffic in a tunnel. Packets of the same traffic use the same
  key. During packet decapsulation, GRE identifies data packets of the same traffic
  based on the key. Packets will pass verification only when the two ends of the
  tunnel use the same Key field. If packets fail the verification, they will be
  discarded. Successful authentication requires that both ends are either configured
  with the same Key field or not configured with the Key field.
• NAS
    ▫ A network access server (NAS) is maintained by an ISP and connects to a
      dialup network. It is the nearest access point for PPP terminals. An NAS is
      used on a traditional dialup network. An ISP deploys an LAC on an NAS to
      provide L2TP services for remote dialup users and to establish tunnel
      connections with the enterprise headquarters.
• LAC
    ▫ An L2TP access concentrator (LAC) provides PPP and L2TP processing
      capabilities on a packet switched network. An LAC establishes an L2TP
      tunnel connection with an L2TP network server (LNS) based on the user
      name or domain name carried in PPP packets to extend PPP negotiation to
      the LNS. Different networking environments can have different devices
      functioning as an LAC.
      ▪ NAS-initiated scenario: On a traditional dialup network, an ISP deploys
         an LAC on an NAS. Alternatively, on the Ethernet of an enterprise
         branch, the ISP deploys a gateway for PPP terminals. The gateway
         functions as both a PPPoE server and an LAC.
      ▪ L2TP client-initiated scenario: In an enterprise branch, an L2TP client
         functioning as an LAC is configured on the gateway to initiate an L2TP
         tunnel establishment request to an LNS. In this case, no dialup is
         required in the remote system to trigger L2TP tunnel establishment.
      ▪ Client-initiated scenario: An employee on the go uses a PC or mobile
         terminal to access the Internet and uses the L2TP dialup software on the
         PC or mobile terminal. In this scenario, the PC or mobile terminal
         functions as an LAC.
    ▫ An LAC can establish multiple L2TP tunnels to isolate data flows. That is, it
      can carry multiple L2TP connections.
• Control message
  ▫ Control messages are used to establish, maintain, and tear down L2TP tunnels
    and sessions. During the transmission of control messages, mechanisms such
    as retransmission of lost messages and periodic detection of tunnel
    connectivity are used to ensure the reliability of control message transmission.
    Traffic control and congestion control on control messages are supported.
  ▫ Control messages are transmitted over an L2TP control channel. The control
    channel encapsulates control messages with L2TP headers and transmits them
    over an IP network.
• Data message
  ▫ Data messages are used to encapsulate PPP frames, which are transmitted
    over tunnels, but such tunnels are unreliable. That is, a lost data message is
    not retransmitted, and traffic control and congestion control on data
    messages are not supported.
  ▫ Data messages carrying PPP frames are transmitted over unreliable data
    channels. PPP frames are encapsulated using L2TP and then transmitted over
    the IP network.
• Establishing an L2TP tunnel
  ▫ After receiving a PPP negotiation request from a remote user, the LAC sends
    an L2TP tunnel establishment request to the LNS. The LAC and the LNS
    exchange L2TP control messages to negotiate the tunnel ID and tunnel
    authentication information. After the negotiation succeeds, an L2TP tunnel is
    established between them and identified by the tunnel ID.
  ▫ If an L2TP tunnel exists, the LAC and the LNS exchange control messages to
    negotiate the session ID. If no L2TP tunnel exists, the LAC and the LNS
    establish an L2TP tunnel first. The L2TP session carries LCP negotiation
    information and user authentication information of the LAC. After
    authenticating such information, the LNS notifies the LAC of the session
    establishment. The L2TP session is identified by a session ID.
  ▫ After the L2TP session is established, the PPP terminal sends data packets to
    the LAC. The LAC encapsulates the L2TP packets based on information such as
    the L2TP tunnel and session ID and sends the packets to the LNS. The LNS
    decapsulates the L2TP packets and sends the packets to the destination host
    based on the routing and forwarding table.
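• For reference, the control-message sequence behind the negotiation described
  above is listed below. The message names (SCCRQ/SCCRP/SCCCN for the tunnel,
  ICRQ/ICRP/ICCN for the session) come from RFC 2661; they are not shown on
  the slide.
  L2TP_TUNNEL_SETUP = [
      ("LAC -> LNS", "SCCRQ", "Start-Control-Connection-Request: tunnel ID, authentication info"),
      ("LNS -> LAC", "SCCRP", "Start-Control-Connection-Reply"),
      ("LAC -> LNS", "SCCCN", "Start-Control-Connection-Connected: tunnel established"),
  ]
  L2TP_SESSION_SETUP = [
      ("LAC -> LNS", "ICRQ", "Incoming-Call-Request: negotiate the session ID"),
      ("LNS -> LAC", "ICRP", "Incoming-Call-Reply"),
      ("LAC -> LNS", "ICCN", "Incoming-Call-Connected: session established"),
  ]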
• Employees on the go may need to communicate with the headquarters and
  access intranet resources of the headquarters at any time. Although they can
  access the headquarters gateway through the Internet, the headquarters
  gateway cannot identify and manage access users. To address this issue,
  configure the headquarters gateway as an LNS, so that virtual point-to-point
  connections can be established between the employees on the go and the
  headquarters gateway when the employees use the L2TP dialup software on the
  PC to initiate L2TP connections.
• An enterprise has some branches located in other cities, and its branches use the
  Ethernet and have gateways deployed for branch users to access the Internet.
  The headquarters provides access services for branches. VPDN connections need
  to be established between branches and the headquarters gateway. Any branch
  user is allowed to access the headquarters network, and only the branch
  gateways need to be authenticated. In this case, the headquarters gateway
  functions as the LNS, and the branch gateways function as the L2TP clients.
  Virtual dialup is created on the branch gateways to trigger L2TP tunnel
  connections to the headquarters network. A virtual point-to-point connection is
  established between an L2TP client and the LNS. After IP packets of branch users
  reach an L2TP client, the L2TP client forwards the packets to the virtual dialup
  interface. The virtual dialup interface forwards the packets to the LNS, which
  then forwards the packets to the destination host.
• An enterprise has some branches located in other cities, and its branches use the
  Ethernet and have gateways deployed for branch users to access the Internet.
  Headquarters users need to communicate with branch users, and the
  headquarters uniformly manages access of branch users. Therefore, L2TP is used
  to deploy the headquarters gateway as an LNS. Dialup packets of branch users
  cannot be transmitted directly over the Ethernet. Therefore, PPPoE dialup
  software needs to be deployed as a PPPoE client on the terminal that initiates the
  dialup packets, and the branch gateway functions as a PPPoE server and an LAC
  to forward call requests of branch users to the headquarters.
• The symmetric encryption algorithm is also called traditional cryptographic
  algorithm, in which the encryption key can be calculated from the decryption key.
  The sender and receiver share the same key, which is used for both encryption
  and decryption. Symmetric key encryption is an effective method for encrypting a
  large amount of data. There are many algorithms for symmetric key encryption,
  and all of them aim to convert between cleartext (unencrypted data) and
  ciphertext. Because symmetric key encryption uses the same key for data
  encryption and decryption, data security depends on whether unauthorized users
  obtain the symmetric key. If two communicating parties want to use the
  symmetric key to encrypt data, they must exchange the key securely before
  exchanging the encrypted data.
  ▫ IKE negotiation: The management cost of IPsec SAs established through IKE
    negotiation is low. The encryption and authentication modes are generated
    using the Diffie-Hellman (DH) algorithm, SA information is generated
     periodically, and SAs are dynamically updated. This mode applies to small-,
     medium-, and large-sized networks.
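• Coming back to the symmetric-key principle above: the Python sketch below
  uses the third-party cryptography package's Fernet recipe (AES-based) purely to
  show that one shared key both encrypts and decrypts. It is not the cipher
  negotiated by IKE, just an illustration of symmetric-key usage.
  from cryptography.fernet import Fernet   # pip install cryptography

  key = Fernet.generate_key()              # the single shared secret both parties must hold
  cipher = Fernet(key)
  ciphertext = cipher.encrypt(b"cleartext to protect")
  assert cipher.decrypt(ciphertext) == b"cleartext to protect"   # only works with the same key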
• IKE supports the following encryption algorithms: DES, 3DES, AES-128, AES-192,
  AES-256, SM1, and SM4.
• ISAKMP is defined in RFC 2408, which defines the procedures for negotiating,
  establishing, modifying, and deleting SAs and defines the ISAKMP message
  format. ISAKMP provides a general framework for SA attributes and the methods
  of negotiating, modifying, and deleting SAs, without defining the specific SA
  format.
• ISAKMP messages can be transmitted using UDP or TCP through port 500. In
  most cases, ISAKMP messages are transmitted using UDP.
• Integrity check value (ICV) is used by the receiver for integrity check. Available
  authentication algorithms are MD5, SHA1, SHA2, and SM3.
• Common authentication algorithms used by IPsec include MD5, SHA1, SHA2, and
  SM3. MD5 and SHA1 are not recommended because they are insecure and pose
  security risks.
• Key fields:
  ▫ Authentication Data: This field contains the Integrity Check Value (ICV) and is
    used by a receiver for data integrity check. Available authentication algorithms
    are MD5, SHA1, SHA2, and SM3.
• In transport mode, an AH or ESP header is added between an IP header and a
  transport-layer protocol (TCP, UDP, or ICMP) header to protect the TCP, UDP, or
  ICMP payload. As no additional IP header is added, IP addresses in the original
  packets are visible in the IP header of the post-encrypted packet.
• In tunnel mode, an AH or ESP header is added before the raw IP header and then
  encapsulated into a new IP packet with a new IP header to protect the IP header
  and payload.
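• The difference in header placement can be summarized schematically (the
  AH/ESP header is shown generically; this is only an illustration of the order
  described above):
  transport_mode = ["Original IP header", "AH/ESP header", "TCP/UDP/ICMP header", "Payload"]
  tunnel_mode    = ["New IP header", "AH/ESP header", "Original IP header",
                    "TCP/UDP/ICMP header", "Payload"]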
• The IPsec mechanism is as follows:
  ▫ The IKE SA is used to encrypt the packets in the second phase of IKE
    negotiation. That is, IPsec SAs are negotiated in the second phase of IKE
    negotiation.
     ▪ The encryption key and authentication key used for secure data
       transmission over IPsec SAs are generated based on the keys generated in
       phase 1 and parameters such as the SPI and protocol. This ensures that
       each IPsec SA has unique encryption and authentication keys.
• In the IP header added during IPsec encapsulation, the source and destination
  addresses are the IP addresses of the local interface and remote interface to
  which an IPsec policy is applied.
• IPsec protects data flows from the GRE tunnel source to the GRE tunnel
  destination. In the IP header added during GRE encapsulation, the source and
  destination addresses are the source and destination addresses of a GRE tunnel.
• L2TP encapsulation and then IPsec encapsulation are performed on packets
  transmitted over an L2TP over IPsec tunnel. In the IP header added during IPsec
  encapsulation, the source and destination addresses are the IP addresses of the
  local interface and remote interface to which an IPsec policy is applied.
• IPsec needs to protect the data flows from the L2TP tunnel source to the L2TP
  tunnel destination. In the IP header added to packets during L2TP encapsulation,
  the source and destination addresses are the source and destination addresses of
  an L2TP tunnel. When a branch connects to the headquarters, the source address
  of the L2TP tunnel is the IP address of the outbound interface on the L2TP access
  concentrator (LAC), and the destination address is the IP address of the inbound
  interface on the L2TP network server (LNS).
• A public IP header is added to packets during L2TP encapsulation, and another
  public IP header is added to packets if L2TP over IPsec in tunnel mode is used,
  resulting in longer packets, which are prone to being fragmented. Therefore,
  L2TP over IPsec in transport mode is recommended.
• The L2TP over IPsec negotiation process and packet encapsulation process are
  similar when traveling employees are remotely connected to the headquarters
  and when branch employees are connected to the headquarters. The difference is
  that, L2TP and IPsec encapsulation is performed on clients when traveling
  employees are remotely connected to the headquarters. The L2TP tunnel source
  address is the private address assigned to a client and can be any address in the
  IP address pool configured on the LNS. The L2TP tunnel destination address is the
  address of the inbound interface on the LNS.
• NAPT enables a public IP address to map multiple private IP addresses through
  ports. In this mode, both IP addresses and transport-layer ports are translated so
  that different private IP addresses with different source port numbers are
  mapped to the same public IP address with different source port numbers.
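• A worked illustration of NAPT: many private (IP, port) pairs share one public IP
  address and are distinguished by the translated source port. The addresses and
  ports below are made up for illustration.
  PUBLIC_IP = "203.0.113.10"
  nat_table = {}                  # (private_ip, private_port) -> (public_ip, public_port)
  next_port = 10000

  def translate(private_ip, private_port):
      global next_port
      key = (private_ip, private_port)
      if key not in nat_table:
          nat_table[key] = (PUBLIC_IP, next_port)
          next_port += 1
      return nat_table[key]

  print(translate("192.168.1.10", 5000))   # ('203.0.113.10', 10000)
  print(translate("192.168.1.11", 5000))   # same public IP, different public port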
• NAPT and Easy IP are also known as source NAT because they change only the
  source address and port number of a packet.
• There are four types of NAT (as classified in RFC 3489):
  ▫ Full cone NAT
  ▫ Restricted cone NAT
  ▫ Port restricted cone NAT
  ▫ Symmetric NAT
• In RFC 3489, STUN is a complete NAT traversal solution and its full name is
  Simple Traversal of UDP Through NATs.
• In the new RFC 5389 revision, STUN is positioned to provide a tool for NAT
  traversal rather than a complete solution. The full name of STUN is changed to
  Session Traversal Utilities for NAT. Besides the full name difference, STUN in RFC
  5389 differs from STUN in RFC 3489 in that STUN in RFC 5389 supports NAT
  traversal for TCP.
• A STUN client sends a STUN binding request to the STUN server.
• After receiving the STUN binding request, the STUN server obtains the source IP
  address and port number, constructs a STUN binding response, and sends the
  response to the client.
• The STUN client obtains an IP address and port number from the binding
  response, and compares the obtained IP address and port number with the
  source IP address and port number carried in the binding request. If they are
  different, a NAT device is used in front of the STUN client.
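• A minimal Python sketch of the binding exchange described above, assuming an
  RFC 5389 server. The server address is a placeholder, and the response parsing
  is simplified (it only looks for XOR-MAPPED-ADDRESS and omits error handling).
  import os, socket, struct

  MAGIC_COOKIE, BINDING_REQUEST, XOR_MAPPED_ADDRESS = 0x2112A442, 0x0001, 0x0020

  def behind_nat(server=("stun.example.com", 3478)):
      txn_id = os.urandom(12)
      request = struct.pack("!HHI12s", BINDING_REQUEST, 0, MAGIC_COOKIE, txn_id)
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.connect(server)                      # so getsockname() shows the pre-NAT source address
      local = sock.getsockname()
      sock.send(request)
      data = sock.recv(2048)
      offset = 20                               # fixed 20-byte STUN header
      while offset + 4 <= len(data):
          attr_type, attr_len = struct.unpack_from("!HH", data, offset)
          if attr_type == XOR_MAPPED_ADDRESS:
              x_port, x_addr = struct.unpack_from("!HI", data, offset + 6)
              mapped = (socket.inet_ntoa(struct.pack("!I", x_addr ^ MAGIC_COOKIE)),
                        x_port ^ (MAGIC_COOKIE >> 16))
              return mapped != local            # True: a NAT device sits in front of the client
          offset += 4 + attr_len + (-attr_len % 4)   # attributes are padded to 32 bits
      return None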
• STUN clients use BGP to learn each other's NAT information (IP addresses and
  port numbers before and after NAT).
• The local STUN client uses the local pre-NAT IP address and port number and the
  pre-NAT IP address and port number of the peer STUN client to construct a
  STUN binding request and sends it to the peer STUN client. In addition, the local
  STUN client uses the local pre-NAT IP address and port number and the post-
  NAT IP address and port number of the peer STUN client to construct a STUN
  binding request and sends it to the peer STUN client. The peer STUN client
  performs the same operations.
• After receiving the STUN binding request, the peer STUN client sends a STUN
  binding response to the local STUN client. The local STUN client performs the
  same operations.
• After the preceding STUN messages are exchanged, a data channel is established
  between the STUN clients so that packets can traverse the NAT devices.
• The SAC signature database file can only be updated through upgrades and
  cannot be manually modified.
• The SAC signature database can be updated in either of the following modes:
  ▫ Online update: The SAC signature database can be updated through the
    security center platform or intranet update server.
  ▫ Local update: The upgrade package is downloaded from the security center
    platform and uploaded to the device through FTP for the update of the SAC
    signature database.
• After a packet enters the device, the device determines whether the
  corresponding application has been identified based on the 5-tuple information
  carried in the packet. If the application has been identified, the device forwards
  the packet at Layer 3 without identifying the application again. If the
  application has not been identified, the device performs the SAC application
  identification process, processes the packet based on the identification result,
  and then forwards the packet at Layer 3. The SAC application identification
  process is as follows:
  ▫ The device first identifies the application based on the ACL rules defined in
    FPI.
  ▫ If the application cannot be identified, the device identifies it based on the
    DNS entries defined in FPI.
  ▫ If the application still cannot be identified, the device identifies it based on
    the protocol and port mapping table defined in FPI.
  ▫ If the application still cannot be identified, the device starts the SA
    identification process.
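• The identification order listed above can be sketched as a simple cascade. The
  dictionaries and lambdas below stand in for the FPI rule tables and the SA
  engine; they are illustrative placeholders, not a device API.
  def identify_application(pkt, acl_match, dns_entries, port_map, sa_identify):
      app = acl_match(pkt)                                          # 1. ACL rules defined in FPI
      app = app or dns_entries.get(pkt["dst_ip"])                   # 2. DNS entries defined in FPI
      app = app or port_map.get((pkt["protocol"], pkt["dst_port"])) # 3. protocol/port mapping
      return app or sa_identify(pkt)                                # 4. fall back to SA identification

  pkt = {"dst_ip": "198.51.100.7", "protocol": "tcp", "dst_port": 443}
  print(identify_application(pkt, acl_match=lambda p: None,
                             dns_entries={"198.51.100.7": "example-saas"},
                             port_map={("tcp", 443): "https"},
                             sa_identify=lambda p: "unknown"))      # 'example-saas'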
• FPI applications are classified into the following types:
  ▫ Predefined and user-defined FPI applications based on the protocol and port
    number: These two types of applications are identified using entries that are
    generated based on the protocol and port number carried in packets. The
    difference is as follows: Packets of a predefined FPI application contain
    common protocols and port numbers, while packets of a user-defined FPI
    application contain the protocols and ports that you define.
  ▫ Predefined and user-defined FPI applications based on the DNS domain name:
    These two types of applications are identified using DNS entries generated
    through association between FPI and DNS. The difference is as follows:
    Packets of a predefined FPI application contain common DNS domain names,
    while packets of a user-defined FPI application contain the DNS domain
    names that you define.
  ▫ User-defined FPI application based on 5-tuple and DSCP information. This
    application is identified based on the user-defined 5-tuple and DSCP
    information using advanced ACL rules.
• Identification process of FPI applications based on the DNS domain name
  ▫ FPI applications based on the DNS domain name are identified using DNS
    entries generated through association between FPI and DNS. The FPI signature
    database contains the mappings between domain names and applications.
    DNS response packets contain the mappings between domain names and IP
    addresses. Based on the mappings, a device generates DNS entries, which
    contain the mappings between IP addresses and applications. The device
    searches for DNS entries based on the IP address carried in the application
    protocol packets to identify the corresponding application.
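• A data-structure view of the mapping chain described above (the domain names
  and addresses are made up for illustration):
  signature_db = {"meeting.example.com": "ExampleMeeting"}     # FPI signature database: domain -> application
  dns_responses = {"meeting.example.com": ["198.51.100.20"]}   # observed DNS responses: domain -> IP addresses
  dns_entries = {ip: app                                       # derived DNS entries: IP -> application
                 for domain, app in signature_db.items()
                 for ip in dns_responses.get(domain, [])}
  print(dns_entries.get("198.51.100.20"))                      # 'ExampleMeeting'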
• SPR classifies services based on the following attributes:
  ▫ Protocol types: IP, TCP, UDP, GRE, IGMP, IPINIP, OSPF, and ICMP
• When SPR selects routes for services based on the NQA detection result, the CMI
  is calculated using the following formula:
• When SPR selects routes for services based on the IP FPM detection result, the
  CMI is calculated using the following formula:
  ▫ CMI = D + J + L
2. A
• ONUG: refers to Open Networking User Group. ONUG is an influential user
  organization led by large enterprises and comprised mainly of IT users. It was
  founded by IT technical executives of well-known large enterprises in North
  America and is dedicated to driving IT implementation and network technology
  transformation for large enterprises. ONUG members include large enterprises in
  industries such as finance, insurance, medical care, and retail. ONUG serves as a
  platform for high-end customers in North America to discuss and communicate
  IT requirements.
• Gartner: It is the world's most authoritative IT research and advisory company. Its
  research scope covers all IT industries. It provides objective and fair
  demonstration reports and market research reports for customers in terms of IT
  research, development, evaluation, application, and market, thereby assisting
  customers in market analysis, technology selection, project demonstration, and
  investment decision-making.
• The overall architecture of the SD-WAN solution consists of the network layer,
  control layer, and orchestration layer. The layers are associated with each other
  through standard interfaces and communication protocols.
• In terms of network device functions, the network layer of the SD-WAN Solution
  consists of two types of NEs: CPE and gateway (GW).
• RR site: The CPE at the site functions as an RR and distributes EVPN routes
  between CPE gateways at edge sites based on VPN topology policy.
• If the tenant administrator assigns the role of "gateway + RR" to an egress CPE
  when adding the CPE, the site where the CPE resides is an RR site. If no device at
  a site is assigned the "gateway + RR" role, the site is an edge site.
• An edge site can establish IBGP peer relationships with two RRs that back up
  each other.
• Multiple RRs can be deployed for a tenant. All RRs are connected in full-mesh
  mode on the control plane.
• A gateway has different roles in different service scenarios. For example, a
  gateway connected to a legacy site may be referred to as an interworking
  gateway (IWG), and a gateway connected to the cloud may be called a cloud
  gateway. These gateways can extend functions by interconnecting with each
  other to establish a Point of Presence (PoP) network, where these gateways are
  referred to as PoP gateways.
• ZTP: Multiple ZTP modes are available to enable EDGEs to quickly register with
  iMaster NCE-WAN.
• Visualized O&M for quick fault locating: iMaster NCE-WAN collects network-wide
  data and displays key indicators, helping O&M personnel quickly locate faults.
• Huawei SD-WAN Solution supports the following ZTP modes:
     ▫ Email-based deployment
     ▫ DHCP-based deployment
     ▫ USB-based deployment
• Files for USB-based, email-based, and DHCP-based deployment can be generated
  through iMaster NCE-WAN.
• For details about each deployment mode, learn the course Management and
  O&M.
• Management channel:
• Control channel:
• Data channel:
     ▫ EDGEs forward data based on GRE or GRE over IPsec tunnels. The extended
       GRE header carries VN IDs to differentiate tenants or departments, thereby
       transmitting data of multiple VNs over the same tunnel.
• A TNP is a WAN port on a CPE used for connecting to a transport network. The
  key TNP information includes the site ID, CPE router ID, transport network ID,
  public IP address, private IP address, and tunnel encapsulation mode.
• For details about SAC and SPR, learn the course HA Technologies.
• An SD-WAN site can be deployed with a single CPE or dual CPEs. For small sites,
  a single CPE can be deployed. For sites with high reliability requirements, dual
  CPEs are recommended to provide device-level redundancy.
• A maximum of 10 WAN links can be deployed for each CPE at an SD-WAN site.
  During actual deployments, to enhance reliability and facilitate O&M, it is
  recommended that a maximum of three WAN links be deployed for a single CPE
  at a site, and a maximum of six WAN links be deployed for a site with two CPEs.
• LAN side connected to Layer 2 networks:
     ▫ Small sites have a simple intranet structure, and CPEs typically connect to
       the site intranet at Layer 2.
     ▫ For small sites, for example, SOHO sites, LAN-side interfaces can be directly
       connected to terminals at the sites.
• In the dual-CPE architecture, VRRP is usually configured on the CPEs so that
  the dual-CPE deployment does not affect the LAN side.
     ▫ Multiple switches can be deployed on the LAN side to form a stack. If two
       CPEs are deployed at a site, they can be interconnected directly or through
       the LAN.
• In the Layer 3 interconnection scenario, if only one CPE is deployed, the network
  structure is simple. In such a scenario, only the routing protocol needs to be
  configured on the LAN side based on requirements of LAN-side devices.
• Solution 1 is recommended. In this solution, the interlink and service links are
  independent of each other. When WAN-side links are adjusted, the interlink will
  not be affected, and the service flow direction is clear.
• The TN and RD are used to set up overlay tunnels in enumerated mode.
• The CPE router ID is used to establish BGP peer relationships between different
  sites.
• The public and private IP addresses are used as the source or destination IP
  addresses of control and data channels.
     ▫ Some CPEs are deployed behind the NAT device. To establish data channels
       between CPEs, you need to know the post-NAT public IP address.
           ▪ CPEs typically use the Session Traversal Utilities for NAT (STUN)
             technology to detect public IP addresses.
• Data tunnels are enumerated before being established to ensure that all
  available data tunnels are established.
• Tunnels can be enumerated only when the following conditions are met:
     ▫ The CPE has learned the TNP information of the peer site.
• The management channel is used to establish control channels and deliver basic
  configurations.
     ▫ To prevent site network adjustment from affecting the stability of RRs, you
       are advised to use method 2, that is, independent deployment of RRs.
     ▫ Use high-performance devices as RRs. For details about the devices that can
       function as RRs, see the specifications list.
     ▫ Configure a public IP address for an RR, or deploy a NAT device before the
        RR. Only 1:1 static NAT is supported.
     ▫ An EDGE site can connect to a maximum of two RR sites. Two EDGEs can
       be deployed at each RR site deployed for a tenant. If there are a large
       number of EDGEs, multiple RRs can be deployed, and each RR serves some
       EDGE sites.
     ▫ When one EDGE is connected to two RR sites, the EDGE establishes a BGP
       connection with each RR at the RR sites.
     ▫ If a branch site has a standby link, for example, the branch or the RR has a
       standby link, and the active link is normal, no control channel is established
       for the standby link. When all active links from the branch site to the RR
        are down, the standby link is involved in the establishment of control
        channels.
• An EDGE can be connected to a maximum of two RR sites (four RRs).
• For small networks (for example, a network with fewer than 50 sites), RRs and
  hubs can be deployed in co-located mode.
• RRs require strong BGP connection capabilities (number of BGP peers), large
  number of EVPN connections, and high route reflection capability and efficiency.
  In actual deployments, select the RR models recommended in the specifications
  list, for example, AR6300/AR6280.
• An MSP administrator deploys independent RRs as a service provided by carriers
  or MSPs for enterprise users to access. Two RR service modes are available:
  sharing and exclusive. In sharing mode, one RR is shared by multiple tenants. In
  exclusive mode, one RR is exclusively used by a tenant.
• For details about the product specifications, see the product documentation.
• Area-based networking:
     ▫ Multiple areas are created under a tenant, and multiple hub sites are
       deployed in the HQ/DC. Each area is associated with one or two hub sites.
     ▫ RRs can be independently deployed. Each pair of RRs is associated with the
       sites in an area.
     ▫ Traffic between inter-area sites is transmitted through the LAN side of the
       hub.
• Tenant-based networking:
     ▫ The MSP administrator creates multiple tenants, and multiple hub sites are
       deployed in the HQ/DC. Each tenant is associated with one or two hub
       sites.
     ▫ Branch sites are grouped by their geographical areas and are added to
       different tenants.
     ▫ RRs can be independently deployed. Each pair of RRs is associated with the
       sites in an area.
     ▫ Traffic between inter-area sites is transmitted through the LAN side of the
       hub.
• Hub-spoke:
• Full-mesh:
  ▫ In the topology, different branches can directly communicate with each other,
     without the need to divert traffic through intermediate nodes.
• Partial-mesh:
• Hierarchical networking:
     ▫ If some departments do not fully use their bandwidth quotas, the idle
        bandwidth resources can be used by other departments with insufficient
        bandwidth.
     ▫ Internet access traffic and traffic for communication with legacy sites needs
       to be controlled separately.
• Traffic cannot be transmitted to the Internet through multiple links in load
  balancing mode. The links can work only in active/standby mode based on their
  priorities.
• If local Internet access is enabled, the default route on the underlay WAN needs
  to be configured. The default route can be a static route (mainly for Internet
  access through the Internet network interface) or BGP/OSPF route (mainly for
  Internet access through the MPLS network interface).
• A dedicated link is established between user-side interfaces on both the legacy
  CPE and SD-WAN CPE. The dedicated link runs a protocol such as BGP or OSPF
  to exchange routes between the legacy MPLS network and SD-WAN network. In
  this way, users on the two networks can communicate with each other through
  the dedicated link.
• Multiple traffic models are supported in this scenario, and you can choose one
  based on your service requirements.
     ▫ Distributed local access: This model applies if all SD-WAN sites can access
       legacy sites over the underlay MPLS network through local breakout. In this
       model, traffic of each site is directly forwarded through the local site,
       without the need of being forwarded through overlay tunnels.
     ▫ Centralized local access: If some SD-WAN sites cannot access legacy sites
       through local breakout, you can configure a site that can communicate
       with the legacy sites as the centralized access site. Traffic from other SD-
       WAN sites is sent to the centralized access site through overlay tunnels, and
       then forwarded to the legacy sites through local breakout.
     ▫ Hybrid local access: The SD-WAN Solution enables multi-link sites using the
       distributed local access model to use local access preferentially, with
       centralized local access as a backup. This enhances reliability. Traffic from a
        site that uses the distributed local access model is preferentially transmitted
        to a legacy site through local breakout. If the MPLS link for local access
        fails, traffic is automatically switched to the overlay tunnel of another link
        and transmitted to the centralized access site. The centralized access site
        then forwards the traffic to legacy sites.
• Active-active hub site networking based on service network segments:
• The active and standby controllers use the same southbound and northbound IP
  addresses. In public network scenarios, the same NAT address must also be
  configured.
1. ABCD
2. A
• Internet:
     ▫ Dial-up connection: low bandwidth and low tariff. The access is not
       restricted by geographical locations. It applies to individual users.
• In the past, production and office services were carried over multiple independent
  private networks. Repeated network construction resulted in high investment
  costs and complex O&M of multiple networks. By deploying multiple slices on
  one IP bearer network, production services such as remote industrial control and
  video surveillance are directly isolated from office services, delivering 100%
  bandwidth guarantee for mission-critical services.
• iFIT integrates the RFC 8321 coloring technology and in-band detection
  technology to directly measure service packets. It works with second-level
  telemetry data collection and iMaster NCE for unified management, computation,
  and visualization. In this way, it implements real-time visualization and proactive
  monitoring of network quality SLAs and fast fault demarcation and locating.
• Quick Network Adjustment upon Cloud Changes, Integrated Service Provisioning,
  Cloud-based Data Sharing.
1. B
• The P devices for intra-city data centers are directly connected using WDM or
  bare optical fibers. The link bandwidth can reach 10 Gbit/s. To reduce costs,
  consider connecting local data centers with remote data centers through
  carriers' MSTP links.
• For more information about MPLS, see HCIP-Datacom-Advanced Routing &
  Switching Technology.
• Based on MPLS and IPv6 forwarding technologies, SR Policies can be classified
  into SR-MPLS and SRv6 Policies.
• Traditional switching networks, such as asynchronous transfer mode (ATM) and
  frame relay (FR) networks, are integrated with IP or MPLS networks. As a result,
  Layer 2 virtual private network (L2VPN) emerges. L2VPN includes Virtual Pseudo
  Wire Service (VPWS) and Virtual Private LAN Service (VPLS):
     ▫ VPWS is a P2P L2VPN technology that emulates the basic behaviors and
       characteristics of services such as ATM and frame relay.
     ▫ VPLS provides P2MP L2VPN services so that sites are connected as if they
       were on the same LAN.
• L3VPN is also called Virtual Private Routing Network (VPRN), including RFC
  2547-based BGP/MPLS IP VPN as well as IPsec VPN and GRE VPN carried over
  IPsec or GRE tunnels.
• A traditional L2VPN does not have any control plane and does not transmit
  service route information (MAC addresses). It uses BGP as the signaling protocol
  to establish VCs.
• For details about VPN classification, see the book SRv6 Network Programming:
  Ushering in a New Era of IP Networks.
• VCs are also called pseudo wires (PWs) in some documents.
• GRE: GRE can be applied to both L2VPN and L3VPN. Generally, the bearer WAN
  for MPLS VPN uses LSPs as public network tunnels. If the bearer WAN (P devices)
  has only IP functions but not MPLS functions, and the PEs at the network edge
  have MPLS functions, the LSPs cannot be used as public network tunnels. In this
  case, GRE tunnels can be used to replace LSPs to provide L3VPN or L2VPN
  solutions on the bearer WAN.
• You can configure tunnel policies or tunnel policy selectors for tunnel
  management. This course uses tunnel policy configuration as an example. Tunnel
  policy selectors apply to inter-AS VPN scenarios. For details, see the product
  documentation for NetEngine products.
• The configuration of tunnel policy parameters involves many details. For example,
  CR-LSP-based tunnels include RSVP-TE tunnels and SR-MPLS TE tunnels. The
  system determines the priorities of these tunnels based on their up time. For
  details, see "VPN Tunnel Management Configuration" in the product
  documentation for Huawei NetEngine routers.
• For the application of other types of VPN, such as VPNv6, L2VPN, and EVPN, see
  the product documentation for NetEngine routers.
• For details, see "SR Policy" in 2. Terminology in RFC 8402.
• SR Policy traffic diversion can be based on the binding SID, color, and DSCP value.
  Details are not provided here.
• Different collection protocols may be used in different solutions. For example,
  PCEP is used to collect TE tunnel information on Huawei MPLS networks, and
  BGP SR Policy is used to collect TE tunnel information on SRv6 networks.
• Bandwidth-balanced path: path with more remaining bandwidth among all paths
  that meet the constraints and have the same cost.
• Maximum-availability path: path with the maximum availability among all paths
  that meet the constraints.
• For the basic packet forwarding process, see HCIP-Datacom-Core Technology-01
  Introduction to Network Devices.
• Packet Forwarding Engine (PFE): After a router is powered on, it runs a routing
  protocol to learn the network topology and generate a routing table. If the
  interface board registers successfully, the main control board can generate
  forwarding entries according to the routing table and deliver entries to the
  interface board. In this manner, the router can forward packets according to the
  forwarding table. The component that forwards data packets is a chip located on
  an interface board and is called a packet forwarding engine (PFE).
• Weighted Random Early Detection (WRED): The system discards packets based
  on the drop policies configured for data packets or queues with different
  priorities. WRED is a congestion avoidance mechanism used to discard packets to
  prevent queues from being congested. For details, see the product
  documentation for Huawei NetEngine products.
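• A simplified sketch of a WRED drop decision: below the low threshold nothing
  is dropped, above the high threshold everything is dropped, and in between the
  drop probability rises linearly. The thresholds and maximum drop probability
  are illustrative, and real implementations work on an averaged queue length
  per drop profile.
  import random

  def wred_drop(queue_len, low=20, high=40, max_drop_prob=0.2):
      if queue_len < low:
          return False                       # below the low threshold: never drop
      if queue_len >= high:
          return True                        # above the high threshold: always drop
      return random.random() < max_drop_prob * (queue_len - low) / (high - low)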
• FIFO: FIFO does not classify packets. FIFO allows packets to be queued and
  forwarded in the same order as they arrive at an interface.
• SP: Queues are scheduled strictly according to their priorities. Packets in queues
  with a low priority can be scheduled only after all packets in queues with a
  higher priority are scheduled.
• WFQ: The egress bandwidth is allocated to each flow according to the queue
  weight.
• Other scheduling algorithms, such as round robin (RR), weighted round robin
  (WRR), and deficit round robin (DRR), are not described here.
• PQ queue
    ▫ PQ queues use the SP scheduling algorithm. That is, the packets in the
      queue with the highest priority are scheduled first. In this way, an absolute
      priority can be provided for different service data, the delay of delay-
      sensitive applications such as VoIP can be guaranteed, and the use of
      bandwidth by high-priority services can be absolutely prioritized.
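• A minimal sketch of SP/PQ scheduling: the highest-priority non-empty queue is
  always served first, so lower-priority queues wait. The queue count, priorities,
  and packet names are illustrative.
  from collections import deque

  queues = [deque(), deque(), deque()]       # index 0 = highest priority

  def enqueue(priority, packet):
      queues[priority].append(packet)

  def dequeue():
      for q in queues:                       # strict priority: scan from the highest priority down
          if q:
              return q.popleft()
      return None

  enqueue(2, "best-effort"); enqueue(0, "voip")
  print(dequeue())                           # 'voip' is scheduled first despite arriving later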
• WFQ queue
• The ingress PE encapsulates the VPN SID and SRv6 Policy information into the
  service flow on the network slice with the slice ID being 2, and inserts an
  extension header with the Hop By Hop Slice ID being 2 between the IPv6 header
  and SRH of each packet.
• Each transit node queries the SRv6 SID in the SRH hop by hop to obtain the
  physical outbound interface, and then queries the specific "resource reservation"
  sub-interface of the physical outbound interface based on the slice ID. The Hop
  By Hop Slice ID remains unchanged throughout this process.
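• The two-step lookup on a transit node can be illustrated with plain dictionaries
  (the SID, interface names, and slice ID value are made up):
  sid_to_interface = {"2001:db8:100::1": "GE0/0/1"}        # SRH SID -> physical outbound interface
  slice_subif = {("GE0/0/1", 2): "GE0/0/1 resource-reservation sub-interface for slice 2"}

  def transit_forward(srv6_sid, slice_id):
      phys = sid_to_interface[srv6_sid]                    # step 1: SID lookup
      return slice_subif.get((phys, slice_id), phys)       # step 2: slice ID lookup

  print(transit_forward("2001:db8:100::1", slice_id=2))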
• The egress PE pops the Hop By Hop extension header and forwards the packet to
  the AC interface of the corresponding VPN instance based on the VPN SID.
• By default, the slice ID is 0, and the IPv6 Hop By Hop extension header does not
  need to be inserted. The packet format on the forwarding plane is the same as
  that of traditional L3VPN over SRv6 Policy.
• Because different WAN VPN technologies use different terms, this section briefly
  describes various protection mechanisms, but does not describe specific
  protection technologies.
• MPLS TE E2E protection is classified into HSB and ordinary backup. In HSB
  protection, the backup path and primary path are created at the same time.
• TWAMP Light does not involve control plane negotiation, and test packets are
  also based on UDP. The implementation and configuration are simple, and the
  reflector does not need to know the session status.
• iFIT measures E2E service packets to obtain performance indicators of an IP
  network, such as the packet loss rate and delay. iFIT adds a color flag to the
  packet header in the service flow. Telemetry is used to periodically collect
  information. Features such as E2E delay measurement and packet loss
  measurement are supported.
• An End.DT4 SID (PE endpoint SID) identifies an IPv4 VPN instance on a network.
• For MPLS packets, the iFIT header is inserted between the MPLS label and MPLS
  payload.
1. A
2. ABCD
• In essence, MPLS is a tunneling technology used to guide data forwarding and
  has complete tunnel creation, management, and maintenance mechanisms. For
  the preceding mechanisms, networks are driven by network operation and
  management requirements, not by applications.
• Traditionally, IP data packet forwarding is implemented based on IP addresses
  reachable to the destination over the shortest path. To meet the reliability
  requirements of services such as voice, online gaming, and video conferencing,
  the FRR technology is introduced. To meet the high bandwidth requirements of
  private line services such as group customer services, the TE technology is
  introduced. These technologies all represent network adaptation to services.
• The solution to this issue is to enable services to drive networks and define the
  network architecture. Specifically, after an application raises requirements (e.g.
  latency, bandwidth, and packet loss rate), a controller is used to collect
  information (e.g. network topology, bandwidth usage, and latency) and compute
  an explicit path according to the requirements.
• https://datatracker.ietf.org/doc/rfc8402/
• The label values used in this course are only examples. For details about the label
  allocation scope, see the corresponding product documentation.
• For SR-capable IGP instances, all IGP-enabled outbound interfaces are allocated
  with SR adjacency labels, which are propagated to the entire network through an
  IGP.
• In Huawei's early solutions, an IGP can also be used to collect network topology
  information. Due to IGP area-related restrictions, BGP-LS is mainly used at
  present.
• Before SR-MPLS TE tunnel creation, IS-IS/OSPF neighbor relationships must be
  established between forwarders to implement network layer connectivity,
  allocate labels, and collect network topology information. In addition, the
  forwarders need to report label and network topology information to a controller
  for path computation. If no controller is available, CSPF can be enabled on the
  ingress of the SR-MPLS TE tunnel so that forwarders can compute paths using
  CSPF.
• For SR-MPLS TE tunnel configuration on a forwarder, in addition to manually
  specifying an explicit path, you can also use the function of path computation by
  the ingress.
• Currently, the mainstream solution is strict-path forwarding based on adjacency
  labels.
• https://datatracker.ietf.org/doc/draft-ietf-spring-segment-routing-policy/
     ▫ BGP routes delivered by the controller carry the color extended community
       attribute, which is transitive. The ingress finds a matching BGP
       route and recurses it to an SR Policy based on the color and endpoint
       information.
• Huawei SR-MPLS Policy solution also uses PCEP for tunnel status query.
• An SR Policy can contain multiple candidate paths (e.g. CP1 and CP2). Each of
  the paths is uniquely determined by the triplet <protocol, origin, discriminator>.
• CP1 is the primary path because it is valid and has the highest preference. The
  two SID lists of CP1 are delivered to the forwarder, and traffic is balanced
  between the two paths based on weights. For SID-List <SID11...SID1i>, traffic is
  balanced according to W1/(W1+W2). In the current mainstream implementation,
  a candidate path has only one segment list.
• Source of BSIDs: SRLB or SRGB
• Each candidate path of an SR Policy has a BSID. The BSIDs of different candidate
  paths of the same SR Policy are generally the same. The BSIDs of different SR
  Policies must be different. Generally, the BSID range needs to be planned and
  cannot be shared with other services.
• The headend of an SR Policy forwards packets over the SR Policy based on the
  BSID. For example, when the headend receives a packet carrying a BSID, it uses
  the corresponding SR Policy to forward the packet.
• BSIDs are used in label-based traffic steering scenarios, especially label stitching
  scenarios and tunnel protocol interworking scenarios, such as LDP over SR.
     1.    Controller planning: You can plan the color attribute and the mapping
          between the color attribute and SR tunnels' SLA requirements (path
          computation constraints) on the controller based on the SLA requirements
          of services.
     3. Create a BGP session between the ingress and egress to advertise BGP VPN
        route information.
• In VPN FRR, service convergence time depends on only the time required to
  detect remote PE failures and change tunnel status, making service convergence
  time irrelevant to the number of VPN routes on the bearer network.
• In this example, VPN FRR primary and backup paths exist from PE1 to PE3. They
  are not all displayed in the figure.
• Fault detection in hot standby and VPN FRR scenarios depends on detection
  mechanisms such as BFD or SBFD.
• Because the state machine has only Up and Down states, the initiator can send
  packets carrying only the Up or Down state and receive packets carrying only the
  Up or Admin Down state. The initiator starts by sending an SBFD packet carrying
  the Down state to the reflector. The destination and source port numbers of the
  packet are 7784 and 4784, respectively; the destination IP address is a user-
  configured address on the 127 network segment; the source IP address is the
  locally configured LSR ID.
• The reflector runs no SBFD state machine or detection mechanism. For this
  reason, it does not proactively send SBFD Echo packets. Instead, it only reflects
  back received SBFD packets. The destination and source port numbers in the
  looped-back SBFD packet are 4784 and 7784, respectively; the source IP address
  is the locally configured LSR ID; the destination IP address is the source IP
  address of the initiator.
• PCEP was first proposed in the optical transport field. It is seldom deployed on
  enterprises' production networks due to its few applications on IP networks,
  difficult interoperability between vendors, and poor performance. Therefore, BGP
  SR-Policy is recommended on an SR-MPLS network.
• Before configuring an SR-MPLS BE tunnel, you need to enable MPLS on each
  device in the SR-MPLS domain. The configuration procedure is as follows:
     ▫ Run the mpls lsr-id lsr-id command to configure an LSR ID for the local
       device.
                − LSRs do not have default LSR IDs, and such IDs must be
                  manually configured.
     ▫ Enable SR globally.
     ▫ Run the ipv4-family command to enter the VPN instance IPv4 address
       family view.
          ▪ Run the mpls lsr-id lsr-id command to configure an LSR ID for the
            local device.
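• As a reference, the steps above can be pulled together on a single node running IS-IS as in
  the following sketch. All values (LSR ID, NET, SRGB range, prefix SID index) are illustrative,
  and exact keywords may differ by product and software version; text after # is annotation only.
     mpls lsr-id 1.1.1.9
     mpls
     #
     segment-routing                                # Enable SR globally
     #
     isis 1
      is-level level-2
      cost-style wide                               # Wide metrics are required to carry SR TLVs
      network-entity 49.0001.0010.0100.1009.00
      segment-routing mpls                          # Enable SR-MPLS in the IGP
      segment-routing global-block 160000 161000    # Optional: plan the SRGB (range is illustrative)
     #
     interface LoopBack0
      ip address 1.1.1.9 255.255.255.255
      isis enable 1
      isis prefix-sid index 10                      # Prefix SID = SRGB base + 10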
• An explicit path is a vector path composed of a series of nodes that are arranged
  in the configuration sequence. The path through which an SR-MPLS TE LSP
  passes can be planned by specifying next-hop labels or next-hop IP addresses on
  an explicit path. Generally, the IP addresses involved in an explicit path are
  interface IP addresses. An explicit path that is in use can be updated. To configure
  an explicit path, perform the following steps:
     ▫ Run the next sid label label-value type { adjacency | prefix | binding-sid }
       command to specify a next-hop SID for the explicit path.
           ▪ Run the index index sid label label command to specify a next-hop
             SID for the segment list.
                 − You can run the command multiple times. The system generates
                   a label stack for the segment list by index in ascending order. If
                   a candidate path in an SR-MPLS Policy is preferentially selected,
                   traffic is forwarded using the segment list of the candidate path.
                   A maximum of 10 SIDs can be configured for each segment list.
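• The following sketch illustrates both options on the ingress: an explicit path referenced by
  an SR-MPLS TE tunnel, and a segment list referenced by an SR-MPLS Policy candidate path.
  Label values, names, and the endpoint address are examples only, and keywords may differ by
  product and software version; text after # is annotation only.
     explicit-path p1
      next sid label 330001 type adjacency          # Adjacency SID of the first hop
      next sid label 330002 type adjacency
     #
     interface Tunnel1
      ip address unnumbered interface LoopBack0
      tunnel-protocol mpls te
      destination 4.4.4.9
      mpls te tunnel-id 1
      mpls te signal-protocol segment-routing
      mpls te path explicit-path p1                 # Strict path based on adjacency labels
     #
     segment-routing
      segment-list sl1
       index 10 sid label 330001                    # Labels are stacked by index in ascending order
       index 20 sid label 330002
      sr-te policy policy1 endpoint 4.4.4.9 color 100
       candidate-path preference 100
        segment-list sl1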
• The color attribute is added to a route through a route-policy. This enables the
  route to recurse to an SR-MPLS Policy based on the color value and next-hop
  address in the route.
▫ Configure a route-policy.
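• A minimal sketch of this step is shown below; the color value 0:100, the policy name, and the
  peer address are illustrative, and the route-policy can be applied either on export at the
  egress PE or on import at the ingress PE. Text after # is annotation only.
     route-policy RP_COLOR permit node 10
      apply extcommunity color 0:100                # Color used to recurse the route to an SR-MPLS Policy
     #
     bgp 100
      ipv4-family vpnv4
       peer 4.4.4.9 route-policy RP_COLOR import    # Color VPNv4 routes received from the remote PE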
1   Huawei Confidential
    Objectives
2   Huawei Confidential
    Contents
1. SRv6 Overview
2. SRv6 Fundamentals
3   Huawei Confidential
  IP/MPLS Network Introduction
  ⚫       As a Layer 2.5 technology that runs between Layer 2 and Layer 3, MPLS adds connection-oriented attributes to connectionless IP networks. Traditional
          MPLS label-based forwarding improves the forwarding efficiency of IP networks. However, as hardware capabilities continue to improve, MPLS no
          longer features distinct advantages in forwarding efficiency. Nevertheless, MPLS provides good QoS guarantee for IP networks through connection-
          oriented label forwarding and also supports TE, VPN, and FRR.
  ⚫       IP/MPLS networks have gradually replaced dedicated networks, such as ATM, frame relay (FR), and X.25. Ultimately, MPLS is applied to various
          networks, including IP backbone, metro, and mobile transport, to support multi-service transport and implement the Internet's all-IP transformation.
  Figure: an IP/MPLS network forwards packets encapsulated as Ethernet Header | MPLS Header | IP Packet.
4 Huawei Confidential
• In the initial stage of network development, multiple types of networks, such as X.25, FR,
  ATM, and IP, co-existed to meet different service requirements. These networks could not
  interwork with each other and also competed with one another, with ATM and IP
  networks taking center stage. ATM is a transmission mode that uses fixed-length cell
  switching. It establishes paths in connection-oriented mode, and can provide better QoS
  capabilities than IP. The design philosophy of ATM involves centering on networks and
  providing reliable transmission, and its design concepts reflect the reliability and
  manageability requirements of telecommunications networks. This is the reason why ATM
  was widely deployed on early telecommunications networks. The design concepts of IP
  differ greatly from those of ATM. To be more precise, IP is a connectionless communication
  mechanism that provides the best-effort forwarding capability, and the packet length is not
  fixed. On top of that, IP networks mainly rely on the transport-layer protocols (e.g., TCP) to
  ensure transmission reliability, and the requirement for the network layer involves ease of
  use. The design concept of IP networks embodies the "terminal-centric and best-effort"
  notion of the computer network, enabling IP to meet the computer network's service
  requirements. The competition between the two can essentially be represented as a
  competition between telecommunications and computer networks. As the network scale
  expanded and network services increased in number, ATM networks became more
  complex than IP networks, while also bearing higher management costs. Within the context
  of costs versus benefits for telecom carriers, ATM networks were gradually replaced by IP
  networks.
• Although IP is more suitable for the development of computer networks than ATM,
  computer networks require a certain level of QoS guarantee. To compensate for the IP
   network's insufficient QoS capabilities, numerous technologies integrating IP and ATM, such
   as local area network emulation (LANE) and IP over ATM (IPoA), have been proposed.
   However, these technologies only addressed part of the issue, until 1996 when MPLS
   technology was proposed to provide a better solution to this issue.
Issues with MPLS LDP and RSVP-TE
  Figure: example topologies (R1–R4) illustrating MPLS LDP and RSVP-TE.
 6           Huawei Confidential
  SR Origin and Solution
  ⚫       The SDN concept has had a great impact on the network industry, and many protocols used for SDN implementation have emerged in the
          industry, including OpenFlow, Protocol Oblivious Forwarding (POF), Programming Protocol-independent Packet Processors (P4), and
          SR. Compared with revolutionary protocols, SR considers compatibility with the existing network and smooth evolution, and also
          provides programmability. It is a de facto SDN standard.
7 Huawei Confidential
• SR resolves many issues on IP/MPLS networks through two solutions: SR-MPLS (based on
  MPLS forwarding) and SRv6 (based on IPv6 forwarding).
  From MPLS to SRv6
  ⚫       MPLS causes isolated network islands. SRv6 provides a unified forwarding plane and has advantages such as
          simplified protocols, high scalability, and programmability.
  Figure: forwarding-plane comparison — MPLS label operations (Push, Swap, Pop) versus SRv6 operations (Push, Continue, Next) on IPv6 + SRH packets, illustrating control plane and forwarding plane simplification. Benefits: simplified protocols, high scalability, programmability.
8 Huawei Confidential
• Although MPLS plays an important role in the all-IP transformation of networks, it causes
  isolated network islands. On the one hand, it increases the complexity of cross-domain
  network interconnection. For example, solutions such as the MPLS VPN Option A/B/C
  solution are complex to deploy and involve difficult E2E service deployment. On the other
  hand, as the Internet and cloud computing develop, more and more cloud data centers are
  built. To meet tenants' networking requirements, multiple overlay technologies have been
  proposed, among which VXLAN is a typical example. In the past, quite a few attempts were
  made to provide VPN services by introducing MPLS to data centers. However, these
  attempts all wound up in failure due to multiple factors, including numerous network
  boundaries, complex management, and insufficient scalability. As such, the traffic from an
   end user to a service in a data center may typically need to pass through the VLAN, IP
   network, IP/MPLS network, and VXLAN network.
            Thanks to the network programming capability, SRv6 can not only better implement path programming to meet service SLAs but also connect networks and applications
             to build intelligent cloud-networks.
  Figure: SRv6 advantages — compatibility with existing networks (common IPv6 routers and SRv6 routers coexist), controller-based path programming to meet service SLAs (e.g. for data download and video), improved inter-AS experience (AS 65000/AS 65001), and promotion of cloud-network convergence.
    9        Huawei Confidential
     Contents
1. SRv6 Overview
     2. SRv6 Fundamentals
           ◼   Basic Concepts of SRv6
    11   Huawei Confidential
                                                                                                                         SRv6 SRH          SRv6 Node          SRv6 Forwarding
  IPv6 SRH
  ⚫    RFC 8754 defines the IPv6 SRH added to IPv6 packets. The SRH format is as follows:
  Figure: SRH format — Next Header | Hdr Ext Len | Routing Type = 4 | Segments Left | Last Entry | Flags | Tag | Segment List [0]...[n] (128-bit IPv6 addresses) | Optional TLV objects (variable), followed by the IPv6 payload. The segment list entry indexed by SL is the active segment and is written to the IPv6 Destination Address.
     ▫ Segment List: an ordered list of SRv6 segment identifiers (SIDs).
     ▫ Segments Left (SL): number of remaining SRv6 segments. The SL value is decremented and the destination IP address (DIP) is changed to the active SID to complete traffic forwarding segment by segment.
     ▫ Tag: tags a packet as part of a class or group of packets to implement group-based policies.
     ▫ SRH TLVs (e.g. NSH metadata, HMAC TLV, and Padding TLV): can be used as global parameters of SIDs in segment lists.
12 Huawei Confidential
• The biggest difference between SRv6 and SR-MPLS lies in the IPv6 SRH. SRv6 uses IPv6
  extension headers to implement Segment Routing.
SRv6 Segment
⚫    SRv6 segments are expressed using IPv6 addresses and usually called SRv6 SIDs.
⚫    As shown in the figure, an SRv6 SID usually consists of three fields: Locator, Function, and Arguments. They are
     expressed in the Locator:Function:Arguments format. Note that the total length (Locator + Function + Arguments) is
     less than or equal to 128 bits. If the total length is less than 128 bits, the reserved bits are padded with 0.
⚫    If the Arguments field does not exist, the format is Locator:Function. The Locator field occupies the most significant
     bits of an IPv6 address, and the Function field occupies the remaining part of the IPv6 address.
    13   Huawei Confidential
                                                                                                            SRv6 SRH   SRv6 Node   SRv6 Forwarding
  ⚫    The Locator field identifies the location of a network node, and is used for other nodes to route and forward packets to this
       identified node so as to implement network instruction addressing.
  ⚫    A locator has two important characteristics: routable and aggregatable. After a locator is configured for a node, the system
       generates a locator route and propagates the route throughout the SR domain using an IGP, allowing other nodes to locate the
       node based on the received locator route information. In addition, all SRv6 SIDs advertised by the node are reachable through the
       route.
  ⚫    In the following example, a locator with the 64-bit prefix 2001:DB8:ABCD:: is configured for a Huawei device.
14 Huawei Confidential
• The locator is routable and therefore usually unique in an SR domain. In some scenarios,
  such as an anycast protection scenario, multiple devices may be configured with the same
  locator.
                                                                                                                    SRv6 SRH        SRv6 Node    SRv6 Forwarding
  ⚫    The Function field identifies the forwarding behavior to be performed. In SRv6 network programming, forwarding behaviors are
        identified using different functions. For example, RFC 8986 defines the End, End.X, End.DX4, and End.DX6 behaviors.
  ⚫    An End.X SID is similar to an adjacency SID in SR-MPLS and is used to identify a link. A configuration example is as follows:
                        ⚫        The opcode corresponding to the function is ::1. In this example, the Arguments field is not carried, and the
                                 SRv6 SID is 2001:db8:abcd::1.
                        ⚫        This function guides packet forwarding from the specified interface (G3/0/0) to the corresponding neighbor
                                 (2001:DB8:200::1).
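• The configuration example referenced above can be sketched roughly as follows; the locator
  name is illustrative, and the exact keyword order of the opcode command may differ by product
  and software version (text after # is annotation only).
     segment-routing ipv6
      locator SRV6_LOC ipv6-prefix 2001:DB8:ABCD:: 64 static 32
       opcode ::1 end-x interface GigabitEthernet3/0/0 nexthop 2001:DB8:200::1   # Static End.X SID 2001:DB8:ABCD::1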
15 Huawei Confidential
• In some scenarios, an SRv6 endpoint behavior may require additional actions. In this case,
  the Arguments field must be encapsulated. For example, in an EVPN VPLS scenario where
  CE multi-homing is deployed for BUM traffic forwarding, the Function field is set to
  End.DT2M, and the Arguments field is used to provide local ESI mapping to implement split
  horizon.
• The Function and Arguments fields can both be defined by engineers, resulting in an SRv6
  SID structure that improves network programmability. In most scenarios, the Arguments
  field is not configured.
                                                                                                                   SRv6 SRH        SRv6 Node         SRv6 Forwarding
16 Huawei Confidential
• In addition to L3VPN services, SRv6 can carry L2VPN services. L2VPN-related SIDs are as
  follows:
          ▫ End.DT2U: Indicates a Layer 2 cross-connect endpoint SID that requires unicast MAC
            table lookup and identifies an endpoint. If a bypass tunnel exists on the network, an
                  End.DT2UL SID is generated automatically. This SID can be used to guide unicast
                  traffic forwarding over the bypass tunnel when a CE is dual-homed to PEs. The
                  corresponding function is to remove the IPv6 header (along with all its extension
                  headers), search the MAC address table for a MAC entry based on the exposed
                  destination MAC address, and then forward the remaining packet data to the
                  corresponding outbound interface based on the entry. This SID can be used in EVPN
                  VPLS unicast scenarios.
     ▫ End.DT2M: Indicates a Layer 2 cross-connect endpoint SID that requires broadcast-
       based flooding and identifies an endpoint. The corresponding function is to remove
        the IPv6 header (along with all its extension headers) and then broadcast the
        remaining packet data in the Bridge Domain (BD). This SID can be used in EVPN VPLS
        BUM scenarios.
     ▫ End.OP (OAM Endpoint with Punt): Indicates an OAM SID. The corresponding
       function is to send OAM packets to the OAM process. This SID is mainly used in
        ping/tracert scenarios.
                                                                                                              SRv6 SRH        SRv6 Node       SRv6 Forwarding
 V: searches a specified table for packet forwarding based on virtual local area network (VLAN) information.
 U: searches a specified table for packet forwarding based on unicast MAC address information.
    18       Huawei Confidential
                                                                                                                             SRv6 SRH           SRv6 Node         SRv6 Forwarding
⚫      An End.X SID is a Layer 3 cross-connect endpoint SID that identifies a link. It is similar to an adjacency SID in SR-MPLS. After an End.X SID is generated on a node, the node
       propagates the SID to all the other nodes in the SRv6 domain through an IGP. Although the other nodes can all obtain the SID, only the node generating the SID knows how to
       implement the instruction bound to the SID.
⚫      An End.DT4 SID is a PE-specific endpoint SID that identifies an IPv4 VPN instance. The instruction bound to the End.DT4 SID is to decapsulate packets and search the routing
       table of the corresponding IPv4 VPN instance for packet forwarding. The End.DT4 SID is equivalent to an IPv4 VPN label and used in L3VPNv4 scenarios. It can be either
       manually configured or automatically allocated by BGP within the dynamic SID range of the specified locator.
  Figure: L3VPNv4 example topology — CE1 (AS 65001, with Loopback1 10.1.4.4/32) and CE2 (AS 65002, with Loopback1 10.1.5.5/32) connected across the SRv6 domain.
     19      Huawei Confidential
                                                                                                         SRv6 SRH       SRv6 Node      SRv6 Forwarding
  SRv6 Flavors
  ⚫    Flavors are additional behaviors defined for SRv6 segment enhancement. These behaviors are optional and used to
       enhance SRv6 segment-based actions in order to meet diverse service requirements.
  ⚫    SRv6-Network-Programming defines the following additional behaviors: penultimate segment pop of the SRH (PSP),
       ultimate segment pop of the SRH (USP), and ultimate segment decapsulation (USD).
20 Huawei Confidential
• Different flavors can be combined. For example, if an End SID carries PSP and USP flavors,
  the PSP action is performed on the penultimate endpoint node, and the USP action is
  performed on the ultimate endpoint node.
                                                                                                  SRv6 SRH      SRv6 Node     SRv6 Forwarding
21 Huawei Confidential
• static static-length: specifies the static segment length in the Function field. This length
  determines the number of static opcodes that can be configured in the specified locator.
• args args-length: specifies the length of the Arguments field. The Arguments field is located
  at the end of a SID. If args args-length is configured, the Arguments field is reserved and
  will not be occupied by configured static SIDs or generated dynamic SIDs.
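• A minimal sketch of the locator configuration and its advertisement through IS-IS is shown
  below; the locator name, prefix, and source address are illustrative, and exact keywords may
  differ by product and software version (text after # is annotation only).
     segment-routing ipv6
      encapsulation source-address 2001:DB8:1::1                  # Source address used for SRv6 encapsulation
      locator SRV6_LOC ipv6-prefix 2001:DB8:ABCD:: 64 static 32   # 64-bit locator, 32-bit static opcode space, args omitted
     #
     isis 1
      segment-routing ipv6 locator SRV6_LOC                       # Advertise the locator route and SIDs through IS-IS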
                                                                                           SRv6 SRH      SRv6 Node     SRv6 Forwarding
   An End.X SID identifies a Layer 3 adjacency of an SRv6 node. Therefore, you need to specify an interface and the next hop
   address of the interface during the configuration.
  22   Huawei Confidential
                                                                                                                      SRv6 SRH         SRv6 Node        SRv6 Forwarding
⚫    In this example, the locator 2001:DB8:ABCD:: is configured, and its length is 64 bits. The static segment occupies 32 bits, the dynamic segment 32 bits,
     and the Args field 0 bits. The value range is as follows:
            Static segment: The start value is 2001:DB8:ABCD:0000:0000:0000:0000:0001, and the end value is 2001:DB8:ABCD:0000:0000:0000:FFFF:FFFF.
 Dynamic segment: The start value is 2001:DB8:ABCD:0000:0000:0001:0000:0000, and the end value is 2001:DB8:ABCD:0000:FFFF:FFFF:FFFF:FFFF.
                  Statically configuring End and End.X SIDs is recommended. Dynamically allocated SIDs will change after a device
                  restart, adversely affecting maintenance.
    23       Huawei Confidential
                                                                                           SRv6 SRH         SRv6 Node         SRv6 Forwarding
SRv6 Node
⚫    RFC 8754 defines three types of SR nodes:
            SR source node: a source node that encapsulates packets with SRv6 headers.
            Transit node: an IPv6 node that forwards SRv6 packets but does not perform SRv6 processing.
            SRv6 segment endpoint node: a node that receives and processes SRv6 packets in which the destination IPv6
             address is a local SID or local interface address of the node.
  Figure: example topology — CE1 – R1 – R2 – R3 – R4 – CE2, where the End.DT4 SID FC04::400 (on R4) corresponds to CE2's VPN.
    24       Huawei Confidential
                                                                                                       SRv6 SRH        SRv6 Node         SRv6 Forwarding
  Figure: SRv6 Policy example on the source node — topology CE1 – R1 – R2 – R3 – R4 – CE2, with the End.DT4 SID FC04::400 for CE2 on R4 and locator FC01::/96 (End SID FC01::1) on R1. R1 encapsulates the packet with an IPv6 header and an SRH (SL = 2) carrying the segment list <FC04::400, FC04::4, FC03::3>.
    25     Huawei Confidential
                                                                                                                 SRv6 SRH   SRv6 Node   SRv6 Forwarding
     ▫ H.Encaps: encapsulates an outer IPv6 header and SRH for a received IP packet, and searches the corresponding routing table for packet forwarding.
     ▫ H.Encaps.Red: encapsulates an outer IPv6 header and reduced SRH for a received IP packet, and searches the corresponding routing table for packet forwarding.
     ▫ H.Encaps.L2: encapsulates an outer IPv6 header and SRH for a received Layer 2 frame, and searches the corresponding routing table for forwarding.
     ▫ H.Encaps.L2.Red: encapsulates an outer IPv6 header and reduced SRH for a received Layer 2 frame, and searches the corresponding routing table for forwarding.
  Figure: Encaps operation — the original packet (IPv6 Header | Payload) becomes IPv6 Header | SRH | IPv6 Header | Payload.
26 Huawei Confidential
• The difference between a reduced SRH and a normal SRH is that the segment list in a reduced
  SRH does not contain the first segment, because that segment is already carried in the IPv6 DA.
                                                                                                     SRv6 SRH         SRv6 Node         SRv6 Forwarding
Transit Node
⚫    A transit node is an IPv6 node that does not participate in SRv6 processing on the SRv6 packet forwarding path. That is, the transit
     node just performs ordinary IPv6 packet forwarding.
⚫    After receiving an SRv6 packet, the node parses the IPv6 DA field in the packet. If the IPv6 DA is neither a locally configured SRv6
     SID nor a local interface address, the node considers the SRv6 packet as an ordinary IPv6 packet and searches the routing table for
     packet forwarding without processing the SRH.
⚫    A transit node can be either an ordinary IPv6 node or an SRv6-capable node.
  Figure: R1 (source node, locator FC01::/96, End SID FC01::1) – R2 (transit node, FC02::/96, FC02::2) – R3 (endpoint node, FC03::/96, FC03::3) – R4 (endpoint node, FC04::/96, FC04::4, End.DT4 FC04::400 for CE2), with CE1 and CE2 attached at the edges. R2 forwards the packet, which carries an IPv6 header and an SRH (SL = 2) with segment list <FC04::400, FC04::4, FC03::3>, as an ordinary IPv6 packet.
    27    Huawei Confidential
                                                                                                                  SRv6 SRH          SRv6 Node          SRv6 Forwarding
  Endpoint Node
  ⚫    An endpoint node is a node that receives an SRv6 packet destined for itself (a packet of which the IPv6 destination address is a local SID).
  ⚫    For example, R3 searches its local SID table based on the IPv6 DA FC03::3 of the packet and finds a matching End SID. Then, R3 decrements the SL
       value by 1, uses the SID whose SL value is 1 as the destination IPv6 address, searches the routing table, and forwards the packet.
  ⚫    There may be multiple endpoint nodes on the data forwarding path. Each endpoint node provides services such as packet forwarding, encapsulation,
       and decapsulation.
28 Huawei Confidential
• Each SRv6 node maintains a local SID table that contains all SRv6 SIDs generated on the
  node, and an SRv6 FIB can be generated based on the table. The local SID table provides the
  following functions:
⚫    In the initial phase of SRv6 deployment, SRv6 BE can be used to quickly provision services based on IPv6 route
     reachability, offering unparalleled advantages. During future evolution, transit nodes can be upgraded on demand
     and SRv6 Policy can be deployed to meet the requirements of high-value services.
    29       Huawei Confidential
                                                                                                         SRv6 SRH      SRv6 Node         SRv6 Forwarding
    30     Huawei Confidential
                                                                                                                        SRv6 SRH        SRv6 Node           SRv6 Forwarding
  Figure: SRv6 BE forwarding across R1 (source node, FC01::/96, End SID FC01::1) – R2 (FC02::2) – R3 (FC03::3) – R4 (FC04::4) – R5 (FC05::5); the packet carries no SRH, and DIPv6 FC05::5 and SIPv6 FC01::1 remain unchanged hop by hop.
    31       Huawei Confidential
                                                                                                                          SRv6 SRH         SRv6 Node            SRv6 Forwarding
  Figure: SRv6 segment-list forwarding across R1–R5 with segment list <FC05::5, FC04::4, FC03::3>. R1 and R2 send the packet with DIPv6 FC03::3 and SRH (SL = 2); R3 decrements SL to 1 and sets DIPv6 to FC04::4; R4 decrements SL to 0 and sets DIPv6 to FC05::5. If the type of the SID whose SL value is 0 is End, End.X, or End.DT, the SRH is removed on the penultimate segment by default, so the packet arrives at R5 carrying only the payload.
  ⚫     Different from SR-MPLS label processing, SRv6 SRH processing is implemented from the bottom up, and segments in the SRv6 SRH are not popped
        after being processed by a node. Therefore, the SRv6 header can be used for path backtracking.
32 Huawei Confidential
• In MPLS, different removal options are defined using the Implicit-Null and Non-null options.
  Penultimate hop popping (PHP) in the MPLS data plane refers to the process in which the
  outermost label of the MPLS label stack is removed by an LSR before the packet reaches the
  adjacent label edge router (LER). If PHP is not enabled on the MPLS network, the LER is
  responsible for removing the label.
• These behaviors are defined as two functions in SRv6: PSP and USP.
     Contents
1. SRv6 Overview
     2. SRv6 Fundamentals
           ▫ Basic Concepts of SRv6
           ◼   SRv6 Policy Path Establishment and Traffic Steering
34 Huawei Confidential
• https://datatracker.ietf.org/doc/draft-ietf-spring-segment-routing-policy/
  SRv6 Policy Identification
  ⚫    An SRv6 Policy is identified by the tuple <headend, color, endpoint>.
  ⚫    For an SRv6 Policy with a specified headend, it is identified only using <color, endpoint>.
              Headend: node where an SRv6 Policy is originated. Generally, it is a globally unique IP address.
              Color: 32-bit extended community attribute. It is used to identify a type of service intent (e.g. low delay).
              Endpoint: destination address of an SRv6 Policy. Generally, it is a globally unique IPv6 address.
  ⚫    On the specified headend, the color and endpoint are used to identify the forwarding path of the corresponding
       SRv6 Policy.
  Figure: example — the headend maintains SRv6 Policy 1 <color 15, endpoint 1>, SRv6 Policy 3 <color 25, endpoint 2>, and another policy with color 20; each SRv6 Policy is identified by its <color, endpoint> pair.
35 Huawei Confidential
     ▫ In SRv6 forwarding, an endpoint node is a type of device that processes the SRH.
     ▫ The endpoint of an SRv6 Policy refers to the policy's egress, which is generally
       expressed using an IPv6 address.
                                                                                             SRv6 Policy Path Establishment      Traffic Steering to SRv6 Policies
⚫ A candidate path is an SRv6 Policy's basic unit that is manually configured or is sent to the headend through BGP IPv6 SR Policy.
⚫ Weights can be configured for segment lists to control load balancing among SRv6 paths.
  Figure: structure of an SR Policy —
    SR Policy P1 <headend, color, endpoint>
      Candidate-path CP1 <protocol, origin, discriminator>, Preference 200 (primary path)
        Weight W1, SID-List1 <SID11...SID1i>
        Weight W2, SID-List2 <SID21...SID2j>
      Candidate-path CP2 <protocol, origin, discriminator>, Preference 100
        Weight W3, SID-List3 <SID31...SID3i>
        Weight W4, SID-List4 <SID41...SID4j>
36 Huawei Confidential
• An SR Policy can contain multiple candidate paths (e.g. CP1 and CP2). Each of the paths is
  uniquely determined by the triplet <protocol, origin, discriminator>.
• CP1 is the primary path because it is valid and has the highest preference. The two SID lists
  of CP1 are delivered to the forwarder, and traffic is balanced between the two paths based
  on weights. For the SID list <SID11...SID1i>, traffic is balanced according to W1/(W1+W2).
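• For reference, a manually configured SRv6 Policy with one candidate path might look roughly
  like the following sketch; the SIDs, color, endpoint, and names are illustrative, and exact
  keywords may differ by product and software version (text after # is annotation only).
     segment-routing ipv6
      segment-list list1
       index 10 sid ipv6 FC02::1C                   # End.X SID of the first hop
       index 20 sid ipv6 FC04::4                    # End SID of the endpoint node
      srv6-te policy policy1 endpoint FC04::4 color 100
       candidate-path preference 200
        segment-list list1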
                                                                                                             SRv6 Policy Path Establishment         Traffic Steering to SRv6 Policies
 The controller uses BGP IPv6 SR Policy to deliver SRv6 Policy information (e.g. headend, color, and endpoint) to the headend.
  ⚫    Huawei's SRv6 Policy solution also uses NETCONF to deliver other configurations, such as service interfaces and route-policies (with the color
       attribute).
⚫ In addition to delivering SRv6 Policies through iMaster NCE-IP, you can also manually deploy SRv6 Policies.
  Figure: controller-based SRv6 Policy deployment — (1) BGP-LS reports topology and SID information to the controller; (2) BGP IPv6 SR Policy delivers the computed policy to the headend; (3) NETCONF delivers service configurations. Extended IS-IS runs between the headend and the endpoint, and routes carry the color attribute.
37 Huawei Confidential
  Figure: the controller establishes a BGP-LS peer relationship with the network to collect topology and SRv6 Policy status information.
38 Huawei Confidential
• BGP-LS connection:
           ▫ BGP-LS supports the collection of SR Policy status information, based on which the
             controller displays tunnel status. https://datatracker.ietf.org/doc/draft-ietf-idr-te-lsp-
             distribution/
           ▫ BGP-LS also needs to be deployed on the headend to advertise the SRv6 Policy status.
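• A minimal BGP-LS peering sketch toward the controller is shown below; the AS number and
  controller address are illustrative, and exact keywords may differ by product and software
  version (text after # is annotation only).
     isis 1
      bgp-ls enable level-2                         # Allow IS-IS topology information to be exported to BGP-LS
     #
     bgp 65000
      peer 2001:DB8:100::1 as-number 65000          # Controller address
      #
      link-state-family unicast
       peer 2001:DB8:100::1 enable                  # Report topology and SR Policy status over BGP-LS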
                                                                                                SRv6 Policy Path Establishment            Traffic Steering to SRv6 Policies
39 Huawei Confidential
           ▫ The controller delivers SRv6 Policy information to forwarders for SRv6 Policy
             generation.
            ▫ BGP routes delivered by the controller carry the color extended community
              attribute, which is transitive. The headend finds a matching BGP route and recurses it to the
             corresponding SRv6 Policy based on the color and endpoint information.
  ⚫    In an AS, devices can use an extended IGP (extended OSPFv3 or IS-IS) to obtain intra-AS SID information. In inter-AS scenarios, however, BGP egress
       peer engineering (EPE) needs to be used to transmit SID information.
  Figure: inter-AS example — nodes FC01::1, FC02::2, FC03::3, and FC04::4, with SIDs FC02::1C and FC03::1C associated with the inter-AS links.
40 Huawei Confidential
• BGP EPE can allocate BGP peer SIDs to inter-AS paths. Peer SIDs are classified into the
  following types:
           ▫ A Peer-Node SID identifies a peer node. The peers at both ends of each BGP session
             are allocated with Peer-Node SIDs. An EBGP peer relationship established based on
             loopback interfaces may traverse multiple physical links. In this case, the Peer-Node
             SID of a peer is mapped to multiple outbound interfaces. Peer-Node SIDs are End
             SIDs.
• BGP EPE allocates SIDs only to BGP peers and links, but cannot be used to construct a
  forwarding path. BGP peer SIDs must be used with IGP SIDs to form an E2E path.
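• As a rough sketch, BGP EPE is enabled per EBGP peer on the ASBR; the peer address and locator
  name are illustrative, and the srv6 locator sub-option shown here is an assumption that should
  be checked against the product documentation (text after # is annotation only).
     bgp 65001
      peer 2001:DB8:23::3 as-number 65002
      peer 2001:DB8:23::3 egress-engineering srv6 locator SRV6_LOC   # Allocate an SRv6 peer SID for this EBGP peer (syntax assumed)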
                                                                                                               SRv6 Policy Path Establishment        Traffic Steering to SRv6 Policies
               A stitching node, which is generally an ABR or ASBR, is responsible for processing the binding SID and adding SRH information.
  Figure: binding SID stitching (Insert mode). Assume that the device supports a segment stack depth of 4, which cannot accommodate the E2E segment list from PE1 (FC01::1, AS 65001) to PE2 (FC04::4, AS 65002). PE1 therefore sends the packet with SRH (SL = 3) and segment list <FC04::100, FC03::100, FC02::1C, FC02::2>, where FC02::2 and FC02::1C are ASBR1's End and End.X SIDs, FC03::100 is a binding SID, and FC04::100 is the End.DT4 SID on PE2. When the binding SID becomes the active segment, the stitching ASBR inserts an additional SRH for the stitched SRv6 path toward PE2 (Insert mode), and forwarding continues until PE2 processes the End.DT4 SID.
41 Huawei Confidential
• On a large network, the SRv6 SRH may be of a large size. Considering device limitations and
  forwarding efficiency, the number of SIDs in the SRH must be limited.
• Generally, there are two methods for reducing the SRH size:
                         ▪ Huawei mainly uses the G-SRv6 solution for SRv6 header compression,
                           reducing the SRH size and improving the forwarding efficiency without
                           sacrificing SID information.
                         ▪ Binding SIDs are used to stitch different SRv6 paths together, so that the SRH
                           of each SRv6 path is not too large.
            ▫ End.B6.Insert
            ▫ End.B6.Insert.Red
            ▫ End.B6.Encaps
            ▫ End.B6.Encaps.Red
                                                                                                               SRv6 Policy Path Establishment          Traffic Steering to SRv6 Policies
              Encaps mode: inserts an outer IPv6 header and SRH before the original IPv6 header so that the packet is forwarded based on the outer IPv6 header and SRH.
  Figure: binding SID stitching (Encaps mode). Same topology and assumptions as the previous figure (stack depth of 4, segment list <FC04::100, FC03::100, FC02::1C, FC02::2> sent from PE1). When the binding SID FC03::100 becomes the active segment, the stitching ASBR encapsulates a new outer IPv6 header and SRH for the stitched SRv6 path instead of inserting an SRH into the existing packet (Encaps mode), and forwarding continues until PE2 processes the End.DT4 SID FC04::100.
42 Huawei Confidential
• The End.B6.Encaps instruction used in Encaps mode can be disassembled into End + B6 +
  Encaps, where B6 indicates the application of an SRv6 Policy and Encaps indicates the
  encapsulation of an outer IPv6 header and SRH. This instruction includes the following
  operations: decrements the SL value of the inner SRH by 1, copies the SID to which the SL
  field is pointing to the DA field of the inner IPv6 header, encapsulates an IPv6 header and
  SRH (including segment lists), sets the source address to the address of the current node
  and the destination address to the first SID of the involved SRv6 Policy, sets other fields in
  the outer IPv6 header, looks up the corresponding table, and forwards the new IPv6 packet
  accordingly.
                                                                                                         SRv6 Policy Path Establishment          Traffic Steering to SRv6 Policies
43 Huawei Confidential
• Configure a tunnel policy on PE1. After receiving the BGP routes (Net1 and Net2), PE1
  recurses the routes to different SRv6 Policies based on the color values (0:15 and 0:20) and
  the next hop (PE2). Before forwarding packets to specified subnets (Net1 and Net2), PE1
  adds specific SRv6 SID stacks to the packets.
• The color attribute in route entries can be modified before the local router (for example,
  PE2) sends routes or after the peer router (for example, PE1) receives routes.
• You can also directly configure the color attribute for the VPN instance on the ingress
  (for example, PE1), so that all traffic of the VPN instance is forwarded over the
  specified SRv6 Policy.
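• A rough sketch of the ingress-side pieces is shown below; the policy names and color value are
  illustrative, and the exact select-seq keyword for SRv6 TE Policies is an assumption that
  should be checked against the product documentation (text after # is annotation only).
     route-policy RP_COLOR15 permit node 10
      apply extcommunity color 0:15                # Attach color 0:15 to the matching VPN routes
     #
     tunnel-policy p1
      tunnel select-seq srv6-te-policy load-balance-number 1   # Prefer SRv6 TE Policies for recursion (keyword assumed)
     #
     ip vpn-instance VPNA
      ipv4-family
       tnl-policy p1                               # Apply the tunnel policy to the VPN instance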
                                                                                                           SRv6 Policy Path Establishment         Traffic Steering to SRv6 Policies
44 Huawei Confidential
• In DSCP-based traffic steering, the color attribute in route entries is mainly used to find a
  matching mapping policy.
• The color attribute in route entries can be modified in the outbound direction of the
  originating router (for example, PE2) or in the inbound direction of the receiving router (for
  example, PE1).
• You can also directly configure the color attribute for the VPN instance on the ingress
  (for example, PE1), so that all traffic of the VPN instance is forwarded over the
  specified SRv6 Policy.
     Contents
1. SRv6 Overview
     2. SRv6 Fundamentals
          ▫ Basic Concepts of SRv6
46 Huawei Confidential
     ▫ After receiving a common unicast packet from CE1, PE1 searches the routing table of
        the corresponding VPN instance and finds that the outbound interface of the route is
        an SRv6 Policy interface. PE1 then inserts an SRH carrying the SID list of the SRv6
        Policy and encapsulates an IPv6 header into the packet. After completing these
        operations, PE1 forwards the packet to P1.
▫ The transit node P1 forwards the packet hop by hop based on SRH information.
     ▫ After receiving the packet, the endpoint PE2 searches the My Local SID Table and
       finds an End SID that matches the IPv6 DA FC03::3 in the packet. According to the
       instruction bound to the SID, PE2 decrements the SL value of the packet by 1 and
       updates the IPv6 DA to the VPN SID FC03::300.
     ▫ Based on the VPN SID FC03::300, PE2 searches the My Local SID Table and finds a
        matching End.DT4 SID. According to the instruction bound to the SID, PE2
        decapsulates the packet, removes the SRH and IPv6 header, searches the routing
        table of the VPN instance corresponding to the VPN SID FC03::300 according to the
        DA in the inner packet, and forwards the packet to CE2.
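• The two lookups performed by PE2 (the End SID, then the VPN SID bound to End.DT4) can be sketched as follows in Python; the table contents and helper names are simplified assumptions for illustration only.

  # PE2's "My Local SID Table" (simplified): SID -> (behavior, context).
  my_local_sid_table = {
      "FC03::3":   ("End", None),
      "FC03::300": ("End.DT4", "vpna"),   # bound to the VPN instance's IPv4 routing table
  }

  # Per-VPN IPv4 routing tables (illustrative).
  vpn_routing_tables = {"vpna": {"Net2": "CE2"}}

  def process_at_pe2(outer_dst, srh_segments, sl, inner_dst):
      behavior, ctx = my_local_sid_table[outer_dst]
      if behavior == "End" and sl > 0:
          sl -= 1
          outer_dst = srh_segments[sl]            # DA becomes the VPN SID FC03::300
          behavior, ctx = my_local_sid_table[outer_dst]
      if behavior == "End.DT4":
          # Decapsulate (remove outer IPv6 header + SRH), then look up the VPN routing table.
          next_hop = vpn_routing_tables[ctx][inner_dst]
          return f"forward inner packet for {inner_dst} to {next_hop}"
      return "behavior not covered by this sketch"

  # SRH <FC03::300, FC03::3>, SL = 1, outer DA = FC03::3, inner DA = Net2.
  print(process_at_pe2("FC03::3", ["FC03::300", "FC03::3"], 1, "Net2"))
  # forward inner packet for Net2 to CE2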
 L3VPN over SRv6 BE
  ⚫    When SRv6 BE is used to carry VPN traffic, the P between PEs does not function as an endpoint.
  [Figure: CE1 (VPNA) – PE1 (locator FC01::/96, End SID FC01::1) – P1 (FC02::/96, FC02::2) – PE2 (FC03::/96, FC03::3,
  End.DT4 SID FC03::300) – CE2 (VPNA, Net2). PE2 advertises the MP-BGP route Net2 with next hop PE2 and extended
  communities FC03::300 + RT; the CEs and PEs exchange IGP routes for Net2. PE1 encapsulates the packet from CE1
  (DIP Net2, SIP CE1) with an outer IPv6 header (DIPv6 FC03::300, SIPv6 FC01::1) and forwards it over the SRv6 BE path;
  P1 forwards the packet based on the outer IPv6 header; PE2 removes the outer IPv6 header and forwards the packet
  as a common IP one to CE2.]
48 Huawei Confidential
     ▫ After receiving a common unicast packet from CE1, PE1 searches the routing table of
        the corresponding VPN instance and finds that the route recurses to an SRv6 BE path
        (no SRv6 Policy is involved). PE1 then encapsulates an outer IPv6 header whose
        destination address is the VPN SID FC03::300 advertised by PE2 (no SRH is required)
        and forwards the packet to P1.
     ▫ The transit node P1 does not need to process any SRv6 SID. It forwards the packet
        hop by hop through ordinary IPv6 routing based on the locator route FC03::/96.
     ▫ After receiving the packet, the endpoint PE2 searches the My Local SID Table and
        finds an End.DT4 SID that matches the IPv6 DA FC03::300. According to the
        instruction bound to the SID, PE2 removes the outer IPv6 header, searches the
        routing table of the VPN instance corresponding to the VPN SID FC03::300 according
        to the DA in the inner packet, and forwards the packet to CE2.
 Native IPv6 over SRv6 Policy
  ⚫    Common IPv6 data can also be carried using SRv6.
  [Figure: CE1 – PE1 (FC01::/96, End SID FC01::1) – P1 (FC02::/96, FC02::2) – PE2 (FC03::/96, FC03::3) – CE2 (Net2). PE2
  advertises the BGP route Net2 with next hop PE2 and color 0:15. PE1 steers the IPv6 packet (DIPv6 Net2, SIPv6 CE1)
  into the SRv6 Policy with color 15 by inserting an SRH whose segment list contains Net2, FC03::3, and FC02::2. The DA
  is FC02::2 with SL = 2 when the packet leaves PE1, FC03::3 with SL = 1 after P1, and Net2 with SL = 0 at PE2; PE2
  removes the SRH and forwards the packet as a common IPv6 one to CE2.]
50 Huawei Confidential
           ▫ SRv6 and SRv6 VPN are configured on each PE, and IPv6 or SRv6 is enabled on the
             transit node.
           ▫ CE-to-PE route advertisement: CE2 advertises its route to PE2. Either a static route or
             a routing protocol (RIP, OSPFv3, IS-IS, or BGP) can be deployed between the CE and
             PE.
           ▫ Inter-PE route advertisement: Configure a BGP export policy on PE2 (or a BGP import
             policy on PE1), and set a color value for the route (next hop: PE2). Then, configure
             PE2 to send routing information to PE1 through a BGP peer relationship.
           ▫ After PE1 receives the IPv6 route, if the next hop in the route is reachable and the
             route matches the BGP import policy, PE1 performs a series of actions, including
             route recursion to an SRv6 path and route selection.
           ▫ PE-to-CE route advertisement: CE1 can learn the IPv6 route from PE1 through a static
             route or a routing protocol (RIP, OSPFv3, IS-IS, or BGP). The route advertisement
             process is similar to that from CE2 to PE2.
• Packet forwarding process:
     ▫ After receiving a unicast IPv6 packet from CE1, PE1 searches the IPv6 routing table
        and finds that the outbound interface of the route is an SRv6 Policy interface. PE1
        then inserts an SRH carrying the SID list of the SRv6 Policy and encapsulates an IPv6
        header into the packet. After completing these operations, PE1 forwards the packet
        to P1.
▫ The transit node P1 forwards the packet hop by hop based on SRH information.
     ▫ After receiving the packet, the endpoint PE2 searches the My Local SID Table and
       finds an End SID that matches the IPv6 DA FC03::3 in the packet. According to the
       instruction bound to the SID, PE2 decrements the SL value of the packet by 1 and
        updates the IPv6 DA to Net2, the original destination address carried as the last
        segment in the list.
      ▫ Because the SL value has reached 0, PE2 removes the SRH and forwards the packet
         to CE2 as a common IPv6 packet based on the destination address Net2.
     Contents
1. SRv6 Overview
2. SRv6 Fundamentals
 IPTV services: Even if fault recovery is completed within milliseconds, IPTV services may encounter transient
 pixelation, because I-frame damage caused by packet loss is the key cause of erratic display.
 Voice services (reference standard: YD/T 1071-2000, IP Telephone Gateway technical specification): an interruption
 within 50 ms is imperceptible to users, 50 ms to 500 ms is slightly perceptible, 500 ms to 2s is obviously perceptible,
 and an interruption longer than 2s causes the session to be interrupted.
 [Figure: impact of network faults on voice services (the experience thresholds above) and on IPTV services (GOP
 structure I B B P B B P B B I).]
    53        Huawei Confidential
Overview of Multi-Layer Reliability Solutions
⚫    WAN bearer networks require high reliability to be provided at device, network, and service layers to achieve E2E
     high availability of 99.999% and fast protection switching of all services within 50 ms.
 [Figure: multi-layer reliability architecture — at the service layer, EVPN L3VPN services are protected by VPN FRR; the
 lower layers provide their own detection and protection mechanisms.]
    54      Huawei Confidential
Overview of Reliability Technologies for Multi-Layer Networks
                             Detection Object    Detection Technology       Protection Technology
                             VPN                 BFD for locator            VPN FRR, HSB
                             LSP                 SBFD for SRv6 Policy       Mixed VPN FRR
                             IGP                 BFD for IGP                Midpoint protection, TI-LFA
 55   Huawei Confidential
Usage Scenarios of Reliability Technologies for Multi-Layer Networks
 [Figure: dual-plane bearer network — CE, PE, P, P, PE, and CE on each plane, connected to access networks on both
 sides, with failure points numbered 1 to 11 across the access, bearer, and intermediate networks. The accompanying
 table maps each failure point to the service category, protection type, tunnel type, detection technology, protection
 technology, and switchover specification.]
  56     Huawei Confidential
     Contents
1. SRv6 Overview
2. SRv6 Fundamentals
 SBFD and Ping/Tracert
 SBFD (detection over an SRv6 tunnel):
       • SBFD can be used to detect tunnel connectivity in an E2E manner.
       • However, SBFD cannot detect the specific fault location on the network. As such, it is usually used with HSB
         or VPN FRR.
 Ping/Tracert (detection over an SRv6 tunnel):
       • SRv6 SID ping is mainly used to check network connectivity and host reachability. Ping tests are classified
         into segment-by-segment tests and non-segment-by-segment tests.
       • In addition to checking network connectivity and host reachability, SRv6 SID tracert can be used to analyze
         the specific fault location on the network.
58 Huawei Confidential
           ▫ The IPv6 address of the SBFD reflector must be the same as the endpoint of the
             corresponding SRv6 Policy.
• As SRv6 simply adds a new type of routing extension header to implement forwarding
  based on the IPv6 data plane, ICMPv6 ping and tracert can be directly used on an SRv6
  network for connectivity check based on common IPv6 addresses, without requiring any
  changes to hardware or software. ICMPv6 ping and tracert both support packet forwarding
  to a destination address over the shortest path, thereby checking the reachability to the
   destination. If the destination address is an SRv6 SID, the check can be performed through
    either ICMPv6 ping & tracert or SRv6 OAM extensions. Currently, SRv6 OAM can be
    extended in either of two ways; both are described on the following pages.
   •     Before link detection, an SBFD initiator and an SBFD reflector exchange SBFD Control packets to notify each
         other of SBFD parameters (for example, discriminators). In link detection, the initiator proactively sends an
         SBFD packet, and the reflector loops this packet back. The initiator then determines the local status based on
         the looped packet.
   •     The loopback packet constructed by the reflector carries the Admin Down or Up field.
   •     After receiving a reflected packet carrying the Up state, the initiator sets the local state to Up. After receiving a
         reflected packet carrying the Admin Down state, it sets the local state to Down. It also sets the local state to
         Down if it does not receive any reflected packet before the timer expires.
59 Huawei Confidential
• Because the state machine has only Up and Down states, the initiator can send packets
  carrying only the Up or Down state and receive packets carrying only the Up or Admin
  Down state. The initiator starts by sending an SBFD packet carrying the Down state to the
  reflector. The destination and source port numbers of the packet are 7784 and 4784,
  respectively; the destination IP address is a user-configured address on the 127 network
  segment; the source IP address is the locally configured LSR ID.
• The reflector does not have any SBFD state machine or detection mechanism. For this
  reason, it does not proactively send SBFD Echo packets, but rather, it only reflects SBFD
  packets. The destination and source port numbers in the looped-back packet are 4784 and
  7784, respectively; the source IP address is the locally configured LSR ID; the destination IP
      address is the source IP address of the initiator.
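• The initiator behavior described above can be condensed into a small state machine; this Python sketch is only an illustration of the logic (ports, timers, and packet formats are simplified, not an implementation of the protocol).

  UP, DOWN = "Up", "Down"

  class SbfdInitiator:
      """Simplified SBFD initiator: sends probes and derives its local state from reflected packets."""
      DEST_PORT, SRC_PORT = 7784, 4784     # probe packet ports; the reflector swaps them

      def __init__(self):
          self.state = DOWN                # starts by sending packets carrying the Down state

      def on_reflected_packet(self, reflected_state):
          # Reflected packets carry either Up or Admin Down.
          self.state = UP if reflected_state == "Up" else DOWN

      def on_detect_timer_expired(self):
          # No reflected packet arrived before the timer expired.
          self.state = DOWN

  def reflector_loopback(state_ok=True):
      """Simplified reflector: no state machine, it only loops packets back with Up or Admin Down."""
      return "Up" if state_ok else "Admin Down"

  init = SbfdInitiator()
  init.on_reflected_packet(reflector_loopback(True))
  print(init.state)                # Up
  init.on_detect_timer_expired()
  print(init.state)                # Down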
  Introduction to SRv6 Ping and Tracert
60 Huawei Confidential
• Currently, SRv6 ping and tracert can be implemented using the following two methods:
         ▫ One method is to use the O-bit (OAM bit) in the SRH. Because the O-bit is carried in
           the SRH, each SRv6 endpoint node needs to process and respond to ICMPv6 ping and
           tracert requests. Therefore, segment-by-segment tests can be implemented based
           on the O-bit. You can run the ping ipv6-sid and tracert ipv6-sid commands to initiate
           tests based on one or more SIDs.
         ▫ The other method is to introduce End.OP SIDs, which instruct data packets to be sent
           to the control plane for OAM processing. In the case of an SRv6 Policy test, the
           headend encodes an End.OP SID into the segment list. Because only the SRv6
           endpoint that has generated an End.OP SID can process ICMPv6 ping and tracert
             request packets, E2E tests can be implemented based on End.OP SIDs.
                    ▪ For SID stack-based tests, specify one or more End.OP SIDs in the ping ipv6-sid
                      and tracert ipv6-sid commands.
                    ▪ For SRv6 Policy-based tests, specify the end-op parameter in the ping srv6-te
                      policy and tracert srv6-te policy commands.
 SRv6 Ping Implementation
  ⚫    SRv6 ping can be classified into segment-by-segment ping and non-segment-by-segment ping.
   [Figure: PE1 – P1 – PE2. Left: segment-by-segment ping test, in which each endpoint that processes the request
   returns an ICMPv6 Reply to PE1. Right: non-segment-by-segment ping test, in which only the destination PE2 returns
   an ICMPv6 Reply (DIPv6 FC01::1, SIPv6 FC03::3).]
61 Huawei Confidential
            ▫ Segment-by-segment test: after receiving the ICMPv6 Request packet, P1 sends an
              ICMPv6 Reply packet to PE1 and forwards the ICMPv6 Request packet to PE2. After
              receiving the ICMPv6 Request packet, PE2 also sends an ICMPv6 Reply packet to PE1.
            ▫ Non-segment-by-segment test: only the destination responds. After receiving the
              ICMPv6 Request packet, PE2 sends an ICMPv6 Reply packet to PE1. In this case, you
              can view detailed information about the ping operation on PE1.
  SRv6 Tracert Implementation
  ⚫     SRv6 tracert can be classified into overlay tracert and non-overlay tracert.
   [Figures: PE1 (FC01::1) – P1 (FC02::2) – PE2 (FC03::3). Left (overlay tracert): PE1 sends UDP probes carrying SRH
   <FC03::3, FC02::2>; the Hop Limit is 63 at P1 and 62 at PE2, and P1 and PE2 each return an ICMP Port Unreachable
   message to PE1. Right (non-overlay tracert): PE1 sends UDP probes with small Hop Limit values; the first probe
   expires at P1 (Hop Limit 0), which returns an ICMPv6 Time Exceeded message; a subsequent probe reaches PE2
   (Hop Limit 1 at P1, 0 at PE2), which returns a reply to PE1 (DIPv6 FC01::1, SIPv6 FC03::3).]
62 Huawei Confidential
• For an overlay test:
      ▫ After receiving the UDP packet, P1 changes the value of the Hop Limit field to 63,
        sends an ICMPv6 Port Unreachable message to PE1, and forwards the UDP packet to
        PE2.
      ▫ After receiving the UDP packet, PE2 changes the value of the Hop Limit field to 62
        and sends an ICMPv6 Port Unreachable message to PE1.
• For a non-overlay test:
      ▫ After receiving the UDP packet, P1 changes the value of the Hop Limit field to 0 and
         sends an ICMPv6 Time Exceeded message to PE1.
      ▫ After receiving the ICMPv6 Time Exceeded message from P1, PE1 increments the
        value of the Hop Limit field by 1 (the value now becomes 2) and continues to send
        the UDP packet.
      ▫ After receiving the UDP packet, P1 changes the value of the Hop Limit field to 1 and
        forwards the packet to PE2.
      ▫ After receiving the UDP packet, PE2 changes the value of the Hop Limit field to 0,
        determines that the SID type is End SID, and checks whether the upper-layer protocol
         header is a UDP or an ICMPv6 header.
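• A minimal simulation of the non-overlay exchange (Hop Limit probing answered by ICMPv6 Time Exceeded on transit nodes and, commonly, Port Unreachable at the endpoint for a UDP probe) might look like the following Python sketch; node names and reply strings are illustrative assumptions.

  # Path from PE1 to the target End SID on PE2 (transit node first).
  PATH = ["P1", "PE2"]

  def probe(hop_limit):
      """Send one UDP probe with the given Hop Limit and report which node answers."""
      for node in PATH:
          hop_limit -= 1                    # each node decrements the Hop Limit
          if hop_limit == 0:
              if node == PATH[-1]:
                  # The endpoint matches its End SID and finds an unused UDP port above it.
                  return f"{node}: ICMPv6 Port Unreachable (destination reached)"
              # A transit node drops the probe and reports Time Exceeded.
              return f"{node}: ICMPv6 Time Exceeded"
      return "no reply"

  def tracert():
      hop_limit = 1
      while True:
          reply = probe(hop_limit)
          print(f"hop {hop_limit}: {reply}")
          if "destination reached" in reply:
              break
          hop_limit += 1                    # PE1 increases the Hop Limit and probes again

  tracert()
  # hop 1: P1: ICMPv6 Time Exceeded
  # hop 2: PE2: ICMPv6 Port Unreachable (destination reached)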
Overview of Tunnel Protection Technologies
⚫    SRv6 tunnel protection can be classified into local protection and E2E protection.
      Local protection
      •   Fast switching
      •   Only links and nodes protected
      •   Technologies: TI-LFA FRR, midpoint TI-LFA FRR, microloop avoidance
      E2E protection
      •   Detection-dependent fast switching
      •   E2E paths protected
      •   Technologies: HSB, ECMP
      [Figure: ingress-to-egress tunnel showing where local protection and E2E protection apply.]
    64    Huawei Confidential
                                                                                                                Local Protection Technology               E2E Protection Technology
 TI-LFA FRR
  ⚫    TI-LFA FRR provides link and node protection for SRv6 tunnels. It enables traffic to be rapidly switched to the backup path if a link or
       node failure occurs.
  [Figure: PE1 (FC01::1) – P1 (FC02::2) – P4 (FC05::5) – PE2 (FC06::6) on the primary path, with P2 (FC03::3) and P3
  (FC04::4) on the backup path and End.X SID FC03::C4 on the link between the P space and the Q space. The packet
  from PE1 carries SRH <FC06::6, FC05::5>.]
  ⚫   As shown in the figure, the shortest path from PE1 to PE2 is PE1 -> P1 -> P4 -> PE2, which is the primary path. P1
      needs to compute a TI-LFA backup path to PE2 through the following operations:
      1. Excludes the primary next hop (link P1 -> P4) and computes the post-convergence shortest path:
         P1 -> P2 -> P3 -> P4 -> PE2.
      2. Computes the P space and Q space, which are (P1, P2) and (P3, P4, PE2), respectively.
      3. Computes the TI-LFA backup path. In this case, any path can be represented as a multi-segment path (source
         node <-> P <-> Q <-> destination node). Both the segments from the source node to the P node and from the
         Q node to the destination node are loop-free. The P-to-Q path is expressed using a strict explicit path (End.X
         SID), ensuring that the entire TI-LFA strict explicit path is loop-free. To simplify repair path computation, P2
         (which is farthest from the source node and resides in the P space), P3 (which is farthest from the destination
         node and resides in the Q space), and a link between the P and Q spaces are selected.
      4. After detecting that the link to P4 goes down, P1 uses backup forwarding entries and encapsulates a new SRH
         into the packet, with the segment list being <FC03::C4, FC06::6>. In addition, the node changes the IPv6
         destination address to FC03::C4 and then forwards the packet to the backup outbound interface in the
         P1-to-P2 direction.
65 Huawei Confidential
            ▫ P space: is a set of nodes reachable from the source node of a protected link using
              the shortest path tree (SPT) rooted at the source node without traversing the
              protected link.
            ▫ Extended P space: is a set of nodes reachable from all the neighbors of a protected
              link's source node using SPTs rooted at the neighbors without traversing the
              protected link.
            ▫ Q space: is a set of nodes reachable from the destination node of a protected link
              using the reverse SPT rooted at the destination node without traversing the
              protected link.
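• Under the definitions above, the P space and Q space can be computed by comparing shortest-path distances with and without the protected link. The Python sketch below (using networkx on a made-up topology roughly matching the figure) is only meant to illustrate the idea, not a router implementation.

  import networkx as nx

  def p_and_q_spaces(graph, protected_link):
      """P space: nodes the link's source reaches by shortest paths that avoid the link.
         Q space: nodes that reach the link's destination by shortest paths that avoid it."""
      src, dst = protected_link
      pruned = graph.copy()
      pruned.remove_edge(src, dst)

      full_src = nx.single_source_dijkstra_path_length(graph, src, weight="cost")
      pruned_src = nx.single_source_dijkstra_path_length(pruned, src, weight="cost")
      # A node is in the P space if removing the protected link does not change its distance.
      p_space = {n for n, d in pruned_src.items() if d == full_src[n]}

      full_dst = nx.single_source_dijkstra_path_length(graph, dst, weight="cost")
      pruned_dst = nx.single_source_dijkstra_path_length(pruned, dst, weight="cost")
      q_space = {n for n, d in pruned_dst.items() if d == full_dst[n]}
      return p_space, q_space

  # Illustrative topology (undirected, symmetric costs); the protected link is P1-P4.
  g = nx.Graph()
  g.add_edge("PE1", "P1", cost=10)
  g.add_edge("P1", "P4", cost=10)
  g.add_edge("P4", "PE2", cost=10)
  g.add_edge("P1", "P2", cost=10)
  g.add_edge("P2", "P3", cost=100)
  g.add_edge("P3", "P4", cost=10)

  p, q = p_and_q_spaces(g, ("P1", "P4"))
  print("P space:", p)   # {'P1', 'PE1', 'P2'} (PE1 also qualifies in this toy topology)
  print("Q space:", q)   # {'P3', 'P4', 'PE2'}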
• Advantages of TI-LFA:
            ▫ Provides protection for all topologies, preventing cost planning from affecting
              protection path computation.
            ▫ Simplifies deployments. Backup paths are computed based on an IGP, eliminating the
              need to deploy additional protocols for reliability purposes.
            ▫ Preferentially uses the post-convergence path as the backup path, reducing the
              number of path switchovers and facilitating bandwidth planning.
• TI-LFA computation rules
▫ Priority: SRLG disjoint > Node protection > Link protection > Minimum cost
     ▫ TI-LFA computes a backup path that meets both the SRLG disjoint and node
       protection conditions. If multiple backup paths meet the two conditions, TI-LFA
       selects the path with the minimum cost.
     ▫ If no qualified backup path is available, TI-LFA computes a backup path that meets
        both the SRLG disjoint and link protection conditions. If multiple backup paths meet
        the two conditions, TI-LFA selects the path with the minimum cost.
     ▫ If no qualified backup path is available, TI-LFA computes a backup path that meets
       the node protection condition with the minimum cost.
     ▫ If no qualified backup path is available, TI-LFA computes a backup path that meets
       the link protection condition with the minimum cost.
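• The selection order described above can be expressed as a simple filter-then-sort over candidate backup paths; the following Python sketch, with invented candidates and attribute names, is only an illustration of the priority logic.

  # Candidate backup paths with their attributes (illustrative data).
  candidates = [
      {"path": ["P1", "P2", "P4"], "srlg_disjoint": True,  "node_protect": False, "link_protect": True, "cost": 120},
      {"path": ["P1", "P3", "P4"], "srlg_disjoint": True,  "node_protect": True,  "link_protect": True, "cost": 200},
      {"path": ["P1", "P5", "P4"], "srlg_disjoint": False, "node_protect": True,  "link_protect": True, "cost": 90},
  ]

  def select_backup(paths):
      """Apply the preference order: SRLG disjoint > node protection > link protection > minimum cost."""
      rules = [
          lambda p: p["srlg_disjoint"] and p["node_protect"],
          lambda p: p["srlg_disjoint"] and p["link_protect"],
          lambda p: p["node_protect"],
          lambda p: p["link_protect"],
      ]
      for rule in rules:
          matching = [p for p in paths if rule(p)]
          if matching:
              return min(matching, key=lambda p: p["cost"])   # tie-break on minimum cost
      return None

  print(select_backup(candidates)["path"])   # ['P1', 'P3', 'P4'] (SRLG disjoint + node protection)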
                                                                                                                          Local Protection Technology         E2E Protection Technology
  [Figure: PE1 (FC01::1) – P1 (FC02::2) – P2 (FC03::3) on the upper row and P3 (FC04::4) – P4 (FC05::5) – PE2 (FC06::6)
  on the lower row, with link costs of 10 and 100 and End.X SID FC04::C4 on P3. The packet from PE1 carries DA
  FC03::3 and SRH <FC06::6, FC03::3> (SL = 1). Callouts: "Using TI-LFA, P1 computes the backup path to P3:
  P1 -> P2 -> P4 -> P3." and "P2 fails, causing a data forwarding failure." Legends: primary path and backup path.]
    67   Huawei Confidential
                                                                                                               Local Protection Technology          E2E Protection Technology
⚫    Specifically, after detecting that the next-hop interface of the packet fails, the next-hop address is the destination address of the packet, and the SL
     value is greater than 0, the proxy forwarding node performs the End behavior on behalf of the midpoint. The behavior involves decrementing the SL
     value by 1, copying the next SID to the DA field in the outer IPv6 header, and then forwarding the packet according to the instruction bound to the
     SID. In this way, the failed midpoint is bypassed, achieving SRv6 midpoint protection.
  [Figure: same topology as the previous page — PE1 (FC01::1), P1 (FC02::2), P2 (FC03::3), P3 (FC04::4), P4 (FC05::5),
  PE2 (FC06::6), with link costs of 10 and 100. The packet arriving at P1 carries DA FC03::3 and SRH <FC06::6, FC03::3>
  (SL = 1); when P2 fails, P1 acts as the proxy forwarding node and sends the traffic along the backup path.]
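• The condition checked by the proxy forwarding node and the End behavior it performs on the failed midpoint's behalf can be sketched as follows (simplified Python with illustrative values, not device code).

  def proxy_forward(pkt, next_hop_sid, next_hop_interface_up):
      """pkt is a dict with 'dst', 'segments' (SRH segment list) and 'sl'.
      If the next-hop interface is down, the packet's DA equals the next hop's SID,
      and SL > 0, perform the End behavior on behalf of the failed midpoint."""
      if next_hop_interface_up:
          return "forward normally to " + next_hop_sid
      if pkt["dst"] == next_hop_sid and pkt["sl"] > 0:
          pkt["sl"] -= 1
          pkt["dst"] = pkt["segments"][pkt["sl"]]     # copy the next SID into the outer DA
          return "midpoint bypassed; forward based on new DA " + pkt["dst"]
      return "cannot bypass; drop or wait for convergence"

  # Packet at P1: DA = P2's End SID, SRH <FC06::6, FC03::3>, SL = 1; P2 has failed.
  packet = {"dst": "FC03::3", "segments": ["FC06::6", "FC03::3"], "sl": 1}
  print(proxy_forward(packet, next_hop_sid="FC03::3", next_hop_interface_up=False))
  # midpoint bypassed; forward based on new DA FC06::6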
    68   Huawei Confidential
                                                                                                                    Local Protection Technology              E2E Protection Technology
  69      Huawei Confidential
                                                                                              Local Protection Technology           E2E Protection Technology
Microloop Introduction
⚫    TI-LFA FRR and midpoint protection can maintain data forwarding for a short time before IGP convergence is complete. After IGP
     convergence, however, data is forwarded through IGP routes instead of FRR (tunnel mode).
⚫    However, the convergence speed of devices on the live network may be different. As a result, a temporary loop, which is called a
     microloop, may be generated. The loop disappears only after all routers on the forwarding path complete convergence.
  [Figure: PE1 – P1 – P4 – PE2, with P2 and P3 below (P2-P3 link cost 1000). Callouts: "If the primary path fails, traffic
  is forwarded along the FRR path."; "After IGP convergence, data is forwarded along the primary path."; "P1 completes
  IGP convergence and does not forward traffic based on FRR."; "P2 does not complete IGP convergence and forwards
  data based on the original routing table."; "P1 considers that the packet destined for PE2 should be sent to P2, but
  P2 considers that the packet destined for PE2 should be sent to P1." The packet stacks show the original packet
  (DIP A, SIP B), the FRR-encapsulated packet (DIPv6 C, with an SRH), and the looping packet (DIP PE2, SIP PE1).]
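• The inconsistency that creates a microloop can be shown with a toy forwarding simulation in Python: P1 has already converged while P2 still uses its pre-convergence entry (the tables below are invented for illustration).

  # Next-hop tables toward destination PE2 during the transient period (illustrative).
  fib = {
      "P1": "P2",    # P1 has converged: it now believes PE2 is reachable via P2
      "P2": "P1",    # P2 has not converged yet: its old entry still points back to P1
  }

  def forward(start, dest="PE2", max_hops=6):
      node, hops = start, [start]
      for _ in range(max_hops):
          nxt = fib.get(node)
          if nxt is None or nxt == dest:
              hops.append(dest)
              return hops
          if nxt in hops:
              return hops + [nxt, "... microloop until both nodes converge"]
          hops.append(nxt)
          node = nxt
      return hops

  print(forward("P1"))   # ['P1', 'P2', 'P1', '... microloop until both nodes converge']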
    70   Huawei Confidential
                                                                                                    Local Protection Technology          E2E Protection Technology
   71     Huawei Confidential
                                                                                                                  Local Protection Technology              E2E Protection Technology
  ⚫   Local microloop avoidance: a node switches traffic to the post-convergence forwarding path only after the
      involved node has already completed convergence for a period of time. This prevents the loop caused by IGP
      convergence-speed differences among nodes.
  ⚫   Because each TI-LFA backup path is loop-free, the packet can be forwarded along a TI-LFA backup path for a
      period of time, and then the TI-LFA process can exit after the other nodes complete convergence.
  [Figure: PE1 (FC01::1) to PE2 (FC06::6); the packet (DIPv6 FC06::6, SIPv6 FC01::1) keeps using the TI-LFA backup path
  during the delay.]
 72        Huawei Confidential
                                                                                                 Local Protection Technology          E2E Protection Technology
 73     Huawei Confidential
                                                                                                           Local Protection Technology            E2E Protection Technology
⚫    A network node can pre-compute a loop-free backup path only when a directly connected link or node fails. That is, no loop-free path can be pre-computed against any other
     potential fault on the network. As such, a loop-free path needs to be computed after node convergence is completed, thereby resolving the microloop issue.
⚫    As shown in the following figures, the loop-free TI-LFA path P2 -> P1 -> P5 -> PE2 is computed after P2 converges. The loop-free path computed by P3 can be either a strict
     explicit one or a loose one.
  [Figures: PE1 (FC01::1), P1 (FC02::2), P5 (FC06::6), and PE2 (FC07::7), with End.X SIDs FC02::C4 and FC03::C4. The two
  figures show the loop-free path computed after convergence, expressed as a strict explicit path (an SRH carrying the
  End.X SIDs) in one case and as a loose path in the other.]
    74    Huawei Confidential
                                                                                             Local Protection Technology    E2E Protection Technology
  [Figure: P2 – P3 link with cost 1000.] Callouts:
  •   P2 completes convergence and considers that the traffic destined for PE2 should be sent to P1.
  •   If P1 has not completed IGP convergence due to some reasons, for example, a large number of services are
      running or CPU usage is high, the node still forwards the traffic back to P2 through the original path, forming a
      loop.
 75     Huawei Confidential
                                                                                                Local Protection Technology           E2E Protection Technology
⚫    As shown in the following figures, after P2 completes convergence, it encapsulates a new SRH into the packet, enabling traffic to be forwarded along
     the post-convergence path (which is a strict explicit path) to the destination address. After P1 converges, it forwards the traffic to PE2 along the post-
     convergence path.
  [Figures: P2 (FC03::3) and P3 (FC04::4). The new SRH encapsulated by P2 carries the End.X SIDs FC02::C4 and
  FC03::C4, steering traffic along the strict explicit post-convergence path; the second figure shows the subsequent
  forwarding after P1 converges.]
    76   Huawei Confidential
                                                                                              Local Protection Technology   E2E Protection Technology
⚫ When SBFD on the primary path detects a fault, traffic is switched from the primary path to the backup path.
⚫    If some devices on an SRv6 network do not support SRv6, local protection cannot be implemented. In this case, HSB can be used to provide high
     reliability.
SRv6 policy PE1toPE3
endpoint PE3 color 100
candidate-path preference 200 //Primary
 segment-list <PE1.End, PE3.End>
candidate-path preference 100 //Backup
 segment-list <PE2.End, P2.End, PE4.End, PE3.End >
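• Candidate-path selection in HSB follows a simple rule: the valid candidate path with the highest preference carries traffic, and when SBFD declares it down, the next-preferred valid path takes over. A minimal Python illustration (path contents taken from the schematic configuration above; the data structure is an assumption):

  candidate_paths = [
      {"preference": 200, "segment_list": ["PE1.End", "PE3.End"], "sbfd_up": True},                        # primary
      {"preference": 100, "segment_list": ["PE2.End", "P2.End", "PE4.End", "PE3.End"], "sbfd_up": True},   # backup
  ]

  def active_path(paths):
      """Return the highest-preference candidate path whose SBFD session is still up."""
      alive = [p for p in paths if p["sbfd_up"]]
      return max(alive, key=lambda p: p["preference"]) if alive else None

  print(active_path(candidate_paths)["segment_list"])       # primary path
  candidate_paths[0]["sbfd_up"] = False                     # SBFD detects a primary-path fault
  print(active_path(candidate_paths)["segment_list"])       # traffic switches to the backup path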
  [Figure: CE1 is connected to PE1 and PE2, and CE2 is connected to PE3 and PE4. The primary candidate path runs
  PE1 – P1 – P3 – PE3 (a device on this path is SRv6-incapable, so local protection is unavailable); the backup candidate
  path runs PE1 – PE2 – P2 – P4 – PE4 – PE3.]
    77     Huawei Confidential
                                                                                Local Protection Technology   E2E Protection Technology
  [Figure: ECMP protection — traffic between CE1 and CE2 is carried over equal-cost paths, including the path via
  PE2 – P2 – P4 – PE4.]
    78       Huawei Confidential
                                                                                            Local Protection Technology   E2E Protection Technology
         BGP 100
          ipv6-family vpn-instance vpn1
            segment-routing ipv6 traffic-engineer best-effort
  [Figure: PE1 – P1 – P3 – PE3 (primary path) and PE2 – P2 – P4 – PE4 (backup path) between CE1 and CE2. When both
  the primary path and the backup path fail, traffic is carried over SRv6 BE along a best-effort path.]
    79       Huawei Confidential
     Contents
1. SRv6 Overview
2. SRv6 Fundamentals
   ⚫   As such, the local protection and E2E protection technologies commonly used for SRv6 tunnels can protect only the source PE and the
        transit nodes (Ps), but not the SRv6 endpoint (egress PE).
 ⚫ If a fault occurs on the endpoint, it is mainly rectified through VPN FRR. It can also be rectified through anycast FRR or mirror protection.
  [Figure: CE1-to-CE2 traffic over the primary path and a local protection path between the PEs.]
81 Huawei Confidential
• Anycast FRR and mirror protection technologies are complex and therefore rarely used on
  live networks.
VPN FRR
⚫    VPN FRR helps rectify endpoint faults by directly forming VPN backup routes. It is implemented as follows:
            The source PE pre-computes primary and backup routes based on the two learned VPN routes with different next-hop PEs and then delivers the
             computed routes to the FIB table. In addition, after detecting a remote PE fault through BFD, the source PE switches VPN traffic to the backup
             path before VPN route convergence.
  [Figure: CE1 – PE1 – (P1 – PE2, P2 – PE3) – CE2 (Net2). PE1 receives the BGP route Net2 with next hop PE2 and
  another with next hop PE3; PE1's BGP routing entry for Net2 therefore has PE2 as the primary next hop and PE3 as
  the backup next hop.]
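• VPN FRR amounts to installing both the primary and backup next hops in the FIB in advance and switching when BFD reports a remote-PE failure; the following Python sketch, with invented entries, illustrates that behavior.

  # FIB entry pre-computed by PE1 for prefix Net2: primary and backup remote PEs.
  fib_entry = {"prefix": "Net2", "primary": "PE2", "backup": "PE3"}

  # Remote-PE liveness as reported by BFD (True = reachable).
  bfd_state = {"PE2": True, "PE3": True}

  def next_hop(entry):
      """Use the primary next hop unless BFD reports the remote PE as down."""
      if bfd_state.get(entry["primary"], False):
          return entry["primary"]
      return entry["backup"]       # switch before BGP/VPN route convergence completes

  print(next_hop(fib_entry))       # PE2
  bfd_state["PE2"] = False         # BFD detects that PE2 has failed
  print(next_hop(fib_entry))       # PE3 (traffic switched to the backup path)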
    82       Huawei Confidential
  Anycast FRR
  ⚫    Anycast FRR implements SRv6 egress protection by deploying the same locator and VPN SID on the PEs to which a CE
       is dual-homed.
  [Figure: CE1 – PE1 (FC01::/96) – P1/P2 – PE3 and PE4, which both use locator FC05::/96, with CE2 dual-homed to PE3
  and PE4. P1, where TI-LFA is deployed, pre-computes a backup path to FC05::/96. The packet from PE1 carries DIPv6
  FC05::100 (the shared VPN SID, SIPv6 FC01::1); on the local protection path, the repaired packet carries an SRH
  <FC05::100, FC04::C4> (SL = 1) with DA FC04::C4, the End.X SID on P2. Legends: primary path and local protection
  path.]
83 Huawei Confidential
• Anycast FRR can be used in both egress protection and local protection scenarios.
• Although anycast FRR can provide protection against PE failures, it has the following
  drawbacks:
           ▫ VPN SIDs must be manually configured to ensure that the two PEs configured with
             the same VPN instance have the same VPN SID.
           ▫ Only IGP route selection (not VPN route selection) can be performed. For example, if
             VPN services need to be load-balanced between PE3 and PE4 or the route advertised
             by PE3 needs to be preferentially selected, VPN route selection cannot be performed
             if the route advertised by PE4 is preferentially selected through an IGP on the path to
             FC05::.
           ▫ If there is a PE-CE link interface failure, such as a failure on the link between PE3 and
             CE2, traffic is still forwarded to PE3 and then to PE4, resulting in a traffic loop that
             cannot be eliminated.
SRv6 Access Protection
⚫    When a CE is dual-homed to PEs, if the link between the CE and endpoint PE fails, traffic may be lost. In this case, mixed FRR can be
     used to resolve this problem.
  [Figure: CE1 – PE1 – (P1 – PE2, P2 – PE3) – CE2 (Net2), with CE2 dual-homed to PE2 and PE3. CE2 advertises
  common routes to PE2 and PE3, and PE3 advertises a VPN route to PE2. PE2's BGP routing entry for Net2 uses the
  PE2 -> CE2 link interface as the primary outbound interface and an SRv6 tunnel to PE3 as the backup. When the path
  from PE2 to CE2 fails, the forwarding path becomes PE2 -> PE3 -> CE2. Legends: primary path and backup path.]
⚫    PE2 receives a VPN route from CE2 and another VPN route from PE3, forming FRR protection. When the link between PE2 and CE2
     fails, PE2 detects the fault and steers all relevant traffic to the backup path to PE3. In this case, the next hop of the primary path is
     an access interface and the backup path is an SRv6 tunnel, forming mixed FRR protection.
    84   Huawei Confidential
     Contents
1. SRv6 Overview
2. SRv6 Fundamentals
▫ SRv6 Policy
85   Huawei Confidential
  L3VPNv4 over SRv6 BE (1)
  [Figure: AS 100 backbone — PE1 (Loopback0 2001:DB8:1::1/128) – P (Loopback0 2001:DB8:2::2/128) – PE2
  (Loopback0 2001:DB8:3::3/128), interconnected over 2001:DB88:12::/96 (PE1 :1, P :2) and 2001:DB88:23::/96 (P :2,
  PE2 :3). CE1 (AS 65000, 10.0.14.0/24, Loopback1 10.1.4.4/32) attaches to PE1; CE2 (AS 65001, 10.0.35.0/24,
  Loopback1 10.1.5.5/32) attaches to PE2.]
  Networking requirements:
  1. Connect PE1 and PE2 to different CEs that belong to VPN instance vpna.
  Configuration roadmap:
  1. Configure interface IPv6 addresses and IS-IS. (Configuration details are not provided.)
  2. Establish an MP-BGP peer relationship between PE1 and PE2.
  3. Enable SR and establish an SRv6 BE path on the backbone network.
  4. Enable the VPN instance IPv4 address family on each PE.
  5. Establish an MP-IBGP peer relationship between the PEs for them to exchange routing information.
  6. Verify the configuration.
  L3VPNv4 over SRv6 BE (3)
     (Topology and configuration roadmap are the same as in L3VPNv4 over SRv6 BE (1).)
     Establish an SRv6 BE path between the PEs. PE1 configurations are as follows: (PE2 configurations are not
     provided here, and the P does not require such configurations.)
      [~PE1] segment-routing ipv6
      [*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
      [*PE1-segment-routing-ipv6] locator as100 ipv6-prefix 2001:DB8:100:: 64 static 32
      [*PE1-segment-routing-ipv6-locator] quit
      [*PE1-segment-routing-ipv6] quit
      [*PE1] bgp 100
      [*PE1-bgp] ipv4-family vpnv4
      [*PE1-bgp-af-vpnv4] peer 2001:DB8:3::3 prefix-sid
      [*PE1-bgp-af-vpnv4] quit
      [~PE1-bgp] quit
      [~PE1] isis 1
      [~PE1-isis-1] segment-routing ipv6 locator as100
      [*PE1-isis-1] commit
      [~PE1-isis-1] quit
     (Topology and configuration roadmap are the same as in L3VPNv4 over SRv6 BE (1).)
     Enable the VPN instance IPv4 address family on each PE. PE1 configurations are as follows: (PE2 configurations
     are not provided.)
      [~PE1] ip vpn-instance vpna
      [*PE1-vpn-instance-vpna] ipv4-family
      [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
      [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
      [*PE1-vpn-instance-vpna-af-ipv4] quit
      [*PE1-vpn-instance-vpna] quit
      [~PE1] bgp 100
      [*PE1-bgp] ipv4-family vpn-instance vpna
      [*PE1-bgp-vpna] peer 10.0.14.4 as-number 65000
      [*PE1-bgp-vpna] segment-routing ipv6 best-effort
      [*PE1-bgp-vpna] segment-routing ipv6 locator as100
      [*PE1-bgp-vpna] commit
      [~PE1-bgp-vpna] quit
      [~PE1-bgp] quit
• Configure VPN routes to recurse to SRv6 BE paths based on the carried SIDs.
     1. Run the bgp { as-number-plain | as-number-dot } command to enter the BGP view.
     2. Run the ipv4-family vpn-instance vpn-instance-name command to enter the BGP-VPN instance IPv4 address
        family view.
     3. Run the segment-routing ipv6 best-effort command to enable VPN route recursion based on the SIDs carried
        by routes.
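• The following minimal sketch strings these steps together using the same values as in the PE1 configuration above
  (AS 100, VPN instance vpna, locator as100); it only condenses commands already shown in this course:
   [~PE1] bgp 100
   [*PE1-bgp] ipv4-family vpn-instance vpna
   [*PE1-bgp-vpna] segment-routing ipv6 best-effort
   [*PE1-bgp-vpna] segment-routing ipv6 locator as100
   [*PE1-bgp-vpna] commit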
     (Topology is the same as in L3VPNv4 over SRv6 BE (1).)
     Check the local SID table containing all types of SRv6 SIDs on PE2.
      <PE2>display segment-routing ipv6 local-sid forwarding
                       My Local-SID Forwarding Table
                       -------------------------------------
      SID        : 2001:DB8:300::1:0:0/128              FuncType : End
      LocatorName: as100                                LocatorID: 2
L3VPNv4 over SRv6 BE (6)
     (Topology and configuration roadmap are the same as in L3VPNv4 over SRv6 BE (1).)
     Check VPNv4 routing information on PE1.
      <PE1>display bgp vpnv4 all routing-table 10.1.5.5
      BGP local router ID : 10.0.1.1
      Local AS number : 100
      Total routes of Route Distinguisher(100:1): 1
      BGP routing table entry information of 10.1.5.5/32:
      Label information (Received/Applied): 3/NULL
      From: 2001:DB8:3::3 (10.0.3.3)
      Route Duration: 0d00h15m54s
      Relay IP Nexthop: FE80::DE99:14FF:FE7A:C301
      Relay IP Out-Interface: GigabitEthernet0/3/0.12
      Relay Tunnel Out-Interface:
      Original nexthop: 2001:DB8:3::3
      Qos information : 0x0
      Ext-Community: RT <111 : 1>
      Prefix-sid: 2001:DB8:300::1:0:20
      AS-path Nil, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
      Not advertised to any peer yet
     (Callouts: "From" is the IPv6 address of the peer; "Prefix-sid" is the SID corresponding to 10.1.5.5, the same as
     that locally allocated on PE2.)
L3VPNv4 over SRv6 BE (7)
     (Topology is the same as in L3VPNv4 over SRv6 BE (1).)
     Check vpna's routing information on PE1.
      <PE1> display ip routing-table vpn-instance vpna 10.1.5.5 verbose
      Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
      ------------------------------------------------------------------------------
      Routing Table : vpna
      Summary Count : 1
L3VPNv4 over SRv6 BE (8)
     (Topology and configuration roadmap are the same as in L3VPNv4 over SRv6 BE (1).)
     Verify the configuration on CE1.
      <CE1>ping -a 10.1.4.4 10.1.5.5
       PING 10.1.5.5: 56 data bytes, press CTRL_C to break
        Reply from 10.1.5.5: bytes=56 Sequence=1 ttl=254 time=1 ms
        Reply from 10.1.5.5: bytes=56 Sequence=2 ttl=254 time=1 ms
        Reply from 10.1.5.5: bytes=56 Sequence=3 ttl=254 time=1 ms
        Reply from 10.1.5.5: bytes=56 Sequence=4 ttl=254 time=1 ms
        Reply from 10.1.5.5: bytes=56 Sequence=5 ttl=254 time=1 ms
     Contents
1. SRv6 Overview
2. SRv6 Fundamentals
L3VPNv4 over SRv6 Policy (1)
     (Topology is the same as in L3VPNv4 over SRv6 BE (1).)
     Networking requirements:
     1. Connect PE1 and PE2 to different CEs that belong to VPN instance vpna.
     Configuration roadmap:
     1. Configure interface IPv6 addresses and IS-IS. (Configuration details are not provided.)
     2. Establish an MP-BGP peer relationship between PE1 and PE2.
     3. Enable SR and establish an SRv6 Policy on the backbone network.
     4. Enable the VPN instance IPv4 address family on each PE and establish an MP-IBGP peer relationship between
        the PEs for them to exchange routing information.
     5. Configure a tunnel policy and import VPN traffic.
     6. Verify the configuration.
  L3VPNv4 over SRv6 Policy (2)
• The SIDs of PE1, the P, and PE2 are 2001:DB8:1000::111, 2001:DB8:2000::222, and
  2001:DB8:3000::333, respectively.
• In this experiment, the SRv6 Policy is established based on specified End SIDs.
  L3VPNv4 over SRv6 Policy (3)
• SRv6 paths are established using SIDs. Static SRv6 SIDs are recommended. The
  configuration procedure is as follows:
2. Run the opcode func-opcode end command to configure a static End SID opcode.
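• For reference, a minimal sketch of this procedure on PE1, assuming a locator named as1000 with the prefix
  2001:DB8:1000::/64 derived from PE1's End SID 2001:DB8:1000::111 listed above; the prefix, the static-segment
  length, and the ::111 opcode notation are assumptions rather than values given on this slide:
   [~PE1] segment-routing ipv6
   [*PE1-segment-routing-ipv6] locator as1000 ipv6-prefix 2001:DB8:1000:: 64 static 32
   [*PE1-segment-routing-ipv6-locator] opcode ::111 end
   [*PE1-segment-routing-ipv6-locator] commit
  With this locator, the resulting static End SID is 2001:DB8:1000::111, matching the SID listed for PE1 in the notes above.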
2. Run the segment-routing ipv6 command to enable SRv6 and enter the SRv6 view.
          3. Run the segment-list list-name command to configure a segment list (an explicit
             path) for an SRv6 Policy candidate path and enter the segment list view.
          4. Run the index index sid ipv6 ipv6address command to specify a next-hop SID for the
             segment list.
                         ▪ You can run the command multiple times. The system generates a SID stack for
                           the segment list by index index in ascending order. If a candidate path in the
                           SRv6 Policy is preferentially selected, traffic is forwarded using the segment
                           lists of the candidate path. A maximum of 10 SIDs can be configured for each
                           segment list.
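• A minimal sketch of such a segment list on PE1, using the list name (list1) and the End SIDs of the P
  (2001:DB8:2000::222) and PE2 (2001:DB8:3000::333) given earlier in this course; the index values 10 and 20 and the
  view prompts are illustrative assumptions:
   [~PE1] segment-routing ipv6
   [*PE1-segment-routing-ipv6] segment-list list1
   [*PE1-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:2000::222
   [*PE1-segment-routing-ipv6-segment-list-list1] index 20 sid ipv6 2001:DB8:3000::333
   [*PE1-segment-routing-ipv6-segment-list-list1] commit
  Because the SID stack is generated by index in ascending order, traffic steered into this list traverses the P and
  then PE2.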
2. Run the segment-routing ipv6 command to enable SRv6 and enter the SRv6 view.
      4. Run the srv6-te policy name-value endpoint endpoint-ip color color-value command
         to create an SRv6 Policy and enter the SRv6 Policy view.
      5. (Optional) Run the binding-sid binding-sid command to configure a binding SID for
         the SRv6 Policy.
            ▪ The value of binding-sid must be within the range of the static segment
              specified using the locator locator-name [ ipv6-prefix ipv6-address prefix-
              length [ static static-length | args args-length ] * ] command.
            ▪ Each SRv6 Policy supports multiple candidate paths. A larger preference value
              indicates a higher candidate path preference. If multiple candidate paths are
              configured, the one with the highest preference takes effect.
            ▪ The segment list must have been created using the segment-list (SRv6 view)
              command.
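• A minimal sketch that assembles these commands into an SRv6 Policy on PE1, reusing the values that appear in the
  display output later in this course (policy1, endpoint 2001:DB8:3::3, color 101, binding SID 2001:DB8:1000::100,
  segment list list1). The candidate-path step is not spelled out on this slide, so the candidate-path preference 100
  line and the view prompts are assumptions:
   [~PE1] segment-routing ipv6
   [*PE1-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
   [*PE1-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:1000::100
   [*PE1-segment-routing-ipv6-policy-policy1] candidate-path preference 100
   [*PE1-segment-routing-ipv6-policy-policy1-path] segment-list list1
   [*PE1-segment-routing-ipv6-policy-policy1-path] commit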
     (Topology is the same as in L3VPNv4 over SRv6 Policy (1); in addition, the figure labels the End SIDs of PE1, the P,
     and PE2: 2001:DB8:1000::111, 2001:DB8:2000::222, and 2001:DB8:3000::333. The configuration roadmap is also the same.)
     Enable the VPN instance IPv4 address family on each PE. PE1 configurations are as follows: (PE2 configurations
     are not provided.)
      [~PE1] ip vpn-instance vpna
      [*PE1-vpn-instance-vpna] ipv4-family
      [*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
      [*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
      [*PE1-vpn-instance-vpna-af-ipv4] quit
      [*PE1-vpn-instance-vpna] quit
      [*PE1-bgp] ipv4-family vpn-instance vpna
      [*PE1-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort
      [*PE1-bgp-vpna] segment-routing ipv6 locator as1000
      [*PE1-bgp-vpna] commit
      [~PE1-bgp-vpna] quit
      [~PE1-bgp] quit
     (Topology and configuration roadmap are the same as on the preceding slide.)
     Check SRv6 Policy information on PE1.
      <PE1>display srv6-te policy
      PolicyName : policy1
      Color                : 101                  Endpoint             : 2001:DB8:3::3
      TunnelId             : 1                    Binding SID          : 2001:DB8:1000::100
      TunnelType           : SRv6-TE Policy       DelayTimerRemain     :
      Policy State         : Up
      Admin State          : UP                   Traffic Statistics   : Disable
      Candidate-path Count : 1
       Candidate-path Preference : 100
       Path State          : Active               Path Type            : Primary
       Protocol-Origin     : Configuration(30)    Originator           : 0, 0.0.0.0
       Discriminator       : 100                  Binding SID          : 2001:DB8:1000::100
       GroupId             : 1                    Policy Name          : policy1
       DelayTimerRemain    : -                    Segment-List Count   : 1
       Segment-List        : list1
        Segment-List ID    : 1                    XcIndex              : 1
        List State         : Up                   DelayTimerRemain     : -
        Weight             : 1                    BFD State            : -
        SID :
          2001:DB8:2000::222
          2001:DB8:3000::333
     (Callout: "Color" is the color of the specified SRv6 Policy.)
     (Topology and configuration roadmap are the same as on the preceding slides.)
     Check VPNv4 routing information on PE1.
      <PE1> display bgp vpnv4 all routing-table 10.1.5.5
      BGP local router ID : 10.0.1.1
      Local AS number : 100
      Total routes of Route Distinguisher(100:1): 1
      BGP routing table entry information of 10.1.5.5/32:
      Label information (Received/Applied): 3/NULL
      From: 2001:DB8:3::3 (10.0.13.3)
      Route Duration: 0d00h03m30s
      Relay IP Nexthop: FE80::DE99:14FF:FE7A:C301
      Relay IP Out-Interface: GigabitEthernet0/3/0.12
      Relay Tunnel Out-Interface:
      Original nexthop: 2001:DB8:3::3
      Qos information : 0x0
      Ext-Community: RT <111 : 1>, Color <0 : 101>
      Prefix-sid: 2001:DB8:3000::1:0:1E
      AS-path 65000, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
      Not advertised to any peer yet
     (Callout: the route recurses to the corresponding SRv6 Policy based on the color attribute.)
1. SRv6 Overview
2. SRv6 Fundamentals
  ⚫    Deploying SRv6 through iMaster NCE-IP can avoid the problems faced by static SRv6 deployment.
⚫    iMaster NCE-IP uses the following protocols to deploy SRv6 tunnels and VPN services:
     (Figure: routers R1 to R4 managed by iMaster NCE-IP; the figure labels BGP-LS and NETCONF, and the protocols are
     described in the notes below.)
• IGP: generates network topology information, such as bandwidth, delay, and SID
  information, on a router.
• BGP-LS: collects topology information and reports collected information to the controller. If
  an RR exists on the network, you only need to deploy BGP-LS on the RR and establish a
  BGP-LS peer relationship between the RR and controller.
• BGP IPv6 SR Policy: Such a peer relationship is established between the controller and
  forwarder, so that the controller can deliver an SRv6 Policy to the forwarder through the
  peer relationship to direct traffic forwarding. To reduce the number of peer relationships,
  you can deploy an RR and configure PEs and the controller to function as RR clients.
• NETCONF: delivers service configurations from the controller to forwarders. This document
  does not describe service delivery or NETCONF-related configuration.
  SRv6 Policy Advertisement Process
⚫    To facilitate configuration, the controller provides the following functions:
     ▫    Directly creates a bidirectional tunnel between the ingress and egress. In other words, a tunnel from the
          egress to the ingress is automatically created when a tunnel from the ingress to the egress is created.
     ▫    Allows you to configure tunnel and color templates to simplify the configuration of some parameters.
⚫    In Huawei's CloudWAN solution:
     ▫    The SRv6 Policy configurations are delivered by BGP IPv6 SR Policy, and the tunnel status is reported by BGP-LS.
     (Figure: PE1-PE4, P1, and an RR. 1. Network topology reporting through BGP-LS; 2. Requirement input by the network
     administrator; 3. Automatic path planning by the controller; 4. Forwarding path deployment; 5. Tunnel status
     reporting. Legend: SRv6 Policy, BGP-LS peer, BGP SR-Policy peer.)
• The process of planning and deploying forwarding paths through iMaster NCE-IP is as
  follows:
            ▫ Devices use BGP-LS to report network topology information to the controller, which
              then generates forwarding paths based on requirements.
            ▫ The controller delivers the computed paths to the devices through BGP IPv6 SR
              Policy.
⚫    Each router maintains one or more LSDBs. Each LSDB contains multiple link attributes, such as the interface IP
     address, link metric, TE metric, link bandwidth, and reservable bandwidth. The BGP process of a router obtains
     information from these LSDBs and carries the information in the extended NLRI attribute.
⚫    iMaster NCE-IP also establishes a BGP-LS peer relationship with the ingress of the involved tunnel in order to
     obtain SRv6 Policy status.
⚫    Solution 1: Establish BGP-LS peer relationships between iMaster NCE-IP and all PEs and between iMaster NCE-IP
     and all Ps.
⚫    Solution 2: Establish BGP-LS peer relationships between iMaster NCE-IP and RRs and between the RRs and other
     devices.
⚫    Solution 2 is recommended to reduce the number of BGP peers maintained by iMaster NCE-IP.
     (Figure: two BGP-LS deployment options. In Solution 1, iMaster NCE-IP peers directly with PE1, P1, and PE2; in
     Solution 2, it peers with the RR, which peers with PE1 and PE2. CE1 and CE2 attach to the PEs; the figure labels the
     addresses 2000::102 and FC01::5.)
• BGP-LS peer relationships can be established using IPv4 or IPv6 addresses. This course uses
  IPv6 addresses to establish such relationships.
  SRv6 Policy Path Computation and Deployment
   ⚫    With the SRv6 Policy path computation algorithm, the controller can provide the following path computation results if specified constraints are met:
        ▫    Minimum cost: path with the minimum cost among all qualified paths
        ▫    Minimum delay: path with the minimum delay among all qualified paths
        ▫    Bandwidth balancing: path with the most remaining bandwidth among all qualified paths that have the same cost
⚫ During SRv6 Policy creation, you need to specify a color value for each SRv6 Policy.
• Optional constraints:
            ▫ Bandwidth constraint: ensures that the bandwidth configured for a service does not
              exceed the remaining bandwidth of the link that the service traverses.
            ▫ PIR constraint: ensures that the peak bandwidth does not exceed the BC0 bandwidth
              of the link that the service traverses. PIR refers to the peak bandwidth of a service.
            ▫ Delay limit constraint: ensures that the path delay of a service does not exceed the
              configured delay limit.
            ▫ Hop limit constraint: ensures that the number of links that a service traverses does
              not exceed the configured hop limit.
            ▫ Affinity constraint: determines which types of links are allowed and which types of
              links are not allowed for services.
BGP IPv6 SR Policy Deployment
⚫    BGP IPv6 SR Policy is mainly used to deliver SRv6 tunnel information. As such, iMaster NCE-IP needs to establish a BGP IPv6 SR
     Policy peer relationship with the ingress of the involved tunnel.
     (Figure: iMaster NCE-IP establishes a BGP IPv6 SR Policy peer relationship with the tunnel ingress; CE1 and CE2
     attach to the PEs, and the figure labels the addresses 2000::102 and FC01::5.)
• BGP IPv6 SR Policy peer relationships can be established using IPv4 or IPv6 addresses. This
  course uses IPv6 addresses to establish such relationships.
VPN Service Forwarding over SRv6 Policies
⚫    The following types of VPNs are available in enterprise network scenarios:
     ▫    L2VPN: Customer IP addresses are on the same network segment.
     ▫    L3VPN: Customer IP addresses are on different network segments.
     ▫    EVPN: Customer IP addresses are either on the same network segment (L2VPN scenario) or on different
          network segments (L3VPN scenario).
⚫    A tunnel policy is used by an application module to select tunnels for services. There are two types of tunnel
     policies:
     ▫    (Preferred mode) Tunnel type prioritizing policy: recurses services to a tunnel based on the tunnel type
          priority and the number of tunnels participating in load balancing.
     ▫    Tunnel binding policy: binds a destination address to a tunnel, so that the traffic of VPN services
          referencing the policy and destined for this address will be transmitted over the tunnel.
⚫    VPN services first select tunnels in the up state based on the tunnel policy, and then select a forwarding path
     from qualified tunnels.
     (Figure: L2VPN, L3VPN, and EVPN services use tunnel policy-based tunnel type selection to recurse to an SRv6 Policy
     or an SRv6 Policy group; a forwarding path is selected among tunnels of the same type in either color mode or
     DSCP mode.)
1. (Short-answer question) An SRv6 SID has 128 bits. What are the three fields of an SRv6 SID?
1. An SRv6 SID has 128 bits and consists of the Locator, Function, and Arguments fields.
     ▫ The bearer network uses one physical network to carry production, office,
       and other services. Powerful BGP routing policies are used to control traffic
        on the bearer network. Traffic diversion policies are deployed based on
        service attributes so that different types of services, such as production and
        office services, can run on different paths based on the customized policies.
     ▫ As multiple services are migrated to the cloud and multiple networks are
       converged, issues such as low network resource utilization and unbalanced
        traffic distribution become more prominent. The segment routing
        technology can be used to flexibly plan paths to carry traffic, improving
        network utilization.
• High reliability:
• BGP SR-Policy delivers data forwarding path information to the headend through
  the BGP route. The headend then directs traffic to a specific SR Policy. Segment
  lists in SR Policies are used to guide traffic forwarding. A segment list is calculated
  based on a series of optimization objectives and constraints, such as delay, affinity,
  and SRLG.
• iFIT measures the packet loss rate and delay of service packets transmitted on an
  IP network to determine network performance. It is easy to deploy and provides
  an accurate assessment of network performance.
• Channelized sub-interfaces provide a mechanism to isolate different types of
  services. Different types of service traffic can be forwarded to different VLAN
  channelized sub-interfaces that use different dot1q encapsulation modes. Each
  channelized sub-interface can implement independent HQoS scheduling to isolate
  different types of services.
• Core nodes in the same city are interconnected through WDM, and core nodes in
  different cities are interconnected through inter-provincial or inter-metro carrier
  private lines. The number of core nodes must be comprehensively considered and
  cannot be too large.
• Uniformity: All IP addresses on the entire network are planned in a unified
  manner, including service addresses, platform addresses, and network addresses.
• Hierarchy: The massive IPv6 address space poses higher requirements on the
  route summarization capability. The primary task of IPv6 address planning is to
  reduce network address fragments, enhance the route summarization capability,
  and improve the network routing efficiency.
• Security: Services with shared attributes have the same security requirements.
  Mutual access between services needs to be controlled. Services with shared
  attributes are allocated with addresses in the same address space, which
  facilitates security design and policy management.
• The Function field is also called opcode, which can be dynamically allocated using
  an IGP or statically configured using the opcode command. When configuring a
  locator, you can use the static static-length parameter to specify the length of
  the static segment, which determines the number of static opcodes that can be
  configured in the locator. When an IGP dynamically allocates opcodes, it applies
  for opcodes outside of the static segment range to ensure that SRv6 SIDs do not
  conflict.
• The length of the Args field is set using the args args-length parameter. The Args field is optional in an SRv6
  SID; whether it is present depends on the command configuration.
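• For illustration, the locator command syntax quoted earlier in this course allows both lengths to be set together.
  In the following sketch, the locator name, prefix, and lengths are assumptions; with a 64-bit locator prefix, a
  32-bit static segment and a 16-bit Args field still fit within the 128-bit SID:
   [~PE1] segment-routing ipv6
   [*PE1-segment-routing-ipv6] locator test ipv6-prefix 2001:DB8:500:: 64 static 32 args 16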
• End.DT SIDs can be classified into End.DT4 SIDs and End.DT6 SIDs.
     ▫ OSPF router ID: The global router ID is used. Generally, the router ID is the
       same as the loopback0 address.
▫ Interface type: To speed up convergence, all interfaces are of the P2P type.
• IBGP: PEs use loopback addresses to establish IBGP peer relationships with all RRs
  and use MP-IBGP to exchange VPN routes.
• EBGP: PEs use interface IP addresses to establish EBGP peer relationships with CEs.
  In inter-AS VPN route exchange scenarios, Option A is generally used.
• BGP-LS: The controller establishes BGP-LS peer relationships with all RRs to
  collect logical topology information on the backbone network.
• Deploy independent RRs and establish IBGP peer relationships for RRs on the
  backbone network.
• In addition to EBGP, IGPs such as OSPF, IS-IS, and RIP can also be used between
  PEs and CEs on the bearer network. Static routes can also be used to meet the
  requirements of flexible access in various scenarios.
• PE3 changes the MED value to 100 for the route whose next hop is PE1 (a PE on
  the same plane) and changes the MED value to 200 for the route whose next hop
  is PE2 (a PE on a different plane).
• PE4 changes the MED value to 100 for the route whose next hop is PE2 (a PE on
  the same plane) and changes the MED value to 200 for the route whose next hop
  is PE1 (a PE on a different plane).
• If PE1 and PE2 on the left learn the same VPN route and advertise the route to
  PE3 and PE4 on the right through the RR, PE3 and PE4 preferentially select the
  VPN route on the same plane as them. After the route is advertised to the CE,
  traffic from the CE preferentially travels along the route advertised by PE3
  (because the MED value of the route advertised by PE3 is only increased by 10).
• MPLS is a tunneling technology that guides data forwarding in essence and has
  complete tunnel creation, management, and maintenance mechanisms. The
  preceding mechanisms are driven by network operation and management
  requirements, not by applications.
• The process of forwarding VPN traffic based on SR-MPLS BE is similar to the
  process of forwarding BGP/MPLS IP VPN traffic based on LDP.
     ▫ After PE1 receives a VPN packet from CE1, PE1 searches the routing table
        and pushes two layers of labels into the packet. The outer label is a public
        network label, and the inner label is a private network label.
     ▫ PE1 then sends the packet to P1, which swaps the outer label of the packet
       based on the SR-MPLS BE tunnel entry and sends the packet to P2. The
        process on P2 is similar to that on P1.
     ▫ Upon receipt of the packet, PE2 sends the packet to a specific VPN site
        based on the inner label (PHP is not considered in this case).
• When an SR-MPLS Policy is used to carry VPN traffic, the forwarding path must
  be pre-computed and delivered to the ingress (PE1) as a segment list.
     ▫ After receiving a VPN packet from CE1, PE1 searches the corresponding
        table and pushes the related segment list into the packet.
     ▫ PE1 then sends the packet to P1, which determines the forwarding path
       based on the outer label, pops out the outer label, and sends the packet to
       P2.
     ▫ After receiving the packet, P2 determines the forwarding path based on the
       outer label, pops out the outer label, and sends the packet to PE2.
     ▫ PE2 sends the packet to the specified VPN site according to the inner label.
• When SRv6 BE is used to carry VPN traffic, data packets carry two layers of IPv6
  headers. The outer IPv6 header address is used to identify the VPN to which the
  data belongs, and the inner IPv6 header identifies the actual destination address
  of the data.
     ▫ The outer IPv6 address is generated by the locator of PE2 and advertised to
        PE1 through BGP. PE2 advertises the locator to other devices in the form of
        a route.
     ▫ After PE1 receives a packet destined for the destination network segment
       (2001::/64), PE1 encapsulates the packet with an outer IPv6 header and
       forwards the packet based on the routing table.
▫ Ps (P1 and P2) forward the packet based on the outer IPv6 header.
     ▫ After receiving the packet, PE2 matches the packet with the corresponding
       VPN instance based on the outer IPv6 header and forwards the packet
        based on the routing table.
• When SRv6 Policy is used to carry VPN traffic, data packets carry two layers of
  IPv6 headers. The outer IPv6 header address is replaced by each hop based on the
  SRH information, and the inner IPv6 header identifies the actual destination
  address of the data.
     ▫ Upon receipt of a packet destined for 2001::/64, PE1 adds an outer IPv6
       header (including the SRH) to the packet and sends the packet to the next
       hop based on the header.
• SR-MPLS BE tunnels are similar to LDP tunnels. Tunnel establishment depends on
  IGP design. Therefore, after IGP design is complete, SR-MPLS BE design is
  complete.
     ▫ If the network is large (for example, a network with multiple data centers
        and dozens of branches) and has 5,000 to 10,000 TE tunnels, the
        maintenance workload is heavy.
• For a tunnel planned based on bandwidth, the actual traffic volume of the tunnel
  cannot be limited on devices after the tunnel is delivered. The traffic volume of a
  tunnel needs to be limited on the ingress, and the QoS or network slicing
  technology needs to be used.
• In color-based traffic diversion, different tunnels (including primary and backup
  tunnels) can only be selected based on endpoints. If different service traffic (such
  as HTTP and FTP traffic) is destined for the same address, color-based traffic
  diversion diverts the traffic to the same tunnel. As a result, the quality of some
  services deteriorates.
• CP1 is the activated path because it is valid and has a higher priority. The two SID
  lists (also called segment lists) of CP1 are delivered to the forwarder, and traffic is
  balanced between the two tunnel paths based on weight. For example, traffic
  along the SID list <SID11, SID12> is balanced based on W1/(W1+W2). In the
  current mainstream implementation, a candidate path has only one segment list.
• RPO: recovery point objective
     ▫ Automatic optimization upon bandwidth threshold crossing: You can set the
        link threshold. Then, when the bandwidth usage of a link exceeds the
        threshold, the system automatically adds tunnels over the link to the path
        computation queue and performs optimization when the optimization
        period arrives.
• Optimization policy:
     ▫ The first era is the Internet era. The iconic technology of this era is best-
       effort forwarding represented by IPv4.
     ▫ The second era is the all-IP era. The core technology of this era is MPLS,
       which supports applications such as TE and E2E VPN.
• The IPE technology system is classified into the following three categories:
     ▫ The third category comprises APN6 and SFC. APN6 provides application-
       level network service capabilities from the application perspective, and SFC
       provides flexible programming of network capabilities.
     ▫ The carrier has two backbone networks. Network A has high bandwidth but
       does not have the VPN service capability. As a result, cleaned traffic may be
        sent back to the cleaning center if network A is used.
     ▫ Network B has the VPN service capability but does not have high
       bandwidth. It is unable to support the cleaning of all service traffic.
• In this situation, SRv6 can be deployed on network A to direct traffic to this high-
  bandwidth backbone network. This implementation can provide protection for
  hundreds of DCs. SRv6 is easy to deploy and supports fast service provisioning.
• Currently, large numbers of enterprises are deploying their services on the cloud.
  Carriers must consider how to quickly provision site-to-cloud private lines for
  enterprises.
     ▫ Traditional private lines, such as MPLS private lines, may involve multiple
       ASs. Different ASs are managed and maintained by different management
       teams. The provisioning of a private line for one-hop cloud access involves
        collaboration and coordination among multiple departments.
     ▫ With SRv6, the deployment is easy. An E2E SRv6 logical private line can be
       established between the enterprise CPE and cloud PE to carry cloud-based
       services, achieving one-hop cloud access. Moreover, private line provisioning
       is very fast in this case.
• The iMaster NCE + SRv6 solution measures the delay of each link on the network
  and computes paths based on the shortest delay. This solution allows public
  cloud-based services to travel along the shortest-delay paths, improving the
  competitiveness of the public cloud.
• WAMS: Wide Area Measurement System
• One WAN for all services is a technology that provides cross-domain network
  services through coordination among different networks. In the financial industry,
  tier-2 banks, outlets, subsidiaries, and external organizations access the head
  office DC through tier-1 banks, which aggregate service traffic and forward
  aggregated traffic to the bank core network.
• The financial industry has high requirements on SLA performance. With the
  development of banking services, diversified service types have emerged in
  outlets. In addition to traditional production and office services, there are also
  security protection, IoT, public cloud, and other services. This poses higher O&M
  requirements on the one-financial-WAN-for-all-services scenario. Against this
  backdrop, Huawei proposes the iFIT tunnel-level measurement solution.
• BIER overview:
     ▫ This multicast technology encapsulates a set of destination nodes of
       multicast packets in a BitString in the packet header before sending the
       packets. With this multicast technology, transit nodes do not need to
       establish a multicast distribution tree (MDT) for each multicast flow, or
       maintain the states of multicast flows. Instead, the transit nodes replicate
       and forward packets according to the BitString in the packet header.
     ▫ In BIER, each destination node is a network edge node. For example, on a
       network with no more than 256 edge nodes, each node needs to be
       configured with a unique value ranging from 1 to 256. In this case, the set
       of destinations is represented by a 256-bit (32-byte) BitString, and the
       position or index of each bit in the BitString indicates an edge node. This
       explains the meaning of Bit Index Explicit Replication.
• Advantages of BIER:
     ▫ Supports large-scale multicast service scenarios and reduces resource
       consumption as BIER does not need to establish an MDT for each multicast
       flow or maintain the states of multicast flows.
     ▫ Improves multicast group joining efficiency of multicast users in SDN
       network scenarios because requests of the multicast users do not need to
       be forwarded along the MDT hop by hop, and instead their requests are
       directly sent by leaf nodes to the ingress node. This is more suitable for the
       controller on an SDN network to directly deliver the set of destinations to
       which multicast packets are to be sent after collecting the set.
• BIERv6 inherits the advantages of BIER and uses IPv6 to program paths,
  functions, and objects, facilitating multicast forwarding on SRv6-based networks.
• Bit allocation fundamentals: BIER floods the mapping between bit positions (BFR-
  IDs) of nodes and prefixes through IS-IS LSPs (IS-IS for BIER is used as an
  example). Devices learn the complete BIFT (BIER neighbor table) through
  flooding. The BIFT has the following characteristics:
▫ In the neighbor table, each directly connected neighbor has one entry.
     ▫ Each entry contains information about the edge nodes that are reachable
       to a neighbor.
          ▪ BIERv6 uses IPv6 addresses to carry Multicast VPN (MVPN) and GTM
            services, further simplifying protocols and eliminating the need to
            allocate, manage, and maintain MPLS labels.
• WAN:
    ▫ The IPv6 evolution of enterprise WANs mainly considers the WAN upgrade
      policy.
• The overall IPv4-to-IPv6 network migration principle is "DCN first, WAN second,
  and campus network reconstruction on-demand".
     ▫ Phase 1: Deploy dual-stack services in the DC's public service and test zones,
       deploy IPv4 single-stack services on the WAN's underlay network and dual-stack
       services on the WAN's overlay network, and pilot dual-stack services on the
       campus network.
    ▫ The reconstruction solutions for the Internet access zone include NAT64, IVI,
      and dual-stack reconstruction. It is recommended to use the dual-stack
      solution to provide IPv6 addresses and service capabilities.
           ▪ The IPv6 address structure of the IVI is limited and does not meet the
             IPv6 address planning principles. Therefore, the IVI is not
             recommended for large-scale deployment.
           ▪ VXLAN underlay IPv4 + overlay dual stack can be used for initial dual-
             stack reconstruction to quickly provide IPv6 service bearer capabilities.
           ▪ VXLAN underlay IPv6 + overlay dual stack can be used for new DCN
             deployment and existing DCN reconstruction. This facilitates gradual
             evolution to IPv6-only networks.
• DCN architecture:
     ▫ Integration verification: Design and verify the IPE evolution solution and
       prepare a feasibility report.
     ▫ Network construction: Build an IPE network based on the network plan and
       deploy capabilities such as SRv6, network slicing, in-band flow
        measurement, and SDN.
     ▫ Integration verification: Design and verify the IPE evolution solution and
       prepare a feasibility report.
     ▫ Edge device upgrade: Upgrade edge devices (cloud, Internet egress, and
       campus egress PEs preferred) to support IPE.
     ▫ IPE basic capability deployment: Deploy both SRv6 and traditional IP/MPLS
       on upgraded devices. Configure new and old devices to interwork through
        IP/MPLS, and use SRv6 between new devices. Deploy a controller for
        network-wide management and control.
• Overlay service deployment suggestions:
     ▫ Layer 2 services: Use SRv6 EVPN as the bearer protocol to provide P2P and
       P2MP connection models. Compared with traditional Virtual Switching
       Instances (VSIs) and Pseudo Wires (PWs), EVPN features simple
       deployment, high bandwidth utilization, and fast convergence. EVPN is the
       best choice for L2VPN service bearer.
     ▫ Layer 3 IPv4 services: Use either SRv6 L3VPN or SRv6 EVPN L3VPN. Using
       SRv6 EVPN L3VPN is recommended.
     ▫ Layer 3 IPv6 services: Use either SRv6 L3VPNv6 or SRv6 EVPN L3VPNv6.
       Using SRv6 EVPN L3VPNv6 is recommended.
• Loopback routes and SRv6 locator routes need to be advertised in an IGP domain.
  Loopback routes are used for network management or BGP peer relationship
  establishment. Locator routes are used to guide the forwarding of data traffic
  over SRv6 tunnels in an IGP domain.
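• For example, in the L3VPNv4 over SRv6 BE lab earlier in this course, the locator route is advertised into the IGP
  with the following two commands (IS-IS process 1 and locator as100, as configured there):
   [~PE1] isis 1
   [~PE1-isis-1] segment-routing ipv6 locator as100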
• BGP Egress Peer Engineering (EPE) is used to allocate SRv6 SIDs to BGP peers
  between ASs in inter-AS scenarios.
• Route advertisement:
     4. After receiving the VPN IPv6 route advertised by CE2, PE2 converts it into
        an EVPN IP prefix route and advertises it to PE1 through the BGP EVPN
        peer relationship. The route carries an SRv6 VPN SID — VPN End.DT6 SID
        2002:1::D100.
     5. After receiving the EVPN route, PE1 leaks it to the IPv6 routing table of the
        corresponding VPN instance, converts it into a common IPv6 route, and
        advertises it to CE1.
• Campus networks mainly involve large- and medium-sized office campuses,
  small-sized branch office campuses, and industrial production campuses. There
  are three scenarios for intra-campus mutual access: mutual access between
  internal office systems, Internet access, and mutual access between production
  systems.
           ▪ In the scenario where only a single Internet private line is leased for
             backhaul, the campus egress generally connects to the intranet over
              an IPsec tunnel. One of the following solutions can be used: dual-
              stack traffic over IPsec6, dual-stack traffic over IPsec4, or dual-
              stack traffic over GRE over IPsec6/4.
• Phase 1 evolution strategy: Perform IPv6 and SDN evolution concurrently to
  achieve two objectives at one time.
     ▫ Overall solution roadmap: Create a core network that supports SDN and
       IPv6 network virtualization, and connect the core network to the campus
       egress network. Reconstruct the existing network building by building
       (aggregation + access) and connect the network to the new core network.
        Use the SDN controller for automated deployment in reconstructed
        buildings.
     ▫ If multiple egresses are involved, BGP can be used for interconnection with
       external networks and IGPs can be used to advertise default routes on the
       internal network. The campus network internally advertises default egress
       routes to ensure that internal service packets can reach egress routers. The
       egress routers connect to the Internet using BGP to implement optimal
       path selection and load balancing.
• Service forwarding:
     ▫ The overlay IPv6 design is similar to the original overlay IPv4 design. The
       underlay configuration from the access layer to the core layer remains
       unchanged. The VXLAN control plane uses BGP EVPN, and core switches are
       configured as RRs. Enable IPv6 on the centralized gateway and configure an
       IPv6 address for the VBDIF interface to ensure Layer 2 IPv6 communication.
     ▫ Edge nodes on the access side can associate forwarded packets with overlay
       BDs based on interface VLANs, enabling terminals to be assigned to
        different gateway areas. In VXLAN Layer 3 forwarding, terminal packets are
        first sent to the centralized gateway, which forwards east-west and
        north-south traffic in a unified manner.
     ▫ For design details about internal network interconnection and external
       network interconnection involved in the network egress zone, see the
       traditional campus network solution.
• Access authentication:
     ▫ Edge nodes on the campus network provide access for dual-stack users.
       Single authentication and dual-stack service policy association needs to be
       implemented for dual-stack users. This helps prevent dual-stack terminals
       from undergoing two separate authentications when accessing IPv4 and
       IPv6 services.
     ▫ A campus network with VXLAN underlay IPv4 + overlay dual stack must
       support various authentication modes (such as 802.1x, Portal, and MAC
       address authentication) for IPv6 users, so that authentication schemes can
       be implemented flexibly based on user terminals (such as Portal
       authentication for guest terminals and 802.1x authentication for internal
       office terminals). The deployment of network authentication points, policy
       enforcement points, and access points in the IPv6 solution is consistent with
       that in the original IPv4 solution. The authentication server uses a unified
       controller to provide authentication policy services.
• Wireless terminal access solution:
     ▫ Different VLANs are configured for guest SSIDs. The core switch or AC
       functions as the gateway for wireless access of guest terminals. The SLAAC
       hybrid solution is used to allocate IPv6 addresses. Portal authentication is
       recommended.
• AP management solution:
     ▫ The CAPWAP tunnel supports both IPv4 and IPv6. However, only IPv4 or
       IPv6 can be selected at one time. That is, the AC can manage APs only in
       either IPv4 or IPv6 mode. The default mode is IPv4.
     ▫ APs can go online in either IPv4 or IPv6 mode. That is, an AP can obtain
       only one IP address. The AC's IP address is manually configured. The AC can
       use DHCPv6 or SLAAC to assign IP addresses to APs.
• Note: You can run the capwap ipv6 enable command to enable the IPv6
  function for the CAPWAP tunnel.
• NMS IPv6 reconstruction:
     ▫ The NMS is not user-oriented and requires only a small number of IP
       addresses. Therefore, IPv4 addresses can still be reserved for internal
       communication, and IPv6 addresses are only optional.
     ▫ Through reconstruction, the NMS can also provide the following functions:
           ▪ Identifies and manages various IPv6 address types.
           ▪ Supports DNS resolution monitoring in IPv6.
           ▪ Collects performance, resource, and fault data of IPv6 and dual-stack
             devices, and provides functions such as performance management,
             resource management, and fault management and analysis for these
             devices.
           ▪ Associates IPv4 with IPv6 for dual-stack devices, so that the resource,
             performance, and fault data of these devices can be smoothly
             associated with historical data.
           ▪ Accesses the IPv6 MIBs of devices through IPv4-based SNMP and
             obtains IPv6 configuration, traffic, and other information from the
             IPv6 MIBs.
• IPv6 reconstruction for server operating systems:
     ▫ Common Linux versions that support IPv6 are as follows (IPv6 installed by
       default):
           ▪ Fedora 13, Red Hat Enterprise Linux 6, Ubuntu 12.04, Oracle Solaris
             10, SUSE Linux Enterprise Server 11, etc.
     ▫ Windows Server supports IPv6 as follows:
           ▪ Windows Server 2003 supports IPv6, but IPv6 is not installed by
             default and needs to be manually installed.
           ▪ Windows Server 2008 and later versions have IPv6 installed by
             default.
1. ABC
2. A
• There are different device management modes, such as SNMP, CLI, IPFIX, and Web UI.
• Orchestration application layer: implements various upper-layer applications of user
  intents. Typical orchestration applications include OSS and OpenStack. The OSS is
  responsible for service collaboration on the entire network. OpenStack is used for
  network, computing, and storage service collaboration in a data center. There are
  other orchestration-layer applications. For example, a user wants to deploy a security
  application. The security application does not care about the device deployment
  location but invokes a controller NBI, for example, Block (Source IP, DestIP). Then the
  controller delivers an instruction to network devices. This instruction varies according
  to the southbound protocol.
• Controller layer: The entity at the controller layer is the SDN controller, which is the
  core of the SDN network architecture. The control layer is the brain of the SDN system.
  Its core function is to implement network service orchestration.
• Device layer: The network devices receive instructions from the controller and
  forward data based on the instructions.
• NBIs: NBIs are used by the controller to interconnect with the orchestration application
  layer. The main NBIs are RESTful interfaces.
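• For illustration only, the following Python sketch shows how an upper-layer
  application might invoke such a RESTful NBI to block traffic between two IP
  addresses, in the spirit of the Block (Source IP, DestIP) example above. The URL
  path, token header, and JSON body are assumptions, not a real controller API.

    # Hedged sketch: calling a hypothetical RESTful NBI of an SDN controller.
    import requests

    CONTROLLER = "https://controller.example.com:18002"
    TOKEN = "example-token"          # assumption: obtained from a login API

    payload = {"srcIp": "10.1.1.10", "dstIp": "10.2.2.20", "action": "block"}
    resp = requests.post(
        f"{CONTROLLER}/restconf/operations/acl:block-flow",   # hypothetical path
        json=payload,
        headers={"X-AUTH-TOKEN": TOKEN},                      # hypothetical header
        verify=False,   # sketch only; verify the certificate in production
    )
    print(resp.status_code, resp.text)
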
• SBIs: SBIs are protocols used for interaction between the controller and devices,
  including NETCONF, SNMP, OpenFlow, and OVSDB.
• Network automation tools implement basic network automation. That is, tools connect
  to devices through SSH to implement batch operation and management.
• Unstructured data is easy for humans to understand, but difficult for machines to
  parse and to collect automatically.
• iMaster NCE is not only a controller, but also provides analysis and network
  management functions.
• Network automation developers may need to have more professional knowledge, such
  as database, algorithm, cryptography, software development lifecycle management,
  development framework, big data, cloud computing, and artificial intelligence (AI),
  depending on the specific work content and scenario.
• Part 1 of this course module describes how to use Python modules, including
  paramiko, pysnmp, ncclient, requests, and grpc, to communicate with devices.
• Part 2 focuses on the OPS. The OPS refers to open programmability provided by
  Huawei devices. You can upload Python code to a device, and the device runs the code
  to implement specified functions.
• An SND abstracts device capabilities based on a device YANG model. A user can
  generate an SND from device YANG files and a small amount of Python code. After
  the SND is uploaded to NCE, device management and service provisioning can be
  implemented. SND types include NETCONF SNDs, CLI SNDs, and customized SNDs.
     ▫ NETCONF SND: provides the capability of converting YANG model data into
       NETCONF packets.
• An SSP allows users to customize network services (apps), for example, to quickly
  provision L3VPN services. These services or applications involve multiple devices
  and protocols and are presented as an SSD. To compile an SSD, an engineer needs to
  compile service YANG files, Python scripts (service callback logic) for service
  mapping, and Jinja2 templates. The basic principles are as follows (from north to
  south):
• https://en.wikipedia.org/wiki/General_purpose_technology
     ▫ There are many data issues, such as a few data sources and the requirement for
       data governance (labor-intensive).
     ▫ There are many algorithm engineering issues, such as conversion from paper to
       code and the efficiency of open-source algorithms.
     ▫ Computing power is difficult to obtain and is consumed during peak hours.
       (NVIDIA does not allow the use of its G series GPUs in data centers.)
• The SSH transport layer protocol uses the Diffie-Hellman key exchange algorithm to
  implement PFS.
• When the SSH user authentication protocol is started, it receives a session ID from the
  SSH transport layer protocol. The session ID uniquely identifies a session and is a part
  of the digital signature to indicate the ownership of the private key.
• A TCP/IP connection can forward network data of other TCP ports through SSH
  channels, ensuring security.
• Data of Telnet, SMTP, IMAP, and other TCP/IP-based insecure protocols can be
  forwarded through SSH, which prevents the transmission of user names, passwords,
  and privacy information in plaintext and therefore enhances security. In addition, if the
  firewall restricts the use of some network ports but allows the SSH connection,
  communication can be implemented through the SSH TCP/IP connection.
• In X11, X refers to the X protocol, and 11 is the eleventh version of the X protocol. The
  Linux graphical user interface (GUI) is based on the X protocol at the bottom layer.
  When remote interaction with graphical applications on the Linux server is required, a
  method for enhancing communication security is to use SSH to display the GUI on the
  local client through the X11 tunnel.
• The client and server randomly generate private keys Xc and Xs, respectively.
• The client and server then calculate their public keys Yc and Ys, respectively.
• The client and server calculate the session key for encryption based on the public and
  private keys.
• The Diffie-Hellman key exchange algorithm is used for key exchange. It is based on
  the discrete logarithm problem and is not described in detail in this course. During
  key exchange, the private keys Xc and Xs are never transmitted, and because
  computing discrete logarithms is difficult, they cannot be derived by other parties
  even if p, g, Yc, and Ys are obtained. This ensures the confidentiality of the session
  keys.
• Note that the public and private keys generated in this phase are used only to
  generate session keys and are irrelevant to subsequent user authentication. After the
  key exchange phase is complete, all subsequent packets are encrypted based on the
  session keys.
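• The following minimal Python sketch illustrates the key agreement described above
  with toy numbers; the prime, generator, and key sizes are assumptions for
  illustration and are far smaller than those negotiated by SSH.

    # Toy Diffie-Hellman key agreement: both sides derive the same session key
    # without ever transmitting the private keys Xc and Xs.
    import secrets

    p = 4294967291   # assumption: a small prime (2^32 - 5), illustration only
    g = 5            # assumption: generator for illustration

    xc = secrets.randbelow(p - 2) + 1   # client private key Xc
    xs = secrets.randbelow(p - 2) + 1   # server private key Xs

    yc = pow(g, xc, p)                  # client public key Yc = g^Xc mod p
    ys = pow(g, xs, p)                  # server public key Ys = g^Xs mod p

    k_client = pow(ys, xc, p)           # (g^Xs)^Xc mod p
    k_server = pow(yc, xs, p)           # (g^Xc)^Xs mod p
    assert k_client == k_server         # both sides share the same secret
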
• The digital signature is generated (encrypted) with the client's private key. The
  content can be verified only by decrypting it with the corresponding public key.
• The channel types include session, x11, forwarded-tcpip, and direct-tcpip.
• For details, see section 4.9.1 "Connection Protocol Channel Types" in RFC4250 at
  https://www.ietf.org/rfc/rfc4250.txt.
• The Message class provides methods for writing bytes to a stream and extracting
  bytes.
• The Packetizer class provides methods for checking handshakes and obtaining channel
  IDs.
• The Transport class provides methods such as public key authentication, private key
  authentication, and channel opening.
• The SSHClient class provides methods for establishing connections and opening
  interactive sessions.
• The SFTPClient class provides methods such as file upload and download.
• OpenSSH is a free open-source implementation of the SSH protocol. It provides server
  programs and client tools. OpenSSH is integrated in all Linux operating systems.
  OpenSSH records the public key of each computer that a user has accessed in
  ~/.ssh/known_hosts. When the same computer is accessed next time, OpenSSH checks
  the public key. If the public keys are different, OpenSSH generates a warning to
  prevent man-in-the-middle attacks.
• This course describes methods of four classes: Transport, key handling, SSHClient, and
  SFTPClient.
• This process uses the Paramiko SFTP session as an example. Because the SSHClient
  class integrates the Transport, Channel, and SFTPClient classes, the preceding methods
  can be implemented by the SSHClient class. This is especially true for SSH sessions.
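• A minimal sketch of the SSHClient usage described above is shown below; the host
  address, credentials, command, and file paths are placeholders, and the missing-
  host-key policy is relaxed for brevity only.

    # Hedged sketch: run a command and download a file with paramiko.
    import paramiko

    client = paramiko.SSHClient()
    # Accept unknown host keys in this sketch; production code should load
    # known_hosts and reject mismatches to prevent man-in-the-middle attacks.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("192.0.2.10", port=22, username="admin", password="password")

    stdin, stdout, stderr = client.exec_command("display version")
    print(stdout.read().decode())

    sftp = client.open_sftp()                 # SFTPClient on the same transport
    sftp.get("config.cfg", "config.cfg")      # hypothetical remote/local paths
    sftp.close()
    client.close()
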
• For ease of use, you can use an address (as a tuple) or a host string as the sock
  parameter. The host string is the host name with an optional port, separated by a
  colon (:). If a port is transferred, it is converted to a tuple in the format (host name,
  port).
• Generally, when a client connects to an SSH server for the first time, OpenSSH
  prompts you to enter yes or no to confirm the server's public key before recording it
  in ~/.ssh/known_hosts.
• For details about the commands, refer to the product documentation at
  https://support.huawei.com/enterprise/en/doc/EDOC1000097293/466984de?idPath=24
  030814|21432787|21430822|22318704|9794900.
1. ABCDE
• SNMP is based on UDP and is stateless, unordered, and unreliable for configuration
  management.
• SNMP configuration operates on individual objects rather than on a service as a
  whole. During the concurrent configuration of multiple objects, if some objects are
  configured successfully but others fail, the network may be left in an unpredictable
  state.
• NETCONF 1.0 does not mandate a specific modeling language, whereas NETCONF 1.1
  is explicitly combined with YANG.
• This is a simplified example. The YANG model does not describe the entire device in
  one YANG file. Instead, the model is split into multiple YANG files by function.
• For details, see RFC 6241.
• <config> may contain the optional attribute <operation>, which is used to specify an
  operation type for a configuration item. If the <operation> attribute is not carried, the
  <merge> operation is performed by default. The <operation> attribute values are as
  follows:
     ▫ merge: In the database, modify the existing data or create data that does not
       exist. This is the default operation.
     ▫ create: Add configuration data to the configuration database only when the
       configuration data to be created does not exist in the configuration database. If
        the configuration data exists, <rpc-error> is returned, in which the <error-tag>
        value is data-exists.
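• As a hedged illustration of the <operation> attribute, the following ncclient sketch
  carries nc:operation="create" on a configuration node. The device address,
  credentials, module namespace, and interface name are assumptions; a real device
  requires its own YANG model and datastore handling.

    # Hedged sketch: <edit-config> with the create operation via ncclient.
    from ncclient import manager

    CONFIG = """
    <config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
      <interfaces xmlns="urn:example:interfaces">      <!-- hypothetical module -->
        <interface nc:operation="create">              <!-- data-exists if present -->
          <name>GigabitEthernet0/0/1</name>
          <description>created via NETCONF</description>
        </interface>
      </interfaces>
    </config>
    """

    with manager.connect(host="192.0.2.1", port=830, username="admin",
                         password="password", hostkey_verify=False) as m:
        reply = m.edit_config(target="running", config=CONFIG)
        print(reply.ok)
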
• YANG files can be classified into three types: vendor-specific YANG files, IETF-defined
  YANG files, and OpenConfig YANG files.
2. YANG is a modeling language used to describe the content layer of NETCONF and
   RESTCONF. The difference between NETCONF and RESTCONF is as follows:
   RESTCONF constructs the transport layer, messages layer, and operations layer based
   on HTTP, while NETCONF has defined the operations layer and uses SSH as the
   transport layer and RPC as the messages layer.
• A microburst refers to a situation in which a large amount of burst data is received
  within a very short time (milliseconds), so that the burst data rate is tens or hundreds
  times higher than the average rate or even exceeds the port bandwidth. The NMS or
  network performance monitoring software calculates the real-time network bandwidth
  at an interval of seconds to minutes. At such an interval, the network traffic seems to
  be stable. However, packet loss may have occurred due to microbursts.
• SNMP queries are performed in a question-answer manner. If 1000 interactions are
  performed within 1 minute, SNMP parses 1000 query request packets. Telemetry
  avoids repeated queries. This is because subscription needs to be performed only once
  and then devices can continuously push data to the NMS.
• In the industry, SNMP is sometimes regarded as a traditional telemetry technology,
  and what is now called telemetry is referred to more precisely as streaming
  telemetry or model-driven telemetry.
• Google Remote Procedure Call (gRPC) is an open-source remote procedure call (RPC)
  system developed by Google.
• The collector constructs data in GPB or JSON format based on the subscribed event,
  compiles a .proto file through Protocol Buffers, establishes a gRPC channel with the
  device, and sends a request message to the device using gRPC.
• After receiving the request, the device parses the .proto file using Protocol Buffers to
  restore the data for processing.
• After data sorting is complete, the device re-compiles the data using Protocol Buffers
  and sends a response to the collector using gRPC.
• The collector receives the response message. So far, the gRPC interaction ends.
• After the files are compiled successfully, multiple Python files are generated in the
  current folder.
• The gRPC Python library is installed by running the pip install grpcio command; the
  grpcio-tools package additionally provides the Protocol Buffers compiler plug-in.
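• As a hedged sketch, the compilation step can be scripted from Python with
  grpcio-tools as shown below; the file name huawei_telemetry.proto is a placeholder
  for the .proto file obtained from the device documentation.

    # Compile a .proto file into Python stubs (generates *_pb2.py and
    # *_pb2_grpc.py in the current folder).
    from grpc_tools import protoc

    protoc.main([
        "protoc",
        "-I.",                       # search path for .proto imports
        "--python_out=.",            # message classes
        "--grpc_python_out=.",       # gRPC service stubs
        "huawei_telemetry.proto",    # placeholder file name
    ])
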
• Managed object (MO): an object on a network device that can be managed by
  invoking RESTful APIs, such as CPU information, system information, and interface
  information.
• Uniform Resource Identifier (URI): identifies a specific resource. In the OPS, URIs are
  used to identify MOs. For example, the URI of the CPU information is
  /devm/cpuInfos/cpuInfo, which uniquely identifies the CPU information.
• Uniform resource locator (URL): A URL is a URI that can be used to present a resource
  and specify how to locate the resource, for example, http://www.ietf.org/rfc/rfc2396.txt
  and ftp://ftp.is.co.za/rfc/rfc1808.txt.
• Huawei network devices that support the OPS provide a running environment for
  Python scripts. Scripts in Java and C/C++ are not supported.
• An API is a particular set of rules and specifications that are used for communication
  between software programs.
• For more information about RESTful, see the HCIP Programming and Automation
  Course — RESTful Fundamentals and Practices.
• The OPS allows you to compile Python scripts, install the scripts on network devices,
  and send HTTP requests when the scripts are running to manage network devices.
• Currently, the implementation of RESTful APIs uses the HTTP standard specifications.
  Therefore, this section briefly describes HTTP.
• <headers> and <entity-body> are the header field and body of the packet on the
  previous page.
• Header field:
     ▫ Host: contains the host name and port number of the web server.
     ▫ User-Agent: contains information about the user agent originating the request.
     ▫ Accept: specifies response media types that are acceptable.
     ▫ Accept-Language: indicates the set of natural languages that are preferred in
       the response.
     ▫ Date: represents the date and time at which the message was originated.
     ▫ Server: contains information about the software used by the origin server to
       handle the request.
     ▫ Last-Modified: indicates the date and time at which the requested object was
       last modified.
     ▫ ETag: specifies an identifier for a specific version of a resource, often a message
       digest.
     ▫ Accept-Ranges: allows a server to indicate that it supports range requests for the
       target resource.
     ▫ Vary: describes what parts of a request message, aside from the method, Host
       header field, and request target, might influence the origin server's process for
       selecting and representing this response.
     ▫ Content-Length: indicates the length of the response body in octets.
     ▫ Content-Type: indicates the Multipurpose Internet Mail Extensions (MIME) type
       of the content.
• For details about the header fields, see RFC 2616.
• The formats of the OPS RESTful API request and response packets are similar to those
  of the HTTP request and response packets described in the previous slide.
• Currently, the OPS RESTful APIs use the XML format to transmit data; JSON support
  may be added in a later version. Therefore, the body of the OPS RESTful API request
  and response packets is in XML format.
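• The following Python sketch, built on the standard http.client module, shows the
  shape of such a request against the CPU information URI mentioned earlier; the
  management address, port, and authentication handling are assumptions, so consult
  the device's RESTful API Reference for the exact requirements.

    # Hedged sketch: query the cpuInfo MO over an OPS-style RESTful API.
    import http.client

    conn = http.client.HTTPConnection("192.0.2.1", 80)    # placeholder address
    headers = {"Content-Type": "application/xml", "Accept": "application/xml"}
    conn.request("GET", "/devm/cpuInfos/cpuInfo", headers=headers)

    response = conn.getresponse()
    print(response.status, response.reason)
    print(response.read().decode())    # XML body carrying the cpuInfo MO
    conn.close()
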
• You can download RESTful API Reference on the network device page of
  http://support.huawei.com.
• The maintenance assistant is a function of Huawei network devices. You can set the
  trigger conditions and the Python script to be executed when the conditions are met.
  The system monitors device running in real time. When the specified trigger condition
  is met, the network device system automatically executes the Python script to
  complete the actions defined in the script. For more information about the
  maintenance assistant, see the Huawei network device product documentation.
• DHCP server: allocates the temporary IP address, default gateway, and script file server
  address to the device to be automatically deployed.
• Script file server: stores scripts (Python) required for automatic network device
  deployment. By running the script files, a network device can obtain information such
  as the IP address of the software and configuration file server, version file, and
  configuration file.
• Software and configuration file server: stores system software, configuration files, and
  patch files required for automatic network device deployment.
• A Python script can be compiled to deliver commands. When the network is
  disconnected, the execution result is temporarily stored on the device. After the
  network is recovered, the execution result is transmitted to the server. Therefore, the
  impact of network disconnection can be mitigated.
• After knowing the format of the response message, you can parse the response
  message in the Python script. In this case, the response message is only displayed. You
  can try to parse the response message to implement more complex functions.
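• A minimal parsing sketch with the standard library is shown below; the element
  names are illustrative assumptions rather than values taken from the product
  manual.

    # Hedged sketch: extract one field from an XML response body.
    import xml.etree.ElementTree as ET

    xml_body = """
    <rpc-reply>
      <data>
        <cpuInfo>
          <systemCpuUsage>23</systemCpuUsage>
        </cpuInfo>
      </data>
    </rpc-reply>
    """

    root = ET.fromstring(xml_body)
    usage = root.findtext(".//systemCpuUsage")   # search anywhere in the tree
    print("CPU usage: {}%".format(usage))
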
• For details about how to enable an FTP server on the local PC, you can easily find
  instructions using a search engine.
1. ABCD
• The REST software architecture was first described by Roy Fielding in his doctoral
  dissertation. Roy Fielding is one of the major authors of the HTTP specifications.
• OpenFlow was defined in the initial phase of SDN. With technology development,
  many other southbound interface (SBI) protocols are defined between the controller
  and network devices.
• SDN is a broader concept, not limited to OpenFlow. Separation between the control
  and data planes is a method rather than the essence of SDN.
• Application layer: provides various upper-layer applications for service intents, such as
  OSS and OpenStack. The OSS is responsible for service orchestration of the entire
  network, and OpenStack is used for service orchestration of network, compute, and
  storage resources in a DC. There are also other applications at this layer. For example,
  a user deploys a security app. This app invokes NBIs of the controller, such as Block
  (Source IP,DestIP), regardless of the device locations. Then the controller delivers
  different instructions to network devices based on different southbound protocols.
• Control layer: The SDN controller is deployed at this layer and is the core of the SDN
  network architecture. The control layer is the brain of the SDN system and implements
  network service orchestration.
• Infrastructure layer: A network device receives instructions from the controller and
  performs data forwarding.
• NBI: NBIs, mainly RESTful APIs, are used by the controller to interconnect with the
  application layer.
• SBI: SBIs are used by the controller to interact with devices through protocols such as
  NETCONF, SNMP, OpenFlow, and OVSDB.
• Cloud platform: resource management platform in a cloud DC. The cloud platform
  manages network, compute, and storage resources. OpenStack is the most mainstream
  open-source cloud platform.
• MTOSI or CORBA is used to interconnect with the BSS or OSS. Kafka or SFTP can be
  used to connect to a big data platform.
• iMaster NCE effectively connects physical networks with business intents and
  implements centralized management, control, and analysis of global networks. It
  enables resource cloudification, full lifecycle automation, and data analytics-driven
  intelligent closed-loop management according to business and service intents and
  provides open network APIs for rapid integration with IT systems.
• Huawei iMaster NCE can be used in the enterprise data center network (DCN),
  enterprise campus, and enterprise branch interconnection (SD-WAN) scenarios to
  make enterprise networks simpler, smarter, open, and secure, accelerating enterprise
  service transformation and innovation.
• The operation support system (OSS) is a necessary support platform for telecom
  services.
• Stateless request: The server processes each request based only on the information
  carried in that request.
• An API is a set of predefined functions or methods for connecting different
  components of a software system. It is a set of routines that can be accessed by
  applications and developers based on software or hardware without having to access
  the source code or understand the details of the internal working mechanism. For
  example, if a computer needs to invoke information in a mobile phone, we simply
  need to connect the computer and the mobile phone by using a data cable. In this
  example, the interfaces on the computer and the mobile phone at both ends of the
  data cable serve as the APIs.
• Rendering refers to the process of transforming views such as HTML into visual images
  that can be seen by human eyes.
• For web applications of early days, a view is a graphical user interface (GUI) composed
  of HTML elements. For today's web applications, the GUI incorporates new elements
  such as Adobe Flash, XHTML, XML/XSL, and WML.
     ▫ The view is responsible for processing data display. Generally, a view is created
       based on model data.
• Separation between the frontend and backend has become an industry standard for
  the Internet projects in the industry. It lays a solid foundation for the large-scale
  distributed architecture, elastic computing architecture, microservice architecture, and
  multi-terminal services (such as browsers, vehicle-mounted terminals, Android, and
  iOS). The key to separation between the frontend and backend is that the frontend
  page invokes the RESTful API of the backend for data interaction.
• Abstract of Roy's doctoral dissertation Architectural Styles and the Design of Network-
  based Software Architectures:
     ▫ The URL is a subset of the URI. The former must be an absolute path, while the
       latter can be an absolute path or a relative path. For example,
       http://127.0.0.1:8080/AppName/rest/product/1 is a URL, and
       AppName/rest/product/1 is a URI.
• As mentioned earlier, REST makes full use of and heavily relies on HTTP. Next, we
  will move on to HTTP.
• SPeeDY (SPDY) is a TCP-based application-layer protocol developed by Google. Its
  objective is to optimize HTTP performance, shortening web page load time and
  improving security through technologies such as compression, multiplexing, and
  prioritization. The core idea of SPDY is to minimize the number of TCP connections.
  SPDY is an enhancement to HTTP, not a protocol intended to replace HTTP.
• When a TCP connection is released, if the value of the Connection field in the packet
  header is close, the server proactively closes the TCP connection, and the client
  passively closes and releases the TCP connection. If the value of Connection is
  keep-alive, the connection is kept open for a period of time and can continue to
  receive requests.
• The browser differentiates the displayed content such as HTML, XML, GIF, and flash
  based on MIME-type.
• Advantages of the connectionless feature: This mode saves the transmission time and
  improves the concurrent performance. No persistent connection is established. Instead,
  one response is made to each request. However, if a connection is repeatedly
  established and torn down, the efficiency is affected. In HTTP/1.1, a TCP connection is
  maintained between the browser and the server for a period of time and will not be
  disconnected immediately after a request ends.
• Stateless means that, if the processing of subsequent packets requires the previously
  exchanged information, the information must be retransmitted. Although HTTP/1.1 is
  a stateless protocol, cookies are introduced to implement the function of maintaining
  status information.
• A cookie is a text file stored on a client. This file is associated with a specific web page
  and saves the information about the web page accessed by the client.
• HTTP/1.1 has been widely used since it was proposed in 1999 and has become a
  mainstream standard for more than 20 years. In the following part, we will introduce
  HTTP packets, which are based on HTTP/1.1.
• In HTTP 1.0, each connection involves only one request and response and is closed
  after the request is processed. HTTP 1.0 does not have the Host field. In HTTP 1.1,
  multiple requests and responses can be transmitted in the same connection, and
  multiple requests can be processed concurrently.
• Header compression: The HPACK algorithm is used to compress headers to reduce the
  header size and improve performance.
• Multiplexing: A request message can be divided into frames, which are sent in
  sequence and are reassembled at the other end. In HTTP/1.1, when a client sends
  multiple requests through a TCP connection, the server can only respond to the
  requests in sequence. Subsequent requests may be blocked.
• Resource pushing: In addition to responding to client requests, the server can push
  additional resources to clients.
• Priority: HTTP/2 defines complex priority rules. A browser can request multiple
  resources at a time and specify priorities to help the server determine how to process
  these resources, avoiding resource competition.
• In this case, two objects are involved in the networking: CloudIDE and iMaster NCE.
• Platform layer: It provides four types of APIs and supports industry-standard network
  interconnection protocols.
• Network layer: It provides various open interfaces, such as NETCONF, YANG, and
  Telemetry, improving device manageability. APs are compatible with third-party IoT
  cards to implement IoT.
• Terminal layer: It supports access of IoT terminals (such as ZigBee, RFID, and BLE), and
  access of wired and wireless terminals (such as mobile phones, IP phones, tablets, and
  cameras).
• The tenant or MSP wants to use an existing or third-party authentication platform to
  authenticate user identities and authorize users for network access authentication
  through the web page (authentication portal). For example, an MSP provides a unified
  access authentication page for tenants.
• To access the Internet, a user connects to the SSID of a Wi-Fi network and logs in to
  the portal pushed by a developer app. The developer app calls the authorization API of
  Huawei iMaster NCE-Campus to deliver the user's Wi-Fi access permission to the AP.
  The user then can access the Internet.
• Huawei Agile Cloud Authentication (HACA) is based on the mobile Internet protocol
  HTTP/2.
• For more information about API-based authentication and authorization, visit
  https://devzone.huawei.com/cn/enterprise/campus/apiSolution.html.
• For details about RADIUS-based authentication, visit
  https://devzone.huawei.com/cn/enterprise/campus/radiusSolution.html.
• Location-based service (LBS) uses various locating technologies to obtain the current
  locations of devices and pushes information and basic services to these devices
  through the mobile Internet.
• iMaster NCE-Campus aggregates the terminal location data collected by cloud APs and
  periodically sends the data to the third-party LBS platform. After parsing and analyzing
  the location data with a series of algorithms, the LBS platform provides VASs, such as
  heatmap, tracking, and customer flow analysis, for customers.
• Remarks: Partners need to meet related standards based on application scenarios, such
  as EU General Data Protection Regulation (GDPR).
• iMaster NCE-Campus can directly report terminal location data to a third-party LBS
  platform. In this solution, iMaster NCE-Campus functions as a relay agent.
• For details about this process, see "Wi-Fi Terminal Location Practice in Huawei
  CloudCampus Solution" in the HCIP-Datacom-NCE Northbound Openness Lab Guide.
• The validator value is in UUID format and is generated by iMaster NCE-Campus.
• In the 5G era, everyone predicts that 5G will lead to new businesses and services.
  However, in the traditional model, carriers raise requirements for new services and
  device vendors implement them, so the rollout period is half a year or even several
  years. It takes only a few months for OTT providers to launch new services, which
  makes it impossible for carriers to compete with OTT providers on equal terms.
  There are many reasons
  for slow service rollout. One of the reasons is that there is a gap between carriers and
  vendors. That is, carriers do not understand devices, and vendors do not understand
  carrier services. It is an urgent issue to eliminate the impact of this gap and enable
  carriers and vendors to play their roles in the fields they are familiar with and quickly
  provision new services.
• Finally, the products provided by vendors are universal, that is, they are applicable to
  most operators. Carriers want systems to match their service requirements and
  enterprise cultures. Therefore, they have customization requirements. For example, a
  carrier writes the customization capability into its bidding document or customizes
  enterprise specifications. From the perspective of vendors, customization requirements
  of customers generate high costs. Therefore, the best solution is to provide the
  customization capability and let customers complete customization by themselves.
• On traditional networks, network automation refers to the process of generating
  command line scripts based on the template mechanism and enabling devices to run
  the received command line scripts through the network management protocol. It does
  not change the way it interacts with network devices. During device adaptation,
  network management engineers use Python or Perl to compile a specific function with
  a narrow application scope to implement a series of automatic operations, or use
  automation tools such as Ansible and Puppet to implement more complex automation
  tasks. Network management engineers need to adapt to network devices to be
  supported one by one, regardless of whether they write scripts or use automation tools.
  As the script scale becomes larger and larger, script maintainability decreases
  continuously, and the time required for adding a new version increases accordingly.
  With the advent of the Internet of Everything (IoE) era, the time to market (TTM) of
  new services has become a core indicator for enterprises to survive.
• With the great success of the commercialization of cloud computing, the concept of
  software-defined networking (SDN, sometimes referred to as “software-driven
  network”), which was popular only in the academic circle, has begun to flourish. On an
  SDN network, the separation between the control and forwarding planes is highly
  recommended. In an ideal SDN network, a centralized controller becomes an
  indispensable basis. As the brain of the entire network, it collects information about
  the network topology, calculates an optimal path globally based on service
  requirements, and notifies devices along the path. When receiving a service packet,
  these devices forward the packet according to a path determined by the centralized
  controller.
• iMaster NCE is an innovative network cloudification engine developed by Huawei.
  Positioned as the brain of future cloud-based networks, NCE integrates functions such
  as network management, service control, and network analysis. It is the core
  enablement system for network resource pooling, network connection automation, and
  O&M automation. NCE aims to build an intent-driven network (IDN) that is first
  automated, then self-adapting, and finally autonomous.
• The overall openness and programmability of NCE include automation, analytics, and
  intent. The goal is to build a full-lifecycle open and programmable architecture to
  satisfy customer needs. The OPS, as a part of the automation engine, is crucial for
  the entire open programming system of NCE to form a closed loop. Equivalent to the
  limbs of the human body, the OPS is an executor, which needs to be flexible to support
  the automatic closed-loop capability driven by the brain of an intent-driven network.
• The open architectures of different industries are similar. Similar to the operating
  system on a computer, NCE service openness and programmability are crucial to
  networks.
• To connect the operating system to managed hardware, such as the mouse and
  keyboard, you need to install corresponding drivers. The drivers enable the operating
  system to recognize the hardware. NCE service openness and programmability have
  similar functions. The difference is that switches and routers are managed in the
  datacom industry. First, we need to understand and manage these switches and
  routers. That is, load device drivers first, and then add and understand the specific
  capabilities of the devices.
• At the top layer, the operating system provides program management to manage
  various applications, such as Word and Excel. Note that the mouse and keyboard
  capabilities are required for using these programs. NCE service openness and
  programmability implement service management at the top layer, that is, building
  network service capabilities based on application scenarios. In addition, NCE provides
  capabilities such as rollback upon a transaction failure and automatic detection of
  device configuration changes to improve O&M.
• NCE service openness and programmability depend on two software packages: SND
  and SSP.
     ▫ Specific NE Driver (SND): provides a data model for the iMaster NCE OPS to
       interact with NEs.
     ▫ Specific Service Plugin (SSP): defines a data model for completing network
       service configuration.
• Engineers compile SND packages and load them to iMaster NCE to quickly
  interconnect with new devices. Then, engineers compile SSP packages and load them
  to iMaster NCE to quickly construct new services.
• NE YANG model: YANG files generated by abstracting atomic capabilities (such as
  creating sub-interfaces) at the device layer. They are provided by device vendors.
• Service YANG model: YANG files generated by abstracting service models. They can
  be used to generate northbound interfaces and configuration GUIs.
• Easymap: a mapping logic algorithm that decomposes network-layer services into NE-
  layer services.
• The design state is used to establish the mapping between the service YANG model
  and NE YANG model. The system provides the mapping logic algorithm to decompose
  network-layer services into NE-layer services. Currently, the NCE service openness and
  programmability framework supports two layers of mapping logic: 1. Mapping from
  the service model to the device model, which is processed by the SSP package. 2.
  Mapping from the device model to protocol packets, which is processed by the SND
  package.
• The running state uses the mappings established in the design state to manage devices
  and provision services. Specifically:
• The running state provides the dryRun function to help users preview the results of the
  current operation and the modification of related device configurations.
• Jinja2 is a Python template engine. NCE service openness and programmability use
  Jinja2 to quickly complete the template-based processing of SSP packages.
• The development process of NCE service openness and programmability is as follows:
     ▫ First, analyze requirements based on service scenarios and output the high level
       design (HLD). In this phase, analyze the configuration commands to be delivered
       and the involved device types, and then start the development of a Specific NE
       Driver (SND) package. The SND package is developed as required. If the SND
       package of a device exists and the SND package to be delivered is supported, you
       do not need to develop the SND package again.
     ▫ Then, develop a Specific Service Plugin (SSP) package. Step 1: Develop the
       southbound Jinja2 template. The southbound Jinja2 template can be considered
       a tailored subset of the device's open interfaces: a device exposes many open
       capabilities, but only some of them are needed, so find and select the required
       ones. Step 2: Define the service YANG model and determine the northbound
       input parameters. Step 3: Develop the service logic. This step is optional. If the
       service layer can directly map to and use the southbound template, skip this
       step.
• For SND package processing, if the device is a NETCONF device, NCE service openness
  and programmability automatically convert the model data into NETCONF packets.
• For more information about NETCONF, see NETCONF/YANG Principles and Practices.
• In this example, the service YANG module hbng is customized.
• import and include introduce two modules for subsequent node definition.
• augment "/app:applications" { ... } indicates that the current module hbng is extended
  to the /app:applications directory of the app module.
• In this example, a container node named system is created, including the login
  container sub-node for recording login information.
     ▫ A leaf node named message, which records the login prompt information.
     ▫ A list node named user. In the list node, the unique key is defined as name and
       its type is character string; level is defined as user level and its type is number.
• In this example, the list interface is defined. config true indicates that the list is
  configuration data, and config false in observed-speed indicates that this leaf is status
  data.
• The leaf node name is a character string. The leaf node speed provides three options.
  type enumeration indicates that the enumerated values are 10m, 100m, and auto.
  The leaf node observed-speed is a positive integer of the uint32 type.
• In this example, a group node named ip-port is defined, including two leaf sub-nodes:
  ip and port.
• The container quadruple contains the source and destination information containers,
  both of which use the IP address and port information. The group node ip-port is
  reused.
• The container transfer-protocol is used to indicate the transmission protocol. The UDP
  and TCP protocols are provided. Either of them can be selected using the choice
  function. case a indicates that the UDP protocol is used, and case b indicates that the
  TCP protocol is used.
• In this example, an RPC interface named reset-specified-servers is defined for
  resetting services. input indicates that the input parameter is the IP address of the
  server to be restarted. If output is not defined, the HTTP status is used to determine
  the returned result.
• The servers list node defines action reset to restart the corresponding service. Input
  defines the leaf node reset-at, which indicates that the input parameter is the restart
  time. Output defines the leaf node complete-at, which indicates that the returned
  result is the restart completion time.
• A Jinja2 template is just a text file, which can be based on any text format (HTML,
  XML, CSV, etc.). In this example, the XML format is used.
• A template contains variables and expressions, which are replaced with the
  corresponding values when the template is rendered. Common syntax elements are
  as follows:
• The variables in {{...}} can be modified using filters. Filters and variables are separated
  by vertical bars (|). For example, {{ 'abc' | capitalize }} indicates that the first letter is
  capitalized and the filtering result is Abc. In this example, {{dev.neName | to_ne_id}}:
  to_ne_id is a user-defined filter, indicating that the variable device name dev.neName
  is converted to the device ID.
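• The following Python sketch shows a Jinja2 variable, a built-in filter, and a custom
  filter registered in the same spirit as to_ne_id; the name-to-ID mapping is invented
  purely for illustration.

    # Hedged sketch: render a small Jinja2 template with a custom filter.
    from jinja2 import Environment

    env = Environment()
    NE_IDS = {"PE1": "ne-1001", "PE2": "ne-1002"}          # hypothetical mapping
    env.filters["to_ne_id"] = lambda name: NE_IDS.get(name, "unknown")

    template = env.from_string(
        '<ne id="{{ dev.neName | to_ne_id }}">{{ "abc" | capitalize }}</ne>'
    )
    print(template.render(dev={"neName": "PE1"}))
    # Prints: <ne id="ne-1001">Abc</ne>
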
• Key capabilities:
• The service layer shields differences between devices, supports interconnection with
  different device types, and delivers configurations through different protocols. The
  maintenance personnel or upper-layer system only needs to view corresponding
  services. They do not need to know the specific vendor and protocol of the device. This
  feature improves interconnection efficiency and reduces the pressure on maintenance
  personnel.
• Key capabilities:
     1. Use the dryRun function to check whether the delivered configurations are
        correct in advance.
2. ACD