Switching
When you configure a switch, the switch needs to use that configuration immediately. It also needs to be able to retain
the configuration in case the switch loses power.
Cisco switches contain random access memory (RAM) to store data while Cisco IOS is using it,
but RAM loses its contents when the switch loses power or is reloaded.
To store information that must be retained when the switch loses power or is reloaded, Cisco
switches use several types of more permanent memory, none of which has any moving parts. By
avoiding components with moving parts (such as traditional disk drives), switches can maintain
better uptime and availability.
The following list details the four main types of memory found in Cisco switches, as well as the
most common use of each type:
• RAM: Sometimes called DRAM, for dynamic random-access memory, RAM is used by
the switch just as it is used by any other computer: for working storage. The running
(active) configuration file is stored here.
• Flash memory: Either a chip inside the switch or a removable memory card, flash memory
stores fully functional Cisco IOS images and is the default location where the switch gets
its Cisco IOS at boot time. Flash memory also can be used to store any other files,
including backup copies of configuration files.
• ROM: Read-only memory (ROM) stores a bootstrap (or boothelper) program that is
loaded when the switch first powers on. This bootstrap program then finds the full Cisco
IOS image and manages the process of loading Cisco IOS into RAM, at which point
Cisco IOS takes over operation of the switch.
• NVRAM: Nonvolatile RAM (NVRAM) stores the initial or startup configuration file
that is used when the switch is first powered on and when the switch is reloaded.
MAC address learning is the process that allows Ethernet switches to build and maintain a table of MAC addresses
and their associated ports. This process ensures that the switch can efficiently forward Ethernet
frames to the correct destination port without unnecessary flooding, optimizing the
performance of the network.
When a switch receives an Ethernet frame on one of its ports, it examines the source MAC
address of the frame.
• The switch records the source MAC address of the frame along with the port on which
the frame was received.
• The switch creates or updates an entry in its MAC address table (also known as the
forwarding table or content addressable memory (CAM) table).
Both pieces of information (the source MAC address and the port through which the frame was received) are
stored in the switch's MAC address table, which is used to map MAC addresses to specific switch ports.
Over time, the switch learns which devices (with which MAC addresses) are connected to which ports.
Note: A switch port is a physical or logical interface on a network switch where a device
(like a computer, router, or another switch) connects.
The MAC address table (or CAM table) is a hardware-based memory table that enables a switch
to perform Layer 2 forwarding based on learned source MAC address, VLAN ID, and interface
bindings. It’s dynamically built by inspecting ingress frames and is subject to aging, static
provisioning, and port security enforcement.
1. The switch learns the source MAC and associates it with the port and VLAN.
2. When sending traffic, the switch checks the destination MAC in the table to know
where to forward the frame.
What if we want to remove MAC addresses before they age out? We have a couple of options. We can
do it manually:
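For example, using Cisco IOS (the address, interface, and VLAN values are illustrative):
SW1#clear mac address-table dynamic
SW1#clear mac address-table dynamic address aaaa.bbbb.cccc
SW1#clear mac address-table dynamic interface GigabitEthernet0/1
SW1#clear mac address-table dynamic vlan 10
The first form clears every dynamic entry; the other forms clear only the entries matching the given address, interface, or VLAN.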
Another option is to change the aging time. This can be done as follows:
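A sketch of the aging-time commands (600 seconds and VLAN 10 are just example values; the default is 300 seconds):
SW1(config)#mac address-table aging-time 600
SW1(config)#mac address-table aging-time 600 vlan 10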
Leaving dynamic MAC addresses behind for a moment, we can set static MAC addresses as
follows:
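For example (the MAC address, VLAN, and interface are illustrative):
SW1(config)#mac address-table static aaaa.bbbb.cccc vlan 10 interface GigabitEthernet0/1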
We can also configure the switch to drop any frames to a particular MAC address:
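A sketch of a static drop entry (the address and VLAN are illustrative):
SW1(config)#mac address-table static aaaa.bbbb.cccc vlan 10 drop
Any frame destined for that MAC address in VLAN 10 is silently discarded.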
What is errdisable?
errdisable (Error Disable) is a switch protection mechanism that automatically shuts down a
port when certain types of errors or violations are detected — it puts the port into an err-
disabled state.
First you must choose something to recover from. I’ll choose a port security violation:
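A sketch of enabling automatic recovery for that cause (psecure-violation is the keyword for port security violations):
SW1(config)#errdisable recovery cause psecure-violation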
SW1(config)#errdisable recovery interval ?
<30-86400> timer-interval(sec)
This is the time to recover from an errdisabled state, in seconds. The default is 300 seconds.
Note that we can also manually re-enable an interface/VLAN that has been errdisabled:
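A minimal sketch (the interface name is illustrative); bouncing the port clears the err-disabled state:
SW1(config)#interface GigabitEthernet0/1
SW1(config-if)#shutdown
SW1(config-if)#no shutdown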
L2 MTU:
L2 MTU defines the maximum Layer 2 frame size supported by a switch interface. It’s crucial
when implementing technologies like MPLS, Q-in-Q, or VXLAN, where additional
encapsulation overhead exceeds the standard 1518-byte Ethernet frame. Improper L2 MTU
design leads to silent frame drops, especially with transit switches that don’t fragment Layer 2
frames.
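A sketch of raising the L2 MTU to make room for the extra encapsulation (the exact command and maximum value vary by platform; 9216 and 9198 below are assumed examples):
SW1(config)#interface TenGigabitEthernet1/0/1
SW1(config-if)#mtu 9216
! On platforms that only support a global setting:
SW1(config)#system mtu 9198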
A collision domain is a part of a network where data can crash into each other when two or
more devices try to send information at the same time.
Imagine people in one room shouting at the same time. No one can understand anything — this
is like a collision in a network.
If only one person speaks at a time, everyone understands — that's how data should flow
without collision.
Switches are more intelligent than hubs; they are Layer 2 aware.
→ They use information in the Layer 2 header to decide where to send frames.
Additionally, switches have the ability to buffer frames before sending them.
→ If a switch receives two broadcast frames at the same time, it will not flood both simultaneously.
→ One frame will be buffered and transmitted only after the other one has been sent.
The switch can temporarily store data (frames) in its memory when it can’t send it immediately.
A buffered frame means the switch is holding the data for a moment, like putting it in a queue,
before sending it out.
Broadcast Domains:
• A Broadcast Domain is a logical division of a network in which all nodes can reach each
other by Layer 2 broadcast.
→ It is a group of devices that will receive a broadcast frame sent by any one of the other
devices.
• All devices connected to a switch are in the same broadcast domain; switches flood
broadcast frames.
→ VLANs can be used to divide up broadcast domains on a switch.
• Each router interface is a unique broadcast domain; routers do not forward Layer 2
broadcast messages.
Layer 3 provides end-to-end addressing from the source host to the destination host, while Layer 2 provides hop-to-hop addressing:
• The Layer 2 (MAC) address is the physical address of each NIC (Network Interface Card), assigned by the
manufacturer.
The Layer 3 packet is destined for the end host, and Layer 2 addressing is used to pass the
packet to the next hop in the path to the end host.
ARP:
ARP is the bridge between Layer 2 (Data Link Layer) and Layer 3 (Network Layer) in the OSI
model. Its main function is to map a known Layer 3 address (IP address) to an unknown Layer 2
address (MAC address).
How It Works:
• When a device (say, a PC) wants to send data to another device, it checks if it knows the
MAC address associated with the next hop IP address.
Note: The ARP is used to find the MAC address of the next hop, not necessarily the
destination IP of the packet. This is important when a router is involved.
• Once learned, the MAC-to-IP mapping is stored in the device's ARP cache.
• Entries may time out after a few minutes and be refreshed when needed.
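On a Cisco device, the ARP cache can be inspected and cleared as follows (the hostname is illustrative):
R1#show ip arp
R1#clear arp-cache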
Imagine you want to browse a website. Your PC needs to send the packet to your default
gateway (router). ARP helps your PC learn the MAC address of the gateway so it can send the
Ethernet frame to it—even though the IP address is for a web server somewhere on the internet.
ARP Process:
ARP Packet Capture:
An Incomplete ARP entry appears in the ARP table when a device sends an ARP Request to
resolve an IP address to a MAC address, but has not yet received an ARP Reply. The MAC
address field remains "Incomplete" until the reply is received.
2. Firewall or ACL Blocking ARP Replies – ARP replies are filtered or dropped.
Proxy ARP:
Proxy ARP is a technique in which one device (usually a router) answers ARP requests on
behalf of another device. This allows devices on different IP networks to communicate as if they
were on the same local subnet, even when they are not.
The responding router “pretends” to be the destination by replying with its own MAC address,
effectively proxying for the real target.
1. Host A sends an ARP request for Host B's IP address (assuming they’re on the same
subnet).
2. The router receives the ARP request and recognizes that the destination IP is reachable
through another interface.
3. The router replies to Host A with its own MAC address, proxying for Host B.
4. Host A sends the packet to the router, which then forwards it to Host B.
Proxy ARP – Mismatched Subnet Mask:
• In the below network, PC1 believes that PC1, PC2, PC3, and PC4 are all in the same
subnet (192.168.0.0/16).
• When PC1 tries to communicate with PC3 (or PC4), it will send an ARP Request
directly to the IP address of PC3 (not the default gateway, R1).
• The ARP Request will not reach PC3 because R1 will not forward the broadcast message.
• With Proxy ARP, R1 will think: → I received an ARP Request for 192.168.1.13 on my G0/0
interface, even though the 192.168.1.0/24 subnet is not connected to G0/0 and is in a
different subnet than the source.
→ 192.168.1.13 is not my IP address, but I do have a route for 192.168.1.0/24 in my routing
table.
→ So, I will reply to PC1’s ARP Request on behalf of 192.168.1.13, using my MAC address.
→ When PC1 sends packets destined for 192.168.1.13 to my MAC address, I will forward
them to PC3.
What is MTU?
• MTU (Maximum Transmission Unit) determines the maximum packet size that can be
sent/received by an interface.
• Jumbo frames:
→ Larger than the default 1500 bytes, typically defined as up to 9000 or 9216 bytes.
Eth MTU:
• The Ethernet MTU specifies the maximum payload size of frames sent/received by an
interface.
→ This includes L2 and L3 interfaces, since both send Ethernet frames.
→ MTU is checked at both ingress (receiving) and egress (sending).
• If a frame’s payload (L3 header + L4 header + Data) is larger than the interface’s MTU, it
will be dropped.
→ Layer 2 doesn’t offer any fragmentation capabilities.
IP MTU:
• The IP MTU specifies the maximum size of an IP packet before it needs fragmentation
(default 1500 bytes).
→ If the DF-bit is not set, packets larger than the IP MTU are fragmented.
→ If the DF-bit is set, packets larger than the IP MTU are dropped.
• IP MTU only applies to Layer 3 ports; Layer 2 ports (on a switch) are not L3-aware.
• Identification:
Identifies the original packet the fragment is a part of.
• Flags:
Includes the DF (Don't Fragment) bit and the MF (More Fragments) bit, which indicates whether more fragments follow.
• Fragment Offset:
Identifies the position of the fragment within the original packet.
• The IP MTU cannot be greater than the Ethernet MTU.
→ If L3 tries to send a 1600 byte packet but the Ethernet MTU is 1500 bytes, the packet
would be dropped.
• If you increase the Ethernet MTU of an interface, the IP MTU is automatically increased
to match it.
• TCP MSS (Maximum Segment Size) defines the maximum amount of data (in bytes)
that a device can receive in a single TCP segment, excluding TCP and IP headers.
• The typical default MSS value is 1460 bytes, assuming a standard Ethernet MTU of 1500
bytes: 1500 - 20 (IP header) - 20 (TCP header) = 1460.
• If packets exceed the MSS and Path MTU Discovery (PMTUD) fails or is disabled,
fragmentation or drops can occur.
• Adjusting MSS can help avoid fragmentation in networks with tunnels, VPNs, or lower
MTU links.
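A sketch of clamping the MSS on a tunnel- or WAN-facing interface (the 1360-byte value is an assumed example sized for extra tunnel overhead):
R1(config)#interface GigabitEthernet0/0
R1(config-if)#ip tcp adjust-mss 1360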
Layer 2 Forwarding Decisions:
• Layer 2 forwarding decisions involve looking for an exact match in the MAC address
table.
Steps:
1. A frame arrives on an ingress port.
2. The switch records or refreshes the source MAC address in its MAC address table.
3. The switch looks for an exact match of the destination MAC address (within the VLAN) in the table.
4. If a match is found, the frame is forwarded out only the matching port; if not, the frame is flooded out all other ports in the VLAN.
Layer 3 Forwarding:
Layer 3 forwarding happens when a packet needs to travel between different IP subnets or
networks. It’s handled by routers or Layer 3 switches. The decision is based on the destination
IP address of the packet.
1. Packet Received:
o A Layer 3 device (like a router or Layer 3 switch) receives a packet on one of its
interfaces.
2. Destination IP Checked:
o The device checks the destination IP address in the packet’s Layer 3 (IP) header.
3. Route Lookup:
o The device checks the routing table (RIB) to find a match for the destination IP and determines the next hop and exit interface.
4. Next-Hop MAC Resolved:
o If the next-hop IP is on a directly connected subnet, the device uses ARP to find
the MAC address.
5. Layer 2 Header Rewritten:
▪ Source MAC = the exit interface's MAC address
▪ Destination MAC = Next-hop’s MAC address
6. Forwarding Out:
o The packet is sent out the selected interface toward the next-hop or destination.
Process Switching:
• Definition: In Process Switching, every incoming packet is handled by the router’s CPU.
The CPU makes the forwarding decision for each packet.
• How it works: Each packet is examined by the router's software, which looks up the
destination IP address in the routing table. Then, based on the lookup, it forwards the
packet.
• Performance: This method is less efficient because the CPU is involved in every packet's
forwarding decision, which can cause a significant performance hit, especially in high-
throughput environments.
• Use case: Process Switching is often used in older routers or in situations where the
router has a small number of packets to forward, such as in small networks or with
lower traffic loads.
Cisco Express Forwarding (CEF):
• Definition: CEF makes forwarding decisions using precomputed tables, the FIB and the adjacency table, instead of punting every packet to the CPU.
• How it works:
o FIB: A table that stores precomputed routes, optimized for fast lookups.
o Adjacency Table: Stores information about Layer 2 forwarding decisions (like the
MAC address) to speed up the process.
o When a packet arrives, CEF looks up the destination in the FIB and forwards the
packet based on the result, all without involving the CPU for each packet.
• Performance: CEF is far more efficient than Process Switching because the forwarding
decision is made using pre-built tables, not by the CPU for every packet. This results in
faster packet processing, particularly under heavy traffic conditions.
• Use case: CEF is the default packet-forwarding method on most modern Cisco routers
and is used in high-speed, large-scale networks.
Route Processor (RP) Redundancy:
• If the Active RP fails, the Standby RP will reboot the router and become the new Active.
2. RPR (Route Processor Redundancy):
• Standby RP is partially initialized during normal operation. IOS image is booted (cold
boot).
• L2 protocols (STP, LACP, PAgP, VTP) and L3 protocols (OSPF, EIGRP) are not
initialized on the Standby RP.
3. RPR+
• Similar to RPR, the Standby RP is partially initialized, and the IOS image is booted.
Stateful Switchover (SSO):
• The Standby RP is fully initialized, and state is synchronized (checkpointed) from the Active RP, including:
o Interface states
Checkpointing:
• The Active RP continuously copies state information to the Standby RP so that the Standby can take over immediately on failover.
Protocol Behavior:
• Layer 2 protocols are initialized on the Standby RP, and their states are preserved during
failover.
o No traffic loss
o No interface flaps
o L3 forwarding is interrupted (routing protocols must reconverge; this is the gap that NSF addresses)
Failover Time:
• Failover time improves as more state is synchronized to the Standby RP: RPR is the slowest, RPR+ is faster, and SSO is the fastest.
Nonstop Forwarding (NSF):
• NSF further enhances SSO by allowing line cards to continue forwarding packets at Layer 3 while the
Route Processor (RP) is failing over.
FIB Synchronization:
• The FIB (Forwarding Information Base) is initially transferred to the Standby RP during
initialization.
It is then actively updated when changes occur. → NSF = checkpointing of the FIB from
Active RP to Standby RP.
o However, because routing protocol adjacencies are lost during the failover, neighboring devices may drop routes learned from this device.
NSF Characteristics:
• NSF is a local feature: it does not require cooperation from neighboring devices.
• GR allows peers of a device performing a Route Processor (RP) failover to maintain their
routes via that device, even if the routing protocol adjacency is lost.
o Peers will continue sending packets to the device, even though the adjacency is
down.
o The period during which neighbors keep forwarding traffic is called the grace
period.
GR + NSF:
NSF keeps the local device forwarding during the failover, while Graceful Restart (GR) keeps its neighbors forwarding traffic toward it; the two are typically used together.
GR Device Roles:
A restarting device (the one undergoing the RP failover) and its helper neighbors, which retain the routes and assist with re-synchronization after the restart.
Requirements:
GR must be supported and enabled on both the restarting device and its neighbors.
• GR and NSR both aim to ensure neighbors continue forwarding traffic via this device
during RP failover.
o A device can’t use both at the same time for the same neighbor.
These features don’t make up for poor design — ideally there should be redundant routers,
not just a router with redundant RPs.
Before understanding VLANs, you must first have a specific understanding of the definition of a
LAN. For example, from one perspective, a LAN includes all the user devices, servers, switches,
routers, cables, and wireless access points in one location.
A broadcast domain includes the set of all LAN-connected devices, so that when any of the
devices sends a broadcast frame, all the other devices get a copy of the frame. So, from one
perspective, you can think of a LAN and a broadcast domain as being basically the same thing.
Using only default settings, a switch considers all its interfaces to be in the same broadcast
domain. That is, when a broadcast frame enters one switch port, the switch
forwards that broadcast frame out all other ports. With that logic, to create two different LAN
broadcast domains, you had to buy two different Ethernet LAN switches.
By using two VLANs, a single switch can accomplish the same goal of creating two broadcast
domains. With VLANs, a switch can configure some interfaces into one broadcast domain and
some into another, creating multiple broadcast domains. These individual broadcast domains
created by the switch are called virtual LANs (VLANs).
For example, a broadcast sent by one host in a VLAN will be received and processed by all the
other hosts in the VLAN—but not by hosts in a different VLAN. Limiting the number of hosts
that receive a single broadcast frame reduces the number of hosts that waste effort processing
unneeded broadcasts. It also reduces security risks because fewer hosts see frames sent by any
one host.
The following list summarizes the most common reasons for choosing to create smaller
broadcast domains (VLANs):
To reduce CPU overhead on each device, improving host performance, by reducing the number
of devices that receive each broadcast frame
To reduce security risks by reducing the number of hosts that receive copies of frames that the
switches flood (broadcasts, multicasts, and unknown unicasts)
To improve security for hosts through the application of different security policies per VLAN.
To create more flexible designs that group users by department, or by groups that work
together, instead of by physical location
To solve problems more quickly, because the failure domain for many problems is the same set of
devices as those in the same broadcast domain
To reduce the workload for the Spanning Tree Protocol (STP) by limiting a VLAN to a single
access switch.
Key Definition:
Virtual LANs (VLANs) are used to segment a LAN into multiple virtual LANs (broadcast
domains).
• Without configuring VLANs, all hosts connected to the same LAN are in the same
broadcast domain. → All hosts are in the default VLAN: VLAN 1.
• Even if the network is segmented at Layer 3, if VLANs aren’t used to segment it at Layer
2, broadcast and unknown unicast frames will be flooded to all hosts.
→ Usually, 1 VLAN = 1 subnet.
• By configuring VLANs, the switch is split into multiple virtual switches.
→ The switch will not forward/flood a frame to interfaces in a different VLAN than the
one it was received on.
Stretched VLANs:
For example, VLANs 12, 34, and 56 might be stretched across three sites, connected via
trunks that allow VLANs 12, 34, and 56.
This is generally not preferred in modern networks; we want to minimize the size of broadcast
domains.
Local VLANs:
Local VLANs are localized to a site; connections between sites are layer 3, so VLANs do not pass
between them.
It doesn’t matter if the VLAN numbers are the same or different at each site. They are locally
significant to each site.
It may be desirable to use the same VLAN numbers at each site for consistency (for example,
Service = VLAN 56 at every site).
Multiple Subnets per VLAN:
Use Case:
• When a subnet (e.g., 10.0.1.0/24) runs out of IP addresses, instead of creating a new
VLAN, you can add a second subnet (e.g., 10.0.3.0/24) to the same VLAN by configuring a secondary IP address on the VLAN's SVI.
Important Routing Behavior:
• Although 10.0.1.0/24 and 10.0.3.0/24 are in the same VLAN (VLAN 10), they are different
IP subnets.
• For devices in different subnets to communicate, routing is required, even though they
are in the same Layer 2 VLAN.
o This is handled by the default gateway IPs configured on SW1's SVI (e.g., 10.0.1.1
and 10.0.3.1).
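A sketch of how the second subnet can be added to the existing SVI with a secondary address (addresses follow the example above):
SW1(config)#interface vlan 10
SW1(config-if)#ip address 10.0.1.1 255.255.255.0
SW1(config-if)#ip address 10.0.3.1 255.255.255.0 secondary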
• Use the shutdown command on a VLAN to disable it on the current switch only.
• This is a local shutdown — the VLAN remains active in the VTP domain, but is disabled
only on the local switch.
• Use the state suspend command to disable the VLAN across the entire VTP domain.
• The VLAN will not be deleted, but no switch in the VTP domain will forward traffic in
this VLAN.
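A sketch of both options in VLAN configuration mode (VLAN 10 is illustrative):
SW1(config)#vlan 10
SW1(config-vlan)#shutdown
! or, to suspend the VLAN across the entire VTP domain:
SW1(config-vlan)#state suspend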
Configuring VLANs on a single switch requires only a little effort: you simply configure each
port to tell it the VLAN number to which the port belongs. With multiple switches, you have to
consider additional concepts about how to forward traffic between the switches.
When you are using VLANs in networks that have multiple interconnected switches, the
switches need to use VLAN trunking on the links between the switches. VLAN trunking causes
the switches to use a process called VLAN tagging, by which the sending switch adds another
header to the frame before sending it over the trunk. This extra trunking header includes a
VLAN identifier (VLAN ID) field so that the sending switch can associate the frame with a
particular VLAN ID, and the receiving switch can then know in what VLAN each frame belongs.
Note: Trunking refers to the process of allowing traffic from multiple VLANs to travel over
a single physical connection (link) between network devices, such as switches or routers. It
is a key concept in VLAN configuration that enables devices in the same VLAN across
different switches to communicate.
Scenario 1:
VLANs exist on multiple switches, but trunking is not used.
VLAN trunking creates one link between switches that supports as many VLANs as you need.
As a VLAN trunk, the switches treat the link as if it were a part of all the VLANs. At the same
time, the trunk keeps the VLAN traffic separate, so frames in VLAN 10 would not go to devices
in VLAN 20, and vice versa, because each frame is identified by VLAN number as it crosses the
trunk.
The use of trunking allows switches to forward frames from multiple VLANs over a single
physical connection by adding a small header to the Ethernet frame.
When SW2 receives the frame, it understands that the frame is in VLAN 10. SW2 then removes
the VLAN header and forwards the original frame out its interfaces in VLAN 10.
Cisco has supported two different trunking protocols over the years: Inter-Switch Link (ISL)
and IEEE 802.1Q.
While both ISL and 802.1Q tag each frame with the VLAN ID, the details differ. 802.1Q inserts
an extra 4-byte 802.1Q VLAN header into the original frame’s Ethernet header.
This 12-bit field supports a theoretical maximum of 2^12 (4096) VLANs, but in practice it
supports a maximum of 4094. (Both 802.1Q and ISL use 12 bits to tag the VLAN ID, with two
reserved values [0 and 4095].)
Cisco switches break the range of VLAN IDs (1–4094) into two ranges: the normal range and the
extended range. All switches can use normal-range VLANs with values from 1 to 1005. Only some
switches can use extended-range VLANs with VLAN IDs from 1006 to 4094.
The rules for which switches can use extended-range VLANs depend on the configuration of the
VLAN Trunking Protocol (VTP).
802.1Q also defines one special VLAN ID on each trunk as the native VLAN (defaulting to use
VLAN 1). By definition, 802.1Q simply does not add an 802.1Q header to frames in the native
VLAN. When the switch on the other side of the trunk receives a frame that does not have an
802.1Q header, the receiving switch knows that the frame is part of the native VLAN. Note that
because of this behavior, both switches must agree on which VLAN is the native VLAN.
Note: Encapsulation Dot1Q is used in VLANs to logically segment and isolate network traffic,
enhancing security, performance, and manageability.
If you create a campus LAN that contains many VLANs, you typically still need all devices to be
able to send data to all other devices.
VLANs (Virtual Local Area Networks) are used to segment a network into smaller, isolated
domains to improve performance, security, and manageability. However, VLANs are isolated by
design, meaning devices in different VLANs cannot communicate with each other unless routing
is enabled.
When including VLANs in a campus LAN design, the devices in a VLAN need to be in the same
subnet. Following the same design logic, devices in different VLANs need to be in different
subnets.
To forward packets between VLANs, the network must use a device that acts as a router. This
can be a traditional router or a switch that also performs Layer 3 routing functions; such
switches go by the name multilayer switch or Layer 3 switch.
NOTE The term default VLAN refers to the default setting on the switchport access vlan vlan-id
command, and that default is VLAN ID 1. In other words, by default, each port is assigned to
access VLAN 1.
Creating VLANs and Assigning Access VLANs to an Interface:
Interface VLAN (also called Switched Virtual Interface - SVI) is a virtual Layer 3 interface on a
switch that is used to give IP connectivity to a specific VLAN.
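A minimal sketch that ties these pieces together (the interface name and IP address are assumptions; VLAN 3 matches the example discussed next):
SW1(config)#interface GigabitEthernet1/0/13
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 3
SW1(config-if)#exit
SW1(config)#interface vlan 3
SW1(config-if)#ip address 10.1.3.1 255.255.255.0
SW1(config-if)#no shutdown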
This example begins with SW1 not knowing about VLAN 3. With the addition of the
switchport access vlan 3 interface subcommand, the switch realizes that VLAN 3 does not exist
and (as the switch notes with a message) creates VLAN 3 automatically, using a default
name (VLAN0003).
VTP is a Cisco proprietary tool on Cisco switches that advertises each VLAN configured in one
switch (with the vlan number command) so that all the other switches in the campus learn
about that VLAN.
• Server: The server switch is responsible for the creation, modification, and deletion of
VLANs within the VTP domain.
• Client: The client switch receives VTP advertisements and modifies the VLANs on that
switch. VLANs cannot be configured locally on a VTP client.
• Transparent: VTP transparent switches receive and forward VTP advertisements but do
not modify the local VLAN database. VLANs are configured only locally.
• Off: A switch does not participate in VTP advertisements and does not forward them out
of any ports either. VLANs are configured only locally.
VTP Communication:
VTP advertises updates by using a multicast address across the trunk links for advertising
updates to all the switches in the VTP domain. There are three main types of advertisements:
• Summary: This advertisement occurs every 300 seconds or when a VLAN is added,
removed, or changed. It includes the VTP version, domain, configuration revision
number, and time stamp.
• Subset: This advertisement occurs after a VLAN configuration change occurs. It contains
all the relevant information for the switches to make changes to the VLANs on them.
• Client requests: This advertisement is a request by a client to receive the more detailed
subset advertisement. Typically, this occurs when a switch with a lower revision number
joins the VTP domain and observes a summary advertisement with a higher revision than
it has stored locally.
VTP Configuration:
• Step 1. Define the VTP version with the command vtp version {1 | 2 | 3}.
• Step 2. Define the VTP domain with the command vtp domain domain-name. Changing
the VTP domain resets the local switch’s configuration revision number to 0.
• Step 3. Define the VTP switch role with the command vtp mode { server | client |
transparent | none }.
• Step 4. (Optional) Secure the VTP domain with the command vtp password password.
(This step is optional but recommended because it helps prevent unauthorized switches
from joining the VTP domain.)
Commands:
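A sketch of the full sequence (the domain name and password are illustrative):
SW1(config)#vtp version 2
SW1(config)#vtp domain CCNP
SW1(config)#vtp mode server
SW1(config)#vtp password Cisco123
SW1#show vtp status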
Dynamic Trunking Protocol:
DTP (Dynamic Trunking Protocol) is a Cisco proprietary protocol used to dynamically negotiate
and establish trunk links between two switches. It automates the process of configuring trunk
ports, allowing them to carry traffic for multiple VLANs over a single link, and supports VLAN
tagging protocols such as 802.1Q or ISL.
DTP advertises itself every 30 seconds to neighbors so that they are kept aware of its status.
DTP requires that the VTP domain match between the two switches.
• Trunk: This mode statically places the switch port as a trunk and advertises DTP
packets to the other end to establish a dynamic trunk. Place a switch port in this mode
with the command switchport mode trunk.
• Dynamic desirable: In this mode, the switch port acts as an access port, but it listens for
and advertises DTP packets to the other end to establish a dynamic trunk. If it is
successful in negotiation, the port becomes a trunk port. Place a switch port in this mode
with the command switchport mode dynamic desirable.
• Dynamic auto: In this mode, the switch port acts as an access port, but it listens for DTP
packets. It responds to DTP packets and, upon successful negotiation, the port becomes
a trunk port. Place a switch port in this mode with the command switchport mode
dynamic auto.
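When negotiation is not wanted, DTP can be disabled on a statically configured port; a sketch (the interface is illustrative, and older platforms may also require switchport trunk encapsulation dot1q first):
SW1(config)#interface GigabitEthernet0/1
SW1(config-if)#switchport mode trunk
SW1(config-if)#switchport nonegotiate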
Internal VLANs:
• In some cases, Cisco switches create internal VLANs for their own use.
o Example: When you make a routed port with no switchport, an internal VLAN is
created.
• Internal VLANs are created from the extended VLAN range, starting from 1006 (by
default).
You can control this with: vlan internal allocation policy {ascending | descending}
Internal VLANs do not appear in the output of show vlan, but you still cannot use them for other
purposes (they can be viewed with show vlan internal usage).
Note:
“If necessary, you can shut down the routed port assigned to the internal VLAN, which frees up
the internal VLAN, and then create the extended-range VLAN and re-enable the port, which
then uses another VLAN as its internal VLAN.”
Access Ports: Access ports (also called untagged ports) are switch ports that belong to a single
VLAN and send/receive untagged traffic.
Best Practice:
• End hosts should connect to access ports.
• Exception: Servers running VMs – they should connect to trunk ports (each VM on the
server may use a different VLAN).
Trunk Ports:
Trunk ports (also known as tagged ports) are ports which carry traffic in multiple VLANs.
→ VLAN tags (802.1Q) are used to indicate which VLAN traffic belongs to.
• The native VLAN is an exception: traffic in the native VLAN is sent untagged, and if
untagged frames are received on a trunk port, they are assumed to be in the native VLAN.
→ Generally, it is best practice to change the native VLAN to an unused VLAN (see the
sketch after this list).
• The 802.1Q tag is placed between the Source and EtherType fields of a frame.
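A sketch of changing the native VLAN to an unused VLAN and limiting the allowed VLANs, as mentioned above (the VLAN numbers and interface are illustrative):
SW1(config)#interface GigabitEthernet0/1
SW1(config-if)#switchport trunk native vlan 999
SW1(config-if)#switchport trunk allowed vlan 10,20,30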
Dynamic Trunking Protocol (DTP):
DTP (Dynamic Trunking Protocol) is a Cisco proprietary protocol that allows switches to
automatically determine:
• Whether a link should operate as a trunk or as an access link
• Which trunking encapsulation (802.1Q or ISL) to use
Key Points:
• Trunking encapsulation can only be negotiated if the interface mode is also being
negotiated dynamically (with switchport mode dynamic desirable or dynamic auto), not manually set.
o If the port mode is manually configured as trunk, then the encapsulation must
also be manually configured.
o If you use switchport trunk encapsulation negotiate, you cannot use switchport
mode trunk.
You must also let it negotiate whether it should be a trunk or access port.
Do Not Use: switchport trunk encapsulation negotiate together with switchport mode trunk.
Note: In the second command, you have already forced trunk mode, so the switch effectively says:
you’ve already decided this port will be a trunk, so I’m not allowed to negotiate anything.
Why can’t you use switchport trunk encapsulation negotiate with switchport mode trunk?
Because:
• switchport mode trunk forces the port to trunk mode — there is no negotiation.
So when you force the trunk mode (switchport mode trunk), encapsulation must also be set
explicitly: switchport trunk encapsulation dot1q
DTP Negotiation: desirable + auto:
VLAN Trunking Protocol (VTP):
VTP (VLAN Trunking Protocol) is a Cisco proprietary protocol used to synchronize VLAN
information among switches within the same VTP domain.
VTP allows VLAN changes made on a VTP server to be synchronized to the other switches in the domain:
• Add VLANs
• Delete VLANs
• Rename VLANs
• VTP does NOT assign access ports to VLANs → it only synchronizes the VLAN database, and its
advertisements are carried over trunk links.
VTP Domain:
o To configure a domain: vtp domain domain-name
By default, Cisco Nexus switches do not participate in VTP; they operate in VTP transparent
mode.
• Transparent
• Meaning:
o They do not originate VTP advertisements or modify their VLAN database based on them.
o But they will forward VTP advertisements received on trunk links to other
switches.
VTP Modes:
Server Mode
• Can create, modify, and delete VLANs; changes are advertised to the rest of the VTP domain.
Client Mode
• Sends VTP advertisements, but only what it learned from the server.
• Does not store VLANs in NVRAM (loses them after reboot unless learned again).
Transparent Mode
• VLAN changes are local only (doesn’t affect or sync with others).
Off Mode
• Completely disables VTP; the switch does not process or forward VTP advertisements.
EtherChannel Bundle:
Ethernet network speeds are based on powers of 10 (10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps, 100
Gbps). When a link between switches becomes saturated, how can more bandwidth be added
to that link to prevent packet loss?
If both switches have available ports with faster throughput than the current link (for example,
10 Gbps versus 1 Gbps), then changing the link to higher-speed interfaces solves the bandwidth
contingency problem. However, in most cases, this is not feasible.
Ideally, it would be nice to plug in a second cable and double the bandwidth between the
switches. However, Spanning Tree Protocol (STP) will place one of the ports into a blocking
state to prevent forwarding loops.
Fortunately, the physical links can be aggregated into a logical link called an EtherChannel
bundle. The industry-based term for an EtherChannel bundle is EtherChannel (for short), or
port channel, which is defined in the IEEE 802.3AD link aggregation specification. The physical
interfaces that are used to assemble the logical EtherChannel are called member interfaces. STP
operates on a logical link and not on a physical link. The logical link would then have the
bandwidth of any active member interfaces, and it would be load balanced across all the links.
EtherChannels can be used for either Layer 2 (access or trunk) or Layer 3 (routed) forwarding.
Definition: Etherchannel groups multiple physical interfaces into a single logical interface.
Spanning Tree sees the EtherChannel as a single interface, so it does not block any ports. We
now get the full bandwidth.
Traffic is load balanced across all the links in the EtherChannel. If an interface goes down, its
traffic will fail over to the remaining links.
A primary advantage of using port channels is a reduction in topology changes when a member
link line protocol goes up or down. In a traditional model, a link status change may trigger a
Layer 2 STP tree calculation or a Layer 3 route calculation. A member link failure in an
EtherChannel does not impact those processes, as long as one active member still remains up.
EtherChannel Protocols:
Two common link aggregation protocols are Link Aggregation Control Protocol (LACP) and
Port Aggregation Protocol (PAgP). PAgP is Cisco proprietary and was developed first, and then
LACP was created as an open industry standard. All the member links must participate in the
same protocol on the local and remote switches.
PAgP advertises messages with the multicast MAC address 0100:0CCC:CCCC and the protocol
code 0x0104. PAgP can operate in two modes:
Auto: In this PAgP mode, the interface does not initiate an EtherChannel and
does not transmit PAgP packets out of it. If a PAgP packet is received from the remote switch,
this interface responds and can then establish a PAgP adjacency. If both devices are PAgP auto, a
PAgP adjacency does not form.
Desirable: In this PAgP mode, an interface tries to establish an EtherChannel and transmits PAgP
packets out of it. Interfaces in desirable mode can establish a PAgP adjacency only if the remote
interface is configured to auto or desirable.
LACP advertises messages with the multicast MAC address 0180:C200:0002. LACP can operate
in two modes:
Passive: In this LACP mode, an interface does not initiate an EtherChannel to be established
and does not transmit LACP packets out of it. If an LACP packet is received from the remote
switch, this interface responds and then can establish an LACP adjacency. If both devices are
LACP passive, an LACP adjacency does not form.
Active: In this LACP mode, an interface tries to establish an EtherChannel and transmits LACP
packets out of it. Active LACP interfaces can establish an LACP adjacency only if the remote
interface is configured to active or passive.
EtherChannel Parameters:
EtherChannel Configuration:
Note: By default, PAgP ports operate in silent mode, which allows a port to establish an
EtherChannel with a device that is not PAgP capable and rarely sends packets.
Layer 3 Etherchannel:
LACP Configuration:
SW1(config)#interface port-channel 1
SW1(config-if)#switchport mode trunk
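The member interfaces then have to be placed into the bundle; a sketch (the interface range and group number are illustrative):
SW1(config)#interface range GigabitEthernet0/1 - 2
SW1(config-if-range)#channel-group 1 mode active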
Configure matching settings on the other switch on the other side of the links:
PAgP Configuration:
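A sketch of the PAgP equivalent (interfaces and group number are illustrative):
SW1(config)#interface port-channel 2
SW1(config-if)#switchport mode trunk
SW1(config-if)#exit
SW1(config)#interface range GigabitEthernet0/3 - 4
SW1(config-if-range)#channel-group 2 mode desirable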
Configure matching settings on the switch on the other side of the links
LACP Basic:
o In L2 EtherChannel, the port-channel (logical port) gets its MAC address from
the first physical port in the channel that comes up.
Layer 3 EtherChannel:
• In Layer 3, the Port-Channel interface is treated like a routed interface (i.e., it has an IP
address).
• Since it's Layer 3, the port-channel interface (like any routed interface) needs a MAC
address to send/receive IP packets over Ethernet.
So what happens?
• The router or switch automatically assigns a MAC address to the Port-Channel (just like
it does for any routed interface).
• This MAC is not inherited from a physical interface (like in Layer 2 EtherChannel).
• Instead, the device allocates a virtual MAC address from its pool or generates one
internally when the interface is created.
• L2 Port-Channel: Gets MAC from the first active physical port (because it behaves like a
switchport).
• L3 Port-Channel: Behaves like a routed port, so the device assigns a new MAC (like it
does for any routed interface).
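A sketch of a Layer 3 port channel (interface names, group number, and address are illustrative):
SW1(config)#interface range GigabitEthernet0/1 - 2
SW1(config-if-range)#no switchport
SW1(config-if-range)#channel-group 10 mode active
SW1(config-if-range)#exit
SW1(config)#interface port-channel 10
SW1(config-if)#no switchport
SW1(config-if)#ip address 10.1.12.1 255.255.255.0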
LACP was originally defined in 802.3ad, but the most recent standard is in 802.1AX-2020.
• You can get 802.1AX-2020 (Link Aggregation) for free from IEEE Xplore (LACP is
section 6.4).
• A port in active mode actively sends LACP messages to negotiate an EtherChannel with
its neighbor.
• A port in passive mode only sends LACP messages after receiving messages from a
neighbor in active mode.
Messages (LACPDUs) are rapidly exchanged during negotiation and sent every 30 seconds to
maintain the EtherChannel.
LACP messages are sent to multicast MAC 0180.c200.0002 (the “Ethernet Slow Protocols”
address).
• LACP messages are always sent untagged on trunk and access ports.
• LACP uses LACPDUs (Link Aggregation Control Protocol Data Units) to communicate
between the two routers (or router to switch).
o The router bundles the links into a single logical interface called Port-
Channel<number>.
• A unique MAC address is assigned by the router to the Port-Channel interface (used for
ARP, routing).
5. Traffic Load-Balancing
• Traffic is load-balanced across the member links based on a hash algorithm (e.g., source-
destination IP, Layer 4 ports, etc.).
• Port roles:
• Logical bundling happens when both sides exchange and accept LACP data units.
• LACP compares the system IDs, priorities, and keys carried in the LACPDUs to choose which links go into the same bundle.
• The lower priority/MAC wins during bundling; a mismatch or misconfiguration can lead to STP loops or
misbehavior.
5. LACP PDUs
o EtherType 0x8809
• Contain key info: system ID, port ID, port state flags, key, timeout.
6. LACP Modes
• Use lacp rate fast under interface to tweak PDU frequency (useful for faster failure
detection).
7. Load Balancing
o MAC
o IP
o TCP/UDP ports
8. L2 vs L3 EtherChannel
• L2: The port-channel inherits its MAC address from the first active member port.
• L3: The switch/router assigns a unique MAC to the port-channel itself (not from
physical links).
9. Troubleshooting Tips
Member interfaces must have matching settings on both ends; mismatches in the following commonly prevent a bundle from forming:
o speed/duplex
o channel-group numbers
o LACP modes
NIC Teaming:
NIC Teaming combines multiple physical network cards into a single logical interface.
Terminology: NIC teaming is also referred to as:
• A port channel
• A LAG (Link Aggregation Group)
• A link bundle
• Bonding
• NIC balancing
• Link aggregation
Spanning Tree Protocol (STP) enables switches to become aware of other switches through the
advertisement and receipt of bridge protocol data units (BPDUs).
STP has multiple iterations:
• 802.1D, the original specification
• Per-VLAN Spanning Tree (PVST) and Per-VLAN Spanning Tree Plus (PVST+)
• 802.1w Rapid Spanning Tree Protocol (RSTP) and Rapid PVST+
• 802.1s Multiple Spanning Tree Protocol (MST)
Catalyst switches now operate in PVST+, RSTP, and MST modes. All three of these modes are
backward compatible with 802.1D.
The original version of STP comes from the IEEE 802.1D standards and provides support for
ensuring a loop-free topology for one VLAN. This topic is vital to understand as a foundation for
Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MST).
In the 802.1D STP protocol, every port transitions through the following states:
Disabled: The port is in an administratively off position (that is, shut down).
Blocking: The switch port is enabled, but the port is not forwarding any traffic to ensure that a
loop is not created. The switch does not modify the MAC address table. It can only receive
BPDUs from other switches.
Listening: The switch port has transitioned from a blocking state and can now send or receive
BPDUs. It cannot forward any other network traffic. The duration of the state correlates to the
STP forwarding time. The next port state is learning.
Learning: The switch port can now modify the MAC address table with any network traffic that
it receives. The switch still does not forward any other network traffic besides BPDUs. The
duration of the state correlates to the STP forwarding time. The next port state is forwarding.
Forwarding: The switch port can forward all network traffic and can update the MAC address
table as expected. This is the final state for a switch port to forward network traffic.
Broken: The switch has detected a configuration or an operational problem on a port that can
have major effects. The port discards packets as long as the problem continues to exist.
NOTE: The entire 802.1D STP initialization time takes about 30 seconds for a port to enter
the forwarding state using default timers.
The 802.1D STP standard defines the following three port types:
Root port (RP): A network port that connects to the root bridge or an upstream switch in the
spanning-tree topology. There should be only one root port per VLAN on a switch.
Designated port (DP): A network port that receives and forwards BPDU frames to other
switches. Designated ports provide connectivity to downstream devices and switches. There
should be only one active designated port on a link.
Blocking port: A network port that is not forwarding traffic because of STP calculations.
Root Bridge: The root bridge is the most important switch in the Layer 2 topology. All ports are
in a forwarding state. This switch is considered the top of the spanning tree for all path
calculations by other switches. All ports on the root bridge are categorized as designated ports.
Bridge protocol data unit (BPDU): This network packet is used for network switches to identify
a hierarchy and notify of changes in the topology. A BPDU uses the destination MAC address
01:80:c2:00:00:00. There are two types of BPDUs:
• Configuration BPDU: This type of BPDU is used to identify the Root Bridge, root ports,
designated ports, and blocking ports. The configuration BPDU consists of the following
fields: STP type, root path cost, root bridge identifier, local bridge identifier, max age,
hello time, and forward delay.
• Topology change notification (TCN) BPDU: This type of BPDU is used to communicate
changes in the Layer 2 topology to other switches. This is explained in greater detail later
in this section.
Root path cost: This is the combined cost for a specific path toward the root switch.
System priority: This 4-bit value indicates the preference for a switch to be root bridge. The
default value is 32,768.
System ID extension: This 12-bit value indicates the VLAN that the BPDU correlates to. The
system priority and system ID extension are combined as part of the switch’s identification of
the root bridge.
Root bridge identifier: This is a combination of the root bridge system MAC address, system ID
extension, and system priority of the root bridge.
Local bridge identifier: This is a combination of the local switch’s bridge system MAC address,
system ID extension, and system priority of the local bridge.
Max age: This is the maximum length of time that passes before a bridge port saves its BPDU
information. The default value is 20 seconds, but the value can be configured with the command
spanning-tree vlan vlan-id max-age maxage. If a switch loses contact with the BPDU’s source, it
assumes that the BPDU information is still valid for the duration of the Max Age timer.
Hello time: This is the time that a BPDU is advertised out of a port. The default value is 2
seconds, but the value can be configured to 1 to 10 seconds with the command spanning-tree
vlan vlan-id hello-time hello-time.
Forward delay: This is the amount of time that a port stays in a listening and learning state. The
default value is 15 seconds, but the value can be changed to a value of 15 to 30 seconds with the
command spanning-tree vlan vlan-id forward-time forward-time.
The interface STP cost is an essential component for root path calculation because the root path
is found based on the cumulative interface STP cost to reach the root bridge. The interface STP
cost was originally stored as a 16-bit value with a reference value of 20 Gbps.
As switches have developed with higher-speed interfaces, 10 Gbps might not be enough.
Another method, called long mode, uses a 32-bit value and uses a reference speed of 20 Tbps.
The original method, known as short mode, is the default mode.
Root Bridge Election:
The first step with STP is to identify the root bridge. As a switch initializes, it assumes that it is
the root bridge and uses the local bridge identifier as the root bridge identifier. It then listens to
its neighbor’s configuration BPDU and does the following:
• If the neighbor’s configuration BPDU is inferior to its own BPDU, the switch ignores that
BPDU.
• If the neighbor’s configuration BPDU is preferred to its own BPDU, the switch updates
its BPDUs to include the new root bridge identifier along with a new root path cost that
correlates to the total path cost to reach the new root bridge. This process continues
until all switches in a topology have identified the root bridge switch.
STP deems a switch more preferable if the priority in the bridge identifier is lower than the
priority of the other switch’s configuration BPDUs. If the priority is the same, then the switch
prefers the BPDU with the lower system MAC.
The advertised root path cost is always the value calculated on the local switch. As the BPDU is
received, the local root path cost is the advertised root path cost plus the local interface port
cost. The root path cost is always zero on the root bridge.
Locating Root Ports:
After the switches have identified the root bridge, they must determine their root port (RP). The
root bridge continues to advertise configuration BPDUs out all of its ports. The switch compares
the BPDU information to identify the RP. The RP is selected using the following logic (where
the next criterion is used in the event of a tie):
1. The interface associated with the lowest root path cost is most preferred.
2. The interface associated to the lowest system priority of the advertising switch is preferred
next.
3. The interface associated to the lowest system MAC address of the advertising switch is
preferred next.
4. When multiple links are associated to the same switch, the lowest port priority from the
advertising switch is preferred.
5. When multiple links are associated to the same switch, the lower port number from the
advertising switch is preferred.
The root bridge can be identified for a specific VLAN through the use of the command show
spanning-tree root and examination of the CDP or LLDP neighbor information to identify the
host name of the RP switch. The process can be repeated until the root bridge is located.
Now that the root bridge and RPs have been identified, all other ports are considered designated
ports. However, if two non-root switches are connected to each other on their designated ports,
one of those switch ports must be set to a blocking state to prevent a forwarding loop. In our
sample topology, this would apply to the following links:
SW2 Gi1/0/3 ↔ SW3 Gi1/0/2
SW4 Gi1/0/5 ↔ SW5 Gi1/0/4
SW4 Gi1/0/6 ↔ SW5 Gi1/0/5
The logic to calculate which ports should be blocked between two non-root switches is as
follows:
1. The interface is a designated port and must not be considered a root port.
2. The switch with the lower path cost to the root bridge forwards packets, and the one with
the higher path cost blocks. If they tie, they move on to the next step.
3. The system priority of the local switch is compared to the system priority of the remote
switch. The local port is moved to a blocking state if the remote system priority is lower than
that of the local switch. If they tie, they move on to the next step.
4. The system MAC address of the local switch is compared to the system MAC address of the remote
switch. The local designated port is moved to a blocking state if the remote system MAC
address is lower than that of the local switch. If the links are connected to the same switch, they
move on to the next step.
All three links (SW2 Gi1/0/3 ↔ SW3 Gi1/0/2, SW4 Gi1/0/5 ↔ SW5 Gi1/0/4, and SW4 Gi1/0/6
↔ SW5 Gi1/0/5) would use step 4 of the process just listed to identify which port moves to a
blocking state. SW3’s Gi1/0/2, SW5’s Gi1/0/4, and SW5’s Gi1/0/5 ports would all transition to a
blocking state because the MAC addresses are lower for SW2 and SW4.
In a stable Layer 2 topology, configuration BPDUs always flow from the root bridge toward the
edge switches. However, changes in the topology (for example, switch failure, link failure, or
links becoming active) have an impact on all the switches in the Layer 2 topology.
The switch that detects a link status change sends a topology change notification (TCN) BPDU
toward the root bridge, out its RP. If an upstream switch receives the TCN, it sends out an
acknowledgment and forwards the TCN out its RP to the root bridge.
Upon receipt of the TCN, the root bridge creates a new configuration BPDU with the Topology
Change flag set, and it is then flooded to all the switches. When a switch receives a
configuration BPDU with the Topology Change flag set, it changes its MAC address aging
timer to the forward delay timer (with a default of 15 seconds). This flushes out MAC
addresses for devices that have not communicated in that 15-second window but maintains
MAC addresses for devices that are actively communicating.
STP/RSTP prevents loops by placing each switch port in either a forwarding state or a blocking
state. Interfaces in the forwarding state act as normal, forwarding and receiving frames.
However, interfaces in a blocking state do not process any frames except STP/RSTP messages
(and some other overhead messages). Interfaces that block do not forward user frames, do not
learn MAC addresses of received frames, and do not process received user frames.
NOTE: The term STP convergence refers to the process by which the switches collectively
realize that something has changed in the LAN topology and determine whether they need
to change which ports block and which ports forward.
That completes the description of what STP/RSTP does, placing each port into either a
forwarding or blocking state. The more interesting question, and the one that takes a lot more
work to understand, is how and why STP/RSTP makes its choices. How does STP/RSTP
manage to make switches block or forward on each interface? And how does it converge to
change state from blocking to forwarding to take advantage of redundant links in response to
network outages?
The STP/RSTP algorithm creates a spanning tree of interfaces that forward frames. The tree
structure of forwarding interfaces creates a single path to and from each Ethernet link, just like
you can trace a single path in a living, growing tree from the base of the tree to each leaf.
The process used by STP, sometimes called the spanning-tree algorithm (STA), chooses the
interfaces that should be placed into a forwarding state. For any interfaces not chosen to be in a
forwarding state, STP/RSTP places the interfaces in blocking state. In other words, STP/RSTP
simply picks which interfaces should forward, and any interfaces left over go to a blocking state.
STP/RSTP uses three criteria to choose whether to put an interface in forwarding state:
STP/RSTP elects a root switch. STP puts all working interfaces on the root switch in forwarding
state.
Each nonroot switch considers one of its ports to have the least administrative cost between
itself and the root switch. The cost is called that switch’s root cost. STP/RSTP places its port
that is part of the least root cost path, called that switch’s root port (RP), in forwarding state.
Many switches can attach to the same Ethernet segment, but due to the fact that links connect
two devices, a link would have at most two switches. With two switches on a link, the switch
with the lowest root cost, as compared with the other switches attached to the same link, is
placed in forwarding state. That switch is the designated switch, and that switch’s interface,
attached to that segment, is called the designated port (DP).
NOTE The real reason the root switches place all working interfaces in a forwarding state (at
step 1 in the list) is that all its interfaces on the root switch will become DPs. However, it is
easier to just remember that all the root switches’ working interfaces will forward frames.
NOTE STP/RSTP only considers working interfaces (those in a connected state). Failed
interfaces (for example, interfaces with no cable installed) or administratively shutdown
interfaces are instead placed into an STP/RSTP disabled state. So, this section uses the term
working ports to refer to interfaces that could forward frames if STP/RSTP placed the interface
into a forwarding state.
NOTE STP and RSTP do differ slightly in the use of the names of some states like blocking and
disabled, with RSTP using the status term discarding.
Rapid Spanning Tree Protocol (RSTP):
RSTP, defined in IEEE 802.1w, is an enhanced version of the original Spanning Tree Protocol (STP).
RSTP improves upon STP by providing faster convergence in switched Ethernet networks,
enabling faster recovery from changes in the network topology (e.g., link failures or port state
changes).
Cisco created Per-VLAN Spanning Tree (PVST) and Per-VLAN Spanning Tree Plus (PVST+) to
allow more flexibility.
Discarding: The switch port is enabled, but the port is not forwarding any traffic to ensure that a
loop is not created. This state combines the traditional STP states disabled, blocking, and
listening.
Learning: The switch port modifies the MAC address table with any network traffic it receives.
The switch still does not forward any other network traffic besides BPDUs.
Forwarding: The switch port forwards all network traffic and updates the MAC address table as
expected. This is the final state for a switch port to forward network traffic.
NOTE: A switch tries to establish an RSTP handshake with the device connected to the other
end of the cable. If a handshake does not occur, the other device is assumed to be non-RSTP
compatible, and the port defaults to regular 802.1D behavior. This means that host devices such
as computers, printers, and so on still encounter a significant transmission delay (around 30
seconds) after the network link is established.
RSTP (802.1W) Port Roles:
Root port (RP): A network port that connects to the root switch or an upstream switch in the
spanning-tree topology. There should be only one root port per VLAN on a switch.
Designated port (DP): A network port that receives and forwards frames to other switches.
Designated ports provide connectivity to downstream devices and switches. There should be
only one active designated port on a link.
Alternate port: A network port that provides alternate connectivity toward the root switch
through a different switch.
Backup port: A network port that provides link redundancy toward the current root switch.
The backup port cannot guarantee connectivity to the root bridge in the event that the upstream
switch fails. A backup port exists only when multiple links connect between the same switches.
RSTP defines three types of ports that are used for building the STP topology:
Edge port: A port at the edge of the network where hosts connect to the Layer 2 topology with
one interface and cannot form a loop. These ports directly correlate to ports that have the STP
portfast feature enabled.
Root port: A port that has the best path cost toward the root bridge. There can be only one root
port on a switch.
Point-to-point port: Any port that connects to another RSTP switch with full duplex. Because
full-duplex links do not permit more than two devices on a network segment, checking whether a
link runs at full duplex is the fastest way to determine whether the port could be connected to
another switch.
STP Topology Tuning:
A properly designed network strategically places the root bridge on a specific switch and
modifies which ports should be designated ports (that is, forwarding state) and which ports
should be alternate ports (that is, discarding/blocking state).
Ideally the root bridge is placed on a core switch, and a secondary root bridge is designated to
minimize changes to the overall spanning tree. Root bridge placement is accomplished by
lowering the system priority on the root bridge to the lowest value possible, raising the
secondary root bridge to a value slightly higher than that of the root bridge, and (ideally)
increasing the system priority on all other switches. This ensures consistent placement of the
root bridge. The priority is set with either of the following commands:
• spanning-tree vlan vlan-id priority priority: The priority is a value between 0 and 61,440,
in increments of 4,096.
• spanning-tree vlan vlan-id root {primary | secondary} [diameter diameter]: This
command executes a script that modifies certain values. The primary keyword sets the
priority to 24,576, and the secondary keyword sets the priority to 28,672.
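For example, a hedged sketch of this placement (switch names and VLAN number are hypothetical):
! On the switch that should become the root bridge for VLAN 10
CORE1(config)# spanning-tree vlan 10 root primary
! or, setting the priority explicitly (must be a multiple of 4,096)
CORE1(config)# spanning-tree vlan 10 priority 24576
! On the switch that should take over if the root fails
CORE2(config)# spanning-tree vlan 10 root secondary
! Verify which switch is root for the VLAN
CORE1# show spanning-tree vlan 10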
Additional STP Protection Mechanisms:
The time-to-live (TTL) field in a packet header is not decremented as the packet is forwarded in a
Layer 2 topology, so a looping frame is never discarded due to age. A network forwarding loop
occurs when the logical topology allows multiple active paths between two devices. Broadcast and
multicast traffic wreak havoc, as these frames are forwarded out of every switch port and keep the
forwarding loop going.
High CPU consumption and low free memory space are common symptoms of a Layer 2
forwarding loop. In addition to constantly consuming switch bandwidth, a Layer 2 forwarding loop
causes the CPU to spike: because the same frame keeps arriving on different interfaces, the switch
must repeatedly move the media access control (MAC) address entry from one interface to another.
The network throughput is impacted drastically; users are likely to notice a slowdown on their
network applications, and the switches might crash due to exhausted CPU and memory
resources.
Layer 2 forwarding loops commonly result from misconfiguration, such as STP being disabled on a switch or cabling mistakes that bypass STP. The protection mechanisms described next help guard against these scenarios.
Root Guard:
Root guard is an STP feature that is enabled on a port-by-port basis; it prevents a configured
port from becoming a root port. Root guard prevents a downstream switch (often misconfigured
or rogue) from becoming a root bridge in a topology. Root guard functions by placing a port in
an ErrDisabled state if a superior BPDU is received on a configured port. This prevents the
configured DP with root guard from becoming an RP.
Root guard is enabled with the interface command spanning-tree guard root. Root guard is
placed on designated ports toward other switches that should never become root bridges.
In the sample topology shown in Figure 3-1, root guard should be placed on SW2’s Gi1/0/4 port
toward SW4 and on SW3’s Gi1/0/5 port toward SW5. This prevents SW4 and SW5 from ever
becoming root bridges but still allows for SW2 to maintain connectivity to SW1 via SW3 if the
link connecting SW1 to SW2 fails.
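Following that description, a configuration sketch (interface identifiers taken from the text above; switch prompts hypothetical):
SW2(config)# interface GigabitEthernet1/0/4
SW2(config-if)# spanning-tree guard root
SW3(config)# interface GigabitEthernet1/0/5
SW3(config-if)# spanning-tree guard root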
STP Portfast:
Generating a topology change notification (TCN) for host ports does not make sense, because a
host generally has only one connection to the network. Restricting TCN creation to ports that
connect to other switches and network devices increases the Layer 2 network's stability and
efficiency. The STP portfast feature disables TCN generation for access ports.
STP-enabled ports normally take 30 seconds to enter the forwarding state after being enabled.
This delay can be frustrating for users, who aren't able to access the network during that time.
Ports connected to end hosts don't pose a risk of causing Layer 2 loops, so the delay is
unnecessary.
Portfast allows a port to immediately transition to the forwarding state upon being
connected/enabled, bypassing the Listening and Learning states.
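A minimal sketch (interface number hypothetical); portfast can be enabled per interface or as a global default for non-trunking ports:
SW1(config)# interface GigabitEthernet1/0/10
SW1(config-if)# spanning-tree portfast
SW1(config-if)# exit
! Alternatively, enable it by default on all access (non-trunk) ports
SW1(config)# spanning-tree portfast default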
BPDU Guard:
BPDU guard is a safety mechanism that shuts down ports configured with STP portfast upon
receipt of a BPDU. Assuming that all access ports have portfast enabled, this ensures that a loop
cannot accidentally be created if an unauthorized switch is added to a topology.
PortFast should only be enabled on ports connected to non-switch devices (end hosts, routers).
If an end user carelessly connects a switch to a port, it could affect the STP topology.
BPDU Guard can protect against unauthorized switches being connected to ports intended for
end hosts.
• It can be configured separately from PortFast, but usually both features are used
together.
If a BPDU Guard enabled port receives a BPDU, it enters the error-disabled state – effectively
shutting down the port.
There are two main ways to configure BPDU Guard:
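As a sketch (interface number hypothetical), it can be enabled per interface or globally as a default for all portfast-enabled ports:
! 1) Per interface
SW1(config)# interface GigabitEthernet1/0/10
SW1(config-if)# spanning-tree bpduguard enable
SW1(config-if)# exit
! 2) Globally, on every port that has portfast enabled
SW1(config)# spanning-tree portfast bpduguard default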
STP vs RSTP:
Note: In STP (Spanning Tree Protocol), each collision domain (basically, a single Ethernet
segment or cable between two switches) will have only one switch port that is responsible
for forwarding traffic toward that segment. This port is called the Designated Port (DP).
• Bridge priority, port cost, and port priority can be modified to change the STP topology.
However, there are various differences between STP and RSTP. For example:
• Port costs
• Port states
• Port roles
• State transitions (RSTP uses a negotiation/sync mechanism to speed up the move to
Forwarding)
• Topology changes
RSTP Port States:
RSTP combines the Blocking and Listening states into a single state called Discarding.
• Root and Designated ports start in the Discarding state but transition onward to become
stable in the Forwarding state.
The Learning state is only used if the RSTP sync mechanism fails and the port can’t transition
immediately from Discarding to Forwarding.
When an RSTP port is first enabled, it is a Designated port in the Discarding state.
If the STP algorithm decides it will be an Alternate or Backup port, it remains in the Discarding
state.
If the algorithm decides it will be a Designated or Root port, it moves to the Forwarding state in
one of two ways.
If the RSTP sync mechanism succeeds, the port moves immediately from Discarding to
Forwarding.
If the RSTP sync mechanism fails, the port spends 15 seconds in Discarding and 15 seconds in
Learning (Forward Delay × 2) before moving to Forwarding.
• For example, if the port is connected to a switch that runs classic STP (not RSTP), the
RSTP sync mechanism won’t work.
The RSTP Root and Designated roles are the same as in STP:
• Root: A forwarding port that points toward the Root Bridge. The switch’s only active
path to reach the Root Bridge.
• Designated: A forwarding port that points away from the Root Bridge. All links
(collision domains) must have exactly one Designated Port.
• Alternate: An alternative for the switch’s Root Port. It provides an alternate path toward
the Root Bridge and is ready to take over if the Root Port fails.
• Backup: A backup path to the same link as a Designated Port on the same switch. This
will only occur if two ports on the same switch are connected to the same link (i.e., via a
hub).
• If a port is not Root or Designated, it is an Alternate Port if the switch is not the
Designated Bridge for the link.
• If a port is not Root or Designated, it is a Backup Port if the switch is the Designated
Bridge for the link.
RSTP Algorithm:
The process RSTP uses to create a loop-free topology is essentially the same as the STP algorithm:
• The switch with the lowest BID is elected as the Root Bridge.
• A port that is not a Root or Designated port is an Alternate Port if the switch is not the
Designated Bridge for the link.
• It is a Backup Port if the switch is the Designated Bridge for the link.
In RSTP (802.1w), links between switches are categorized into three types to help determine
the best way to transition a port to the forwarding state quickly and safely.
1. Point-to-Point Link
• Definition: A full-duplex link that directly connects two switches.
• Behavior:
o Eligible for the RSTP sync (proposal/agreement) process, so the port can transition rapidly to the Forwarding state. (The link type can also be set manually; see the sketch after this list.)
2. Shared Link
• Definition: A link that may connect multiple devices (e.g., using a hub).
• Behavior:
o Operates more like traditional STP (802.1D) with listening and learning states.
3. Edge Link
• Definition: A port that connects to an end device (like a computer or printer), not to
another switch.
• Behavior:
o Transitions immediately to the Forwarding state (equivalent to the portfast feature).
o BPDUs received on this port will make it lose its edge status (to prevent loops).
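RSTP normally derives the link type from the port's duplex setting, but it can also be set manually. A minimal sketch (interface number hypothetical):
SW1(config)# interface GigabitEthernet1/0/7
SW1(config-if)# spanning-tree link-type point-to-point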
The RSTP Sync Process is the main benefit of RSTP over Classic STP. It allows ports to
immediately move to the Forwarding state instead of relying on Classic STP's timer-based
process.
Classic STP Convergence:
When a switch port is enabled, it becomes an STP Designated Port until its appropriate role is
determined.
Each switch will declare itself to be the Root Bridge until it receives a Superior BPDU from
another switch.
As soon as SW2 receives SW1's Superior BPDU, SW2 accepts SW1 as the Root Bridge.
RSTP Convergence:
When a switch port is enabled, it becomes an RSTP Designated Port until its appropriate role is
determined.
Each switch will declare itself to be the Root Bridge until it receives a Superior BPDU from
another switch.
Upon receiving SW1’s superior Proposal BPDU, SW2 accepts SW1 as the Root Bridge.
The RSTP Sync Process only works on ports with the Point-to-Point Link Type.
The connected switches must both be using RSTP – a switch running Classic STP can’t sync.
• The port on the RSTP-enabled switch will operate like a Classic STP port.
In such cases, ports must spend 30 seconds transitioning through Discarding and Learning
before Forwarding.
Why MSTP?
Multiple Spanning Tree Protocol (MSTP), often simply called MST, allows you to map multiple
VLANs to a single Spanning Tree instance.
• This greatly reduces the number of BPDUs that need to be sent, and the amount of
processing that switches need to do in a LAN with many VLANs.
Rapid PVST+ allows for load balancing among switches in a LAN by assigning a different Root
Bridge for each VLAN.
For example, in a LAN with 100 VLANs, each Distribution Layer switch can be Root Bridge for
50 VLANs.
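A sketch of that split (switch names and VLAN ranges hypothetical):
! Distribution switch 1 is root for VLANs 1-50, backup for 51-100
DSW1(config)# spanning-tree vlan 1-50 root primary
DSW1(config)# spanning-tree vlan 51-100 root secondary
! Distribution switch 2 mirrors the configuration
DSW2(config)# spanning-tree vlan 51-100 root primary
DSW2(config)# spanning-tree vlan 1-50 root secondary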
Each Rapid PVST+ Instance runs independently and sends its own BPDU every 2 seconds.
Sending, receiving, and processing these BPDUs uses resources on the switches (as well as
bandwidth).
MSTP allows multiple VLANs to be grouped into a single MST instance (MSTI). In the example
above, only two instances are required to achieve the same load balancing.
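A configuration sketch of the same two-instance design with MSTP (region name, revision number, and VLAN ranges are hypothetical; switches in the same MST region must agree on these values):
SW1(config)# spanning-tree mode mst
SW1(config)# spanning-tree mst configuration
SW1(config-mst)# name CAMPUS
SW1(config-mst)# revision 1
SW1(config-mst)# instance 1 vlan 1-50
SW1(config-mst)# instance 2 vlan 51-100
SW1(config-mst)# exit
! One switch is made root for instance 1, the other for instance 2
SW1(config)# spanning-tree mst 1 root primary
SW1(config)# spanning-tree mst 2 root secondary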
Switching Interview Questions with Answers:
1. What is a switch?
Answer:
A switch is a Layer 2 network device that connects multiple devices in a LAN. It uses MAC
addresses to forward data only to the intended recipient instead of broadcasting to all ports like
a hub.
2. What is the MAC address table?
Answer:
The MAC address table is stored in a switch and maps MAC addresses to specific ports. When a
frame arrives, the switch checks this table to decide where to forward the frame.
3. What is a VLAN and why is it used?
Answer:
VLAN (Virtual LAN) is used to logically separate devices on the same physical switch. It
improves security, reduces broadcast traffic, and segments the network.
4. What is the difference between an access port and a trunk port?
Answer:
• Access Port: Carries traffic for one VLAN. Used to connect end devices like PCs.
• Trunk Port: Carries traffic for multiple VLANs using tagging (802.1Q). Used to connect
switches or routers.
5. What is STP and why is it needed?
Answer:
STP prevents loops in a Layer 2 network by blocking redundant paths. Loops can cause
broadcast storms and MAC table instability.
6. What are the STP port states?
Answer:
• Blocking
• Listening
• Learning
• Forwarding
• Disabled
7. What is PortFast?
Answer:
PortFast is a feature used on access ports. It immediately puts the port in forwarding state,
skipping STP states (useful for PCs/printers). Not recommended on trunk ports.
8. What is BPDU Guard?
Answer:
BPDU Guard disables a port if it receives a BPDU on a PortFast-enabled port. It prevents loops
caused by connecting switches to end-device ports.
9. What is VTP?
Answer:
VTP is a Cisco protocol that distributes VLAN information to all switches in a domain. It
reduces manual VLAN configuration errors.
Answer:
11. What is EtherChannel?
Answer:
EtherChannel is used to bundle multiple physical links into one logical link to increase
bandwidth and provide redundancy.
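As a minimal sketch (interface and channel-group numbers hypothetical), two ports can be bundled with LACP and then verified:
SW1(config)# interface range GigabitEthernet1/0/1 - 2
SW1(config-if-range)# channel-group 1 mode active
SW1(config-if-range)# end
SW1# show etherchannel summary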
12. What is LACP and PAGP?
Answer:
LACP (Link Aggregation Control Protocol) is the IEEE standard (802.3ad) protocol for negotiating
an EtherChannel, while PAgP (Port Aggregation Protocol) is Cisco proprietary. Both dynamically
negotiate the bundling of physical links into an EtherChannel.
13. What is the difference between a collision domain and a broadcast domain?
Answer:
• Collision Domain: Where data collisions can occur. Each switch port is a separate
collision domain.
• Broadcast Domain: Where a broadcast frame is received by all devices. Each VLAN is a
separate broadcast domain, and a router (or Layer 3 boundary) separates broadcast domains.
14. What is Inter-VLAN Routing?
Answer:
Inter-VLAN Routing allows communication between VLANs using a router or Layer 3 switch.
15. What is an SVI (Interface VLAN)?
Answer:
An Interface VLAN or SVI (Switched Virtual Interface) is a virtual Layer 3 interface configured
on a switch to provide IP communication for VLANs.
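A minimal SVI sketch (VLAN number and addressing hypothetical), assuming a Layer 3-capable switch:
SW1(config)# ip routing
SW1(config)# interface Vlan10
SW1(config-if)# ip address 10.0.10.1 255.255.255.0
SW1(config-if)# no shutdown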
16. How does a switch learn MAC addresses?
Answer:
When a frame arrives, the switch learns the source MAC address and the port it came from and
adds this to the MAC table.
17. What happens when a switch receives a frame with an unknown destination MAC address?
Answer:
The switch floods the frame to all ports (except incoming port) to find the destination. Once a
reply comes, it updates the MAC table.
18. What is RSTP and how is it different from STP?
Answer:
RSTP (Rapid STP) is a faster version of STP. It converges the network in seconds instead of 30–
50 seconds. It introduces new port roles and link types.
19. What is the default VLAN?
Answer:
VLAN 1 is the default VLAN. All ports are in VLAN 1 by default until configured otherwise.
20. What is the difference between a Layer 2 switch and a Layer 3 switch?
Answer:
• Layer 2 Switch: Works using MAC addresses and does not perform routing.
• Layer 3 Switch: Can also route traffic between VLANs (for example, using SVIs) in
addition to performing Layer 2 switching.
Thanks to ChatGPT & Jeremy!!