Switching


Storing Switch Configuration Files:

When you configure a switch, it needs to use the configuration. It also needs to be able to retain
the configuration in case the switch loses power.

Cisco switches contain random access memory (RAM) to store data while Cisco IOS is using it,
but RAM loses its contents when the switch loses power or is reloaded.

To store information that must be retained when the switch loses power or is reloaded, Cisco
switches use several types of more permanent memory, none of which has any moving parts. By
avoiding components with moving parts (such as traditional disk drives), switches can maintain
better uptime and availability.

The following list details the four main types of memory found in Cisco switches, as well as the
most common use of each type:

• RAM: Sometimes called DRAM, for dynamic random-access memory, RAM is used by
the switch just as it is used by any other computer: for working storage. The running
(active) configuration file is stored here.
• Flash memory: Either a chip inside the switch or a removable memory card, flash memory
stores fully functional Cisco IOS images and is the default location where the switch gets
its Cisco IOS at boot time. Flash memory also can be used to store any other files,
including backup copies of configuration files.
• ROM: Read-only memory (ROM) stores a bootstrap (or boothelper) program that is
loaded when the switch first powers on. This bootstrap program then finds the full Cisco
IOS image and manages the process of loading Cisco IOS into RAM, at which point
Cisco IOS takes over operation of the switch.
• NVRAM: Nonvolatile RAM (NVRAM) stores the initial or startup configuration file
that is used when the switch is first powered on and when the switch is reloaded.



Learning MAC Addresses:

It is the process that allows Ethernet switches to build and maintain a table of MAC addresses
and their associated ports. This process ensures that the switch can efficiently forward Ethernet
frames to the correct destination port without unnecessary flooding, optimizing the
performance of the network.

How a Switch Learns MAC Addresses:

1. Receiving the Frame:

When a switch receives an Ethernet frame on one of its ports, it examines the source MAC
address of the frame.

2. Recording the MAC Address:

• The switch records the source MAC address of the frame along with the port on which
the frame was received.
• The switch creates or updates an entry in its MAC address table (also known as the
forwarding table or content addressable memory (CAM) table).

Each entry consists of:

The MAC address of the sender (the source MAC).

The port through which the frame was received (the port on the switch).

3. Storing the Information:

This information is stored in the switch's MAC address table. The table is used to map MAC
addresses to specific switch ports. Over time, the switch learns which devices (with which
MAC addresses) are connected to which ports.

Note: A switch port is a physical or logical interface on a network switch where a device
(like a computer, router, or another switch) connects.



Managing MAC address table:

What is the MAC Address Table?

The MAC address table (or CAM table) is a hardware-based memory table that enables a switch
to perform Layer 2 forwarding based on learned source MAC address, VLAN ID, and interface
bindings. It’s dynamically built by inspecting ingress frames and is subject to aging, static
provisioning, and port security enforcement.

How it Works (Simple View):

When a frame enters the switch:

1. The switch learns the source MAC and associates it with the port and VLAN.

2. It stores that in the MAC address table.

3. When sending traffic, the switch checks the destination MAC in the table to know
where to forward the frame.

Features Related to MAC Table:

• Aging Timer: Removes stale entries.

• Port Security: Limits MACs on a port.

• Static Entries: Manually define MAC-port mapping.

• Troubleshooting: Detect flapping, unknown unicast flooding, or MAC spoofing.

We can list all of the dynamically learned MAC addresses here:

SW03#show mac address-table dynamic


Mac Address Table
-------------------------------------------
Vlan Mac Address Type Ports
---- ----------- -------- -----
100 002c.c8ff.cd31 DYNAMIC Gi1/0/48
300 d43b.0432.7355 DYNAMIC Gi1/0/47
999 f87b.2030.932f DYNAMIC Gi1/0/48



How long does this info stay in the table? The default aging time is 300 seconds. We can verify:

SW03#show mac address-table aging-time


Global Aging Time: 300
Vlan Aging Time

What if we want to remove mac addresses before then? We have a couple of options. We can
either do it manually:

SW03#clear mac address-table dynamic?


address address keyword
interface interface keyword
vlan vlan keyword
<cr>

Another option is to change the aging time. This can be done as follows:

SW03(config)#mac address-table aging-time?


<0-0> Enter 0 to disable aging
<10-1000000> Aging time in seconds

SW03(config)#mac address-table aging-time 1000?


routed-mac Set RM Aging interval
vlan VLAN Keyword
<cr>

Leaving dynamic MAC addresses behind for a moment, we can set static MAC addresses as
follows:

SW03(config)#mac address-table static 0000.aaaa.bbbb vlan 100 int gi1/0/1

SW03(config)#do sh mac address-table static | i 0000.aaaa.bbbb

100 0000.aaaa.bbbb STATIC Gi1/0/1

We can also configure the switch to drop any frames to a particular MAC address:

SW03(config)#mac address-table static 0000.aaaa.bbbb vlan 100 drop



The last thing I wanted to demonstrate in this section is how to disable MAC address learning
on a particular VLAN. This might be done if you have a VLAN that you only want to have static
entries:

SW03(config)#no mac address-table learning vlan 147

SW03(config)#do sh mac address-table learning

What is errdisable?

errdisable (Error Disable) is a switch protection mechanism that automatically shuts down a
port when certain types of errors or violations are detected — it puts the port into an err-
disabled state.

Note: The port appears as administratively up, but operationally down.
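As a point of reference, here is a minimal sketch of a port-security configuration that can put a port into the err-disabled state; the interface and values are illustrative, not taken from this document:

SW03(config)#interface gi1/0/5
SW03(config-if)#switchport mode access
SW03(config-if)#switchport port-security
SW03(config-if)#switchport port-security maximum 1
SW03(config-if)#switchport port-security violation shutdown

If a second MAC address appears on the port, the shutdown violation mode places the port in err-disabled; affected ports can be listed with show interfaces status err-disabled.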



Common Reasons for Errdisable:

Common causes include port security violations, BPDU Guard, UDLD, link flap, and duplex mismatches.

Errdisable Recovery:

To recover automatically, you must first choose a cause to recover from. I'll choose a port security violation:

SW03(config)#errdisable recovery cause psecure-violation

Then you can configure the interval:

SW03(config)#errdisable recovery interval?

<30-86400> timer-interval(sec)

This is the time to recover from an errdisabled state in seconds. The default is 300 seconds.

Note that we can also manually re-enable an interface/VLAN that has been errdisabled:

SW03#clear errdisable interface gi1/0/1 vlan 1

L2 MTU:

L2 MTU defines the maximum Layer 2 frame size supported by a switch interface. It's crucial
when implementing technologies like MPLS, Q-in-Q, or VXLAN, where the additional
encapsulation overhead pushes frames beyond the standard 1518-byte Ethernet maximum.
Improper L2 MTU design leads to silent frame drops, because transit switches do not fragment
Layer 2 frames.

show system mtu

show interfaces <intf> | include MTU
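As a hedged example, on many Catalyst platforms the Layer 2 MTU is set globally rather than per interface; the exact commands, supported ranges, and whether a reload is required vary by model, so treat the values below as illustrative:

SW03(config)#system mtu 1998
SW03(config)#system mtu jumbo 9000
SW03(config)#end
SW03#show system mtu

On some models the new system MTU takes effect only after a reload.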



Collision Domains:

A collision domain is a part of a network where data can crash into each other when two or
more devices try to send information at the same time.

Imagine people in one room shouting at the same time. No one can understand anything — this
is like a collision in a network.

If only one person speaks at a time, everyone understands — that's how data should flow
without collision.

Switches are more intelligent than hubs; they are Layer 2 aware.
→ They use information in the Layer 2 header to decide where to send frames.

Additionally, switches have the ability to buffer frames before sending them.
→ If a switch receives two broadcast frames at the same time, it will not flood both at the same
time.
→ One frame will be buffered and transmitted after the other.

Devices connected to a switch are all in separate collision domains.


→ Devices can operate in full duplex.

When we say a switch can buffer frames, it means:

The switch can temporarily store data (frames) in its memory when it can’t send it immediately.

A buffered frame means the switch is holding the data for a moment, like putting it in a queue,
before sending it out.



Why is buffering important?

• It prevents data loss when there's temporary traffic congestion.


• Helps in handling speed mismatches between ports (e.g., 1 Gbps to 100 Mbps).
• Improves network performance by avoiding dropping frames.

Broadcast Domains:

• A Broadcast Domain is a logical division of a network in which all nodes can reach each
other by Layer 2 broadcast.
→ It is a group of devices that will receive a broadcast frame sent by any one of the other
devices.

• All devices connected to a switch are in the same broadcast domain; switches flood
broadcast frames.
→ VLANs can be used to divide up broadcast domains on a switch.

• Each router interface is a unique broadcast domain; routers do not forward Layer 2
broadcast messages.



Layer 2 and Layer 3 Addresses:

Layer 2 and Layer 3 addressing provide different functionality.

Layer 3 provides end to end addressing from the source host to the destination host.

• A logical address configured by a network admin.

• Deals with indirectly (and directly) connected devices.

Layer 2 provides hop to hop addressing within each network segment.

• The physical address of each NIC (Network Interface Card) assigned by the
manufacturer.

• Deals with directly connected devices.

The Layer 3 packet is destined for the end host, and Layer 2 addressing is used to pass the
packet to the next hop in the path to the end host.

ARP:

What is ARP (Address Resolution Protocol)?

ARP is the bridge between Layer 2 (Data Link Layer) and Layer 3 (Network Layer) in the OSI
model. Its main function is to map a known Layer 3 address (IP address) to an unknown Layer 2
address (MAC address).

How It Works:

• When a device (say, a PC) wants to send data to another device, it checks if it knows the
MAC address associated with the next hop IP address.

• If it doesn’t know the MAC address, it sends an ARP Request:


“Who has IP 192.168.1.1? Tell 192.168.1.10.”

• The device with that IP responds with its MAC address.

• The original sender stores this in its ARP cache.

Note: The ARP is used to find the MAC address of the next hop, not necessarily the
destination IP of the packet. This is important when a router is involved.



ARP Cache:

• Once learned the MAC to IP mapping is stored in the device’s ARP cache.

• This avoids repeated ARP requests, improving efficiency.

• Entries may time out after a few minutes and be refreshed when needed.
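On a Cisco IOS device, a quick way to inspect and clear the ARP cache is shown below (the addresses and the timeout value are only illustrations; the IOS default ARP timeout is much longer than a few minutes):

R1#show ip arp
R1#show ip arp 192.168.1.1
R1#clear arp-cache
R1(config)#interface GigabitEthernet0/0
R1(config-if)#arp timeout 300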

Real World Example:

Imagine you want to browse a website. Your PC needs to send the packet to your default
gateway (router). ARP helps your PC learn the MAC address of the gateway so it can send the
Ethernet frame to it—even though the IP address is for a web server somewhere on the internet.

ARP Process:

ARP Packet Capture:

Definition of Incomplete ARP:

An Incomplete ARP entry appears in the ARP table when a device sends an ARP Request to
resolve an IP address to a MAC address, but has not yet received an ARP Reply. The MAC
address field remains "Incomplete" until the reply is received.

Causes of Incomplete ARP:

1. Target Device is Down – The destination IP is not active or powered off.

2. Firewall or ACL Blocking ARP Replies – ARP replies are filtered or dropped.

3. Incorrect IP or Subnet – Misconfiguration causing unreachable IP addresses.

4. No Route or Misconfigured Interface – Routing issues or wrong interface setup.

5. Physical Connectivity Issues – Cables disconnected or ports down.

Proxy ARP:

Proxy ARP is a technique in which one device (usually a router) answers ARP requests on
behalf of another device. This allows devices on different IP networks to communicate as if they
were on the same local subnet, even when they are not.

The responding router “pretends” to be the destination by replying with its own MAC address,
effectively proxying for the real target.

How Proxy ARP Works:

1. Host A sends an ARP request for Host B's IP address (assuming they’re on the same
subnet).

2. Host B is on a different subnet, but connected via a router.

3. The router receives the ARP request and recognizes that the destination IP is reachable
through another interface.

4. The router replies with its own MAC address.

5. Host A sends the packet to the router, which then forwards it to Host B.
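Proxy ARP is enabled by default on Cisco IOS router interfaces. A small sketch of how to verify and disable it on an interface (the interface name is illustrative):

R1#show ip interface GigabitEthernet0/0 | include Proxy
R1(config)#interface GigabitEthernet0/0
R1(config-if)#no ip proxy-arp

Disabling it is common practice, since relying on Proxy ARP often hides an addressing or default-gateway misconfiguration.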

Proxy ARP – Mismatched Subnet Mask:

• In the below network, PC1 believes that PC1, PC2, PC3, and PC4 are all in the same
subnet (192.168.0.0/16).

• When PC1 tries to communicate with PC3 (or PC4), it will send an ARP Request
directly to the IP address of PC3 (not the default gateway, R1).

• The ARP Request will not reach PC3 because R1 will not forward the broadcast message.

• With Proxy ARP, R1 will think: → I received an ARP Request for 192.168.1.13 on my G0/0
interface, even though the 192.168.1.0/24 subnet is not connected to G0/0 and is in a
different subnet than the source.
→ 192.168.1.13 is not my IP address, but I do have a route for 192.168.1.0/24 in my routing
table.
→ So, I will reply to PC1’s ARP Request on behalf of 192.168.1.13, using my MAC address.
→ When PC1 sends packets destined for 192.168.1.13 to my MAC address, I will forward
them to PC3.

What is MTU?

• MTU (Maximum Transmission Unit) determines the maximum packet size that can be
sent/received by an interface.

• The default MTU is 1500 bytes.


→ Larger MTU values can be configured.

There are costs and benefits of larger MTU values:

• Increased network efficiency.


→ The data:header ratio is increased.

• Increased delay between packets.


→ Each individual packet takes more time to send.

• Increased impact of network errors.


→ Greater chance of a corrupt bit in each packet.
→ One corrupt bit requires retransmission of the entire packet.
→ Larger packets take more time to retransmit.

Special Frame Types:

• Jumbo frames:
→ Larger than the default 1500 bytes, typically defined as up to 9000 or 9216 bytes.

• Super Jumbo frames:


→ Frames even larger than Jumbo frames (rare).

• Baby Giant frames:


→ Larger than 1500 bytes, but smaller than Jumbo frames.
→ Typically defined as up to 1600 bytes.

Eth MTU:

• The Ethernet MTU specifies the maximum payload size of frames sent/received by an
interface.
→ This includes L2 and L3 interfaces, since both send Ethernet frames.
→ MTU is checked at both ingress (receiving) and egress (sending).

• Sometimes called Interface MTU.

• If a frame’s payload (L3 header + L4 header + Data) is larger than the interface’s MTU, it
will be dropped.
→ Layer 2 doesn’t offer any fragmentation capabilities.

• The default Ethernet MTU is 1500 bytes.


→ So, the maximum frame size will generally be 1518 bytes or 1522 bytes (+4 bytes for
802.1q tag).

IP MTU:

• The IP MTU specifies the maximum size of an IP packet before it needs fragmentation
(default 1500 bytes).
→ If the DF-bit is not set, packets larger than the IP MTU are fragmented.
→ If the DF-bit is set, packets larger than the IP MTU are dropped.

• IP MTU only applies to Layer 3 ports; Layer 2 ports (on a switch) are not L3-aware.

Header Field Descriptions:

• Identification:
Identifies the original packet the fragment is a part of.

• Flags:

o Bit 0 = Reserved (always 0)

o Bit 1 = DF-bit (Don’t Fragment)

o Bit 2 = MF-bit (More Fragments)

• Fragment Offset:
Identifies the position of the fragment within the original packet.

• The IP MTU cannot be greater than the Ethernet MTU.
→ If L3 tries to send a 1600 byte packet but the Ethernet MTU is 1500 bytes, the packet
would be dropped.

• If you increase the Ethernet MTU of an interface, the IP MTU is automatically increased
to match it.
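A small sketch of setting the Ethernet (interface) MTU and the IP MTU separately; support for the interface-level mtu command and the allowed range vary by platform, and the interface name is illustrative:

R1(config)#interface GigabitEthernet0/1
R1(config-if)#mtu 9216
R1(config-if)#ip mtu 1500
R1#show interfaces GigabitEthernet0/1 | include MTU
R1#show ip interface GigabitEthernet0/1 | include MTU

Here frames with up to 9216 bytes of payload are accepted on the interface, while IP packets above 1500 bytes are still fragmented (or dropped if the DF-bit is set).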

TCP Segment Maximum Size (TCP MSS):

• TCP MSS (Maximum Segment Size) defines the maximum amount of data (in bytes)
that a device can receive in a single TCP segment, excluding TCP and IP headers.

• It is usually negotiated during the TCP three-way handshake.

• The typical default MSS value is 1460 bytes, assuming a standard Ethernet MTU of 1500
bytes:

o 1500 bytes (Ethernet MTU)
o - 20 bytes (IP header)
o - 20 bytes (TCP header)
o = 1460 bytes (MSS)

• If packets exceed the MSS and Path MTU Discovery (PMTUD) fails or is disabled,
fragmentation or drops can occur.

• Adjusting MSS can help avoid fragmentation in networks with tunnels, VPNs, or lower
MTU links.
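As a hedged example, a common way to avoid fragmentation over a tunnel is to clamp the MSS of TCP connections passing through the interface; the values below are typical for GRE and are not mandated by this document:

R1(config)#interface Tunnel0
R1(config-if)#ip mtu 1400
R1(config-if)#ip tcp adjust-mss 1360

The router rewrites the MSS option in TCP SYN packets so that end hosts never send segments that would need fragmentation on the tunnel.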

Layer 2 Forwarding Decisions:

• Layer 2 forwarding decisions involve looking for an exact match in the MAC address
table.

Steps:

1. Frame arrives

2. Check MAC table for exact match

3. Forward the frame

Example:

• Dst MAC: d8bb.c1cc.ff76

• MAC table entry:

Vlan Mac Address Type Ports

---- ------------ ------- -----

1 001c.7faf.a165 DYNAMIC Fa0/1

1 d8bb.c1cc.ff76 DYNAMIC Fa0/2

1 d8bb.c1cc.f287 DYNAMIC Fa0/3

Frames are forwarded as-is:


→ No need to change the source/destination MAC, update IP TTL, re-calculate FCS, etc.

Layer 2 forwarding is implemented in hardware using ASICs and CAM memory:


→ No need to consult the CPU to make forwarding decisions
→ Much faster than software-based forwarding (using the CPU)

Definitions:

• ASIC (Application-Specific Integrated Circuit):


A chip customized for a particular use.

• CAM (Content-Addressable Memory):


A kind of memory used by switches for rapid lookups of MAC addresses.

Layer 3 Forwarding:

Layer 3 forwarding happens when a packet needs to travel between different IP subnets or
networks. It’s handled by routers or Layer 3 switches. The decision is based on the destination
IP address of the packet.

Step by Step Layer 3 Forwarding Process:

1. Packet Received:

o A Layer 3 device (like a router or Layer 3 switch) receives a packet on one of its
interfaces.

2. Destination IP Checked:

o The device checks the destination IP address in the packet’s Layer 3 (IP) header.

3. Routing Table Lookup:

o It checks the routing table (RIB) to find a match for the destination IP.

o Uses longest prefix match (LPM) to find the best route.

4. Next Hop Decision:

o Based on the routing table entry, it decides:

▪ What is the next-hop IP address

▪ Which interface to use for forwarding

5. ARP for Next-Hop (if needed):

o If the next-hop IP is on a directly connected subnet, the device uses ARP to find
the MAC address.

o If the MAC is already in the ARP cache, it uses that directly.

6. New Ethernet Frame Created:

o A new Layer 2 (Ethernet) frame is created:

▪ Source MAC = Router’s exit interface MAC

▪ Destination MAC = Next-hop’s MAC address

o The original IP header remains the same, but TTL is decremented by 1

7. Forwarding Out:

o The packet is sent out the selected interface toward the next-hop or destination.

Process Switching:

• Definition: In Process Switching, every incoming packet is handled by the router’s CPU.
The CPU makes the forwarding decision for each packet.

• How it works: Each packet is examined by the router's software, which looks up the
destination IP address in the routing table. Then, based on the lookup, it forwards the
packet.

• Performance: This method is less efficient because the CPU is involved in every packet's
forwarding decision, which can cause a significant performance hit, especially in high-
throughput environments.

• Use case: Process Switching is often used in older routers or in situations where the
router has a small number of packets to forward, such as in small networks or with
lower traffic loads.

CEF (Cisco Express Forwarding):

• Definition: CEF is a more advanced and efficient packet-forwarding mechanism used in
modern Cisco routers. It maintains a FIB (Forwarding Information Base) and an
adjacency table to make forwarding decisions.

• How it works:

o FIB: A table that stores precomputed routes, optimized for fast lookups.

o Adjacency Table: Stores information about Layer 2 forwarding decisions (like the
MAC address) to speed up the process.

o When a packet arrives, CEF looks up the destination in the FIB and forwards the
packet based on the result, all without involving the CPU for each packet.

• Performance: CEF is far more efficient than Process Switching because the forwarding
decision is made using pre-built tables, not by the CPU for every packet. This results in
faster packet processing, particularly under heavy traffic conditions.

• Use case: CEF is the default packet-forwarding method on most modern Cisco routers
and is used in high-speed, large-scale networks.
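A few standard IOS commands for checking CEF; the prefix used below is only an example:

R1#show ip cef
R1#show ip cef 10.0.0.0/24
R1#show adjacency detail
R1(config)#ip cef

The last command re-enables CEF globally if it has been disabled; on most modern platforms it is already on by default.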

RP Failover (no SSO, NSF, GR, NSR):

After RP Failover on PE:

• After minutes of downtime, traffic flow resumes.


• On routers and switches with separate and distributed control planes and data planes,
ideally a control plane failover would not affect the data plane.
• The purpose of SSO, NSF, GR and NSR is to make that a reality.

RP Redundancy Modes (pre SSO):

1. HSA (High System Availability)

• In normal operation, the Standby RP is down.

• If the Active RP fails, the Standby RP will reboot the router and become the new Active.

• This results in a cold restart.

2. RPR (Route Processor Redundancy):

• Standby RP is partially initialized during normal operation. IOS image is booted (cold
boot).

• Startup-Config is synced from the Active RP to the Standby RP.

• Configuration changes are not synced in real-time.

• L2 protocols (STP, LACP, PAgP, VTP) and L3 protocols (OSPF, EIGRP) are not
initialized on the Standby RP.

• On Active RP failure, Standby RP reinitializes as Active, reloads Startup-Config, reboots
line cards, and restarts the system.

• Failover time: 2–4 minutes.

3. RPR+

• Similar to RPR, the Standby RP is partially initialized, and the IOS image is booted.

• Startup-Config is synced from Active to Standby RP.

• Config changes are synced in real time.

• L2/L3 protocols are still not initialized on the Standby RP.

• If Active RP fails, no need to reload or reinitialize the Standby RP or line cards.

RP Failover (SSO, NSF, GR, NSR):

Stateful Switchover (SSO):

• SSO fully boots and initializes the Standby RP.

• During Standby RP boot-up, a bulk synchronization is performed from Active to
Standby RP.

• Once booted, incremental synchronization occurs for:

o Running configuration changes

o Interface states

o Other runtime updates

Checkpointing:

• Syncing configurations and protocol states from Active to Standby is called
checkpointing.

Protocol Behavior:

• Layer 2 protocols are initialized on the Standby RP, and their states are preserved during
failover.

o Includes: STP, LACP, PAgP, VTP, Port Security, etc.

• Layer 2 forwarding is maintained during failover:

o No traffic loss

o No interface flaps

• Layer 3 protocols are not synced:

o During failover, L3 adjacencies go down.

o Routes are cleared from the routing table.

o L3 forwarding is interrupted

Failover Time:

• Typically, just a few seconds, depending on the router or switch model.
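On platforms with dual RPs or supervisors, SSO is typically enabled under the redundancy configuration; a minimal sketch (exact defaults and verification output vary by platform):

Router(config)#redundancy
Router(config-red)#mode sso
Router#show redundancy states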

Non Stop Forwarding (NSF):

• NSF further enhances SSO by allowing line cards to forward packets at Layer 3 while the
Route Processor (RP) is failing over.

• NSF is enabled by default (on supported devices) if SSO is enabled.


→ NSF cannot be manually configured.

FIB Synchronization:

• The FIB (Forwarding Information Base) is initially transferred to the Standby RP during
initialization.
It is then actively updated when changes occur. → NSF = checkpointing of the FIB from
Active RP to Standby RP.

Routing Protocol Behavior:

• Routing protocol adjacencies are still interrupted during failover:

o The local device can continue forwarding packets.

o However, neighboring devices may drop routes learned from this device.

o As a result, Layer 3 traffic flow is interrupted.

NSF Characteristics:

• NSF is a local feature: it does not require cooperation from neighboring devices.

• This is a common misunderstanding, as many confuse NSF with GR (Graceful Restart),
which does involve neighbor interaction.

Graceful Restart (GR):

• GR allows peers of a device performing a Route Processor (RP) failover to maintain their
routes via that device, even if the routing protocol adjacency is lost.

o Peers will continue sending packets to the device, even though the adjacency is
down.

o The period during which neighbors keep forwarding traffic is called the grace
period.

GR + NSF:

• GR, in combination with NSF, helps maintain L3 forwarding during an RP failover.

• GR requires cooperation and communication between neighboring devices.

GR Device Roles:

• Devices can be:

o GR capable: able to perform a Graceful Restart during RP failover.

o GR aware: able to continue forwarding traffic while a neighbor restarts.

o A device can be both GR-capable and GR-aware.

Requirements:

• The failing device must be GR-capable.

• The neighboring devices must be GR-aware (or also GR-capable).

Non Stop Routing (NSR):

• NSR attempts to maintain neighbor adjacencies during an RP failover.

• Does not require cooperation from neighbors (unlike GR).


→ Neighbors aren’t aware that an RP failover is happening.

• In addition to checkpointing the FIB, routing protocol state information is also
checkpointed to the Standby RP.

To maintain all forwarding during an RP failover you need:

SSO + NSF + GR or NSR

• Enabling only one feature is not very useful.

• GR and NSR both aim to ensure neighbors continue forwarding traffic via this device
during RP failover.

o A device can’t use both at the same time for the same neighbor.

These features don’t make up for poor design — ideally there should be redundant routers,
not just a router with redundant RPs.

Virtual LAN Concepts:

Before understanding VLANs, you must first have a specific understanding of the definition of a
LAN. For example, from one perspective, a LAN includes all the user devices, servers, switches,
routers, cables, and wireless access points in one location.

A LAN includes all devices in the same broadcast domain.

A broadcast domain includes the set of all LAN-connected devices, so that when any of the
devices sends a broadcast frame, all the other devices get a copy of the frame. So, from one
perspective, you can think of a LAN and a broadcast domain as being basically the same thing.

Using only default settings, a switch considers all its interfaces to be in the same broadcast
domain. That is, when a broadcast frame enters one switch port, the switch forwards that
broadcast frame out all other ports. With that logic, to create two different LAN broadcast
domains, you had to buy two different Ethernet LAN switches.

By using two VLANs, a single switch can accomplish the same goal of creating two broadcast
domains. With VLANs, a switch can configure some interfaces into one broadcast domain and
some into another, creating multiple broadcast domains. These individual broadcast domains
created by the switch are called virtual LANs (VLANs).

For example, a broadcast sent by one host in a VLAN will be received and processed by all the
other hosts in the VLAN—but not by hosts in a different VLAN. Limiting the number of hosts
that receive a single broadcast frame reduces the number of hosts that waste effort processing
unneeded broadcasts. It also reduces security risks because fewer hosts see frames sent by any
one host.

The following list summarizes the most common reasons for choosing to create smaller
broadcast domains (VLANs):

To reduce CPU overhead on each device, improving host performance, by reducing the number
of devices that receive each broadcast frame

To reduce security risks by reducing the number of hosts that receive copies of frames that the
switches flood (broadcasts, multicasts, and unknown unicasts)

To improve security for hosts through the application of different security policies per VLAN.

To create more flexible designs that group users by department, or by groups that work
together, instead of by physical location

To solve problems more quickly, because the failure domain for many problems is the same set of
devices as those in the same broadcast domain

To reduce the workload for the Spanning Tree Protocol (STP) by limiting a VLAN to a single
access switch.

Key Definition:

Virtual LANs (VLANs) are used to segment a LAN into multiple virtual LANs (broadcast
domains).

• Without configuring VLANs, all hosts connected to the same LAN are in the same
broadcast domain. → All hosts are in the default VLAN: VLAN 1.

• In a very small network, this might be acceptable.


→ Anything beyond a SOHO network will probably use VLANs.

• Even if the network is segmented at Layer 3, if VLANs aren’t used to segment it at Layer
2, broadcast and unknown unicast frames will be flooded to all hosts.
→ Usually, 1 VLAN = 1 subnet.

• By configuring VLANs, the switch is split into multiple virtual switches.
→ The switch will not forward/flood a frame to interfaces in a different VLAN than the
one it was received on.

Stretched VLANs:

Stretched VLANs are VLANs that span multiple sites.

In the diagram below, VLAN 12, 34 and 56 are stretched across the three sites, connected via
trunks that allow VLANs 12,34 and 56.

This is generally not preferred in modern networks; we want to minimize the size of broadcast
domains.

Local VLANs:

Local VLANs are localized to a site; connections between sites are layer 3, so VLANs do not pass
between them.

It doesn’t matter if the VLAN numbers are the same or different at each site. They are locally
significant to each site.

It may be desirable to use the same VLAN numbers at each site for consistency:

End users = VLAN 12

Security Cameras = VLAN 34

Service = VLAN 56

Multiple Subnets per VLAN:

Usually, one VLAN = one subnet.

But it's also possible to assign multiple subnets to a single VLAN.

Use Case:

• When a subnet (e.g., 10.0.1.0/24) runs out of IP addresses, instead of creating a new
VLAN, you can:

o Add another subnet (e.g., 10.0.3.0/24) to the same VLAN.

How to Configure It:

• If using SVI (Switched Virtual Interface) for routing:

o Add a secondary IP address on the VLAN's interface.

SW1(config)# interface vlan10

SW1(config-if)# ip address 10.0.3.1 255.255.255.0 secondary

If using ROAS (Router on a Stick):

• Add a secondary IP address on the sub-interface for the VLAN.
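A minimal sketch of the ROAS variant, reusing the subnets from the example above (the sub-interface number and VLAN 10 are illustrative):

R1(config)#interface GigabitEthernet0/0.10
R1(config-subif)#encapsulation dot1q 10
R1(config-subif)#ip address 10.0.1.1 255.255.255.0
R1(config-subif)#ip address 10.0.3.1 255.255.255.0 secondary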

Important Routing Behavior:

• Although 10.0.1.0/24 and 10.0.3.0/24 are in the same VLAN (VLAN 10), they are different
IP subnets.

• For devices in different subnets to communicate, routing is required, even though they
are in the same Layer 2 VLAN.

o This is handled by the default gateway IPs configured on SW1's SVI (e.g., 10.0.1.1
and 10.0.3.1).

Shutting Down and Suspending VLANs:

1. Shutting Down a VLAN (Local Shutdown)

• Use the shutdown command on a VLAN to disable it on the current switch only.

• The switch will not forward traffic in this VLAN.

• This is a local shutdown — the VLAN remains active in the VTP domain, but is disabled
only on the local switch.

2. Suspending a VLAN (VTP Domain-wide)

• Use the state suspend command to disable the VLAN across the entire VTP domain.

• The VLAN will not be deleted, but no switch in the VTP domain will forward traffic in
this VLAN.
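A short sketch showing both options on VLAN 100 (the VLAN number is just an example):

SW1(config)#vlan 100
SW1(config-vlan)#shutdown
SW1(config-vlan)#state suspend
SW1(config-vlan)#state active

shutdown / no shutdown affects only the local switch, while state suspend / state active is propagated through VTP to the whole domain.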

Creating Multiswitch VLANs Using Trunking:

Configuring VLANs on a single switch requires only a little effort: you simply configure each
port to tell it the VLAN number to which the port belongs. With multiple switches, you have to
consider additional concepts about how to forward traffic between the switches.

When you are using VLANs in networks that have multiple interconnected switches, the
switches need to use VLAN trunking on the links between the switches. VLAN trunking causes
the switches to use a process called VLAN tagging, by which the sending switch adds another
header to the frame before sending it over the trunk. This extra trunking header includes a
VLAN identifier (VLAN ID) field so that the sending switch can associate the frame with a
particular VLAN ID, and the receiving switch can then know in what VLAN each frame belongs.

Note: Trunking refers to the process of allowing traffic from multiple VLANs to travel over
a single physical connection (link) between network devices, such as switches or routers. It
is a key concept in VLAN configuration that enables devices in the same VLAN across
different switches to communicate.

Scenario 1:

VLANs exist on multiple switches, but trunking is not used.

VLAN Tagging Concepts:

VLAN trunking creates one link between switches that supports as many VLANs as you need.
As a VLAN trunk, the switches treat the link as if it were a part of all the VLANs. At the same
time, the trunk keeps the VLAN traffic separate, so frames in VLAN 10 would not go to devices
in VLAN 20, and vice versa, because each frame is identified by VLAN number as it crosses the
trunk.

The use of trunking allows switches to forward frames from multiple VLANs over a single
physical connection by adding a small header to the Ethernet frame.

When SW2 receives the frame, it understands that the frame is in VLAN 10. SW2 then removes
the VLAN header, forwarding the original frame out its interfaces in VLAN 10 (Step 3).

The 802.1Q and ISL VLAN Trunking Protocols:

Cisco has supported two different trunking protocols over the years: Inter-Switch Link (ISL)
and IEEE 802.1Q.

While both ISL and 802.1Q tag each frame with the VLAN ID, the details differ. 802.1Q inserts
an extra 4-byte 802.1Q VLAN header into the original frame’s Ethernet header.
This 12-bit field supports a theoretical maximum of 2^12 (4096) VLANs, but in practice it
supports a maximum of 4094. (Both 802.1Q and ISL use 12 bits to tag the VLAN ID, with two
reserved values [0 and 4095].)

Cisco switches break the range of VLAN IDs (1–4094) into two ranges: the normal range and the
extended range. All switches can use normal-range VLANs with values from 1 to 1005. Only some
switches can use extended-range VLANs with VLAN IDs from 1006 to 4094.

The rules for which switches can use extended-range VLANs depend on the configuration of the
VLAN Trunking Protocol (VTP).

802.1Q also defines one special VLAN ID on each trunk as the native VLAN (defaulting to use
VLAN 1). By definition, 802.1Q simply does not add an 802.1Q header to frames in the native
VLAN. When the switch on the other side of the trunk receives a frame that does not have an
802.1Q header, the receiving switch knows that the frame is part of the native VLAN. Note that
because of this behavior, both switches must agree on which VLAN is the native VLAN.

Note: Encapsulation Dot1Q is used in VLANs to logically segment and isolate network traffic,
enhancing security, performance, and manageability.

Forwarding Data Between VLANs:

If you create a campus LAN that contains many VLANs, you typically still need all devices to be
able to send data to all other devices.

The Need for Routing Between VLANs:

VLANs (Virtual Local Area Networks) are used to segment a network into smaller, isolated
domains to improve performance, security, and manageability. However, VLANs are isolated by

design, meaning devices in different VLANs cannot communicate with each other unless routing
is enabled.

Routing Packets Between VLANs with a Router:

When including VLANs in a campus LAN design, the devices in a VLAN need to be in the same
subnet. Following the same design logic, devices in different VLANs need to be in different
subnets.

To forward packets between VLANs, the network must use a device that acts as a router.

These switches that also perform Layer 3 routing functions go by the name multilayer switch or
Layer 3 switch.

NOTE The term default VLAN refers to the default setting on the switchport access vlan vlan-id
command, and that default is VLAN ID 1. In other words, by default, each port is assigned to
access VLAN 1.

Creating VLANs and Assigning Access VLANs to an Interface:

What is Interface VLAN?

Interface VLAN (also called Switched Virtual Interface - SVI) is a virtual Layer 3 interface on a
switch that is used to give IP connectivity to a specific VLAN.

Switch(config)# interface vlan 10


Switch(config-if)# ip address 192.168.10.1 255.255.255.0
Switch(config-if)# no shutdown

This example begins with SW1 not knowing about VLAN 3. With the addition of the
switchport access vlan 3 interface subcommand, the switch realized that VLAN 3 did not exist,
and as noted in the shaded message in the example, the switch created VLAN 3, using a default
name (VLAN0003).
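For completeness, a small sketch of creating the VLAN explicitly and assigning an access port to it (the VLAN name and interface are illustrative):

SW1(config)#vlan 3
SW1(config-vlan)#name ENGINEERING
SW1(config-vlan)#exit
SW1(config)#interface GigabitEthernet1/0/5
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 3
SW1#show vlan brief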

VLAN Trunking Protocol:

VTP is a Cisco proprietary tool on Cisco switches that advertises each VLAN configured in one
switch (with the vlan number command) so that all the other switches in the campus learn
about that VLAN.

There are four roles in the VTP architecture:

• Server: The server switch is responsible for the creation, modification, and deletion of
VLANs within the VTP domain.
• Client: The client switch receives VTP advertisements and modifies the VLANs on that
switch. VLANs cannot be configured locally on a VTP client.
• Transparent: VTP transparent switches receive and forward VTP advertisements but do
not modify the local VLAN database. VLANs are configured only locally.
• Off: A switch does not participate in VTP advertisements and does not forward them out
of any ports either. VLANs are configured only locally.

VTP Communication:

VTP advertises updates by using a multicast address across the trunk links for advertising
updates to all the switches in the VTP domain. There are three main types of advertisements:

• Summary: This advertisement occurs every 300 seconds or when a VLAN is added,
removed, or changed. It includes the VTP version, domain, configuration revision
number, and time stamp.
• Subset: This advertisement occurs after a VLAN configuration change occurs. It contains
all the relevant information for the switches to make changes to the VLANs on them.
• Client requests: This advertisement is a request by a client to receive the more detailed
subset advertisement. Typically, this occurs when a switch with a lower revision number
joins the VTP domain and observes a summary advertisement with a higher revision than
it has stored locally.

VTP Configuration:

The following are the steps for configuring VTP:

• Step 1. Define the VTP version with the command vtp version {1 | 2 | 3}.
• Step 2. Define the VTP domain with the command vtp domain domain-name. Changing
the VTP domain resets the local switch’s configuration revision number to 0.

• Step 3. Define the VTP switch role with the command vtp mode { server | client |
transparent | none }.
• Step 4. (Optional) Secure the VTP domain with the command vtp password password.
(This step is optional but recommended because it helps prevent unauthorized switches
from joining the VTP domain.)
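Putting the steps together, a minimal sketch (the domain name, password, and mode are illustrative):

SW1(config)#vtp version 2
SW1(config)#vtp domain LAB
SW1(config)#vtp mode client
SW1(config)#vtp password Cisco123
SW1#show vtp status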

Commands:

• show vtp status


• show vtp status | i version run|Operating|VLANS|Revision
• show interfaces trunk
• show interfaces gi1/0/2 switchport | i Trunk

Dynamic Trunking Protocol:

DTP (Dynamic Trunking Protocol) is a Cisco proprietary protocol used to dynamically negotiate
and establish trunk links between two switches. It automates the process of configuring trunk
ports, allowing them to carry traffic for multiple VLANs over a single link, and supports VLAN
tagging protocols such as 802.1Q or ISL.

DTP advertises itself every 30 seconds to neighbors so that they are kept aware of its status.

DTP requires that the VTP domain match between the two switches.

There are three modes to use in setting a switch port to trunk:

• Trunk: This mode statically places the switch port as a trunk and advertises DTP
packets to the other end to establish a dynamic trunk. Place a switch port in this mode
with the command switchport mode trunk.
• Dynamic desirable: In this mode, the switch port acts as an access port, but it listens for
and advertises DTP packets to the other end to establish a dynamic trunk. If it is
successful in negotiation, the port becomes a trunk port. Place a switch port in this mode
with the command switchport mode dynamic desirable.
• Dynamic auto: In this mode, the switch port acts as an access port, but it listens for DTP
packets. It responds to DTP packets and, upon successful negotiation, the port becomes
a trunk port. Place a switch port in this mode with the command switchport mode
dynamic auto.

Internal VLANs:

• In some cases, Cisco switches create internal VLANs for their own use.

o Example: When you make a routed port with no switchport, an internal VLAN is
created.

• Internal VLANs are created from the extended VLAN range, starting from 1006 (by
default).

You can control this with: vlan internal allocation policy {ascending | descending}

o ascending → allocation starts from 1006

o descending → allocation starts from 4094

Internal VLANs will not appear in the output of show vlan, but you still cannot use them for
other purposes.

Note:
“If necessary, you can shut down the routed port assigned to the internal VLAN, which frees up
the internal VLAN, and then create the extended-range VLAN and re-enable the port, which
then uses another VLAN as its internal VLAN.”

Access Ports: Access ports (also called untagged ports) are switch ports that belong to a single
VLAN and send/receive untagged traffic.

• If a voice VLAN is configured, the port belongs to two VLANs:

o Access VLAN (untagged)

o Voice VLAN (tagged)

Best Practice:

• Ports connected to end hosts should usually be access ports.

• Exception: Servers running VMs – they should connect to trunk ports (each VM on the
server uses a different VLAN).

To configure an access port:

o switchport mode access

To specify the access VLAN (default is VLAN 1):

o switchport access vlan <vlan id>
o switchport access vlan name <vlan name>

To specify the voice VLAN:

o switchport voice vlan <vlan id>

Trunk Ports:

Trunk ports (also known as tagged ports) are ports which carry traffic in multiple VLANs.
→ VLAN tags (802.1Q) are used to indicate which VLAN traffic belongs to.

• The native VLAN is an exception: traffic in the native VLAN is sent untagged, and if
untagged frames are received on a trunk port, they are assumed to be in the native VLAN.
→ Generally, it is best practice to disable the native VLAN (assign an unused VLAN as
the native VLAN).

• The 802.1Q tag is placed between the Source and EtherType fields of a frame.
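A minimal sketch of a manually configured 802.1Q trunk following these practices (the interface, allowed VLAN list, and native VLAN are illustrative; the encapsulation command is only needed on switches that also support ISL):

SW1(config)#interface GigabitEthernet1/0/24
SW1(config-if)#switchport trunk encapsulation dot1q
SW1(config-if)#switchport mode trunk
SW1(config-if)#switchport trunk native vlan 999
SW1(config-if)#switchport trunk allowed vlan 10,20,30
SW1#show interfaces GigabitEthernet1/0/24 trunk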

Dynamic Trunking Protocol (DTP):

DTP (Dynamic Trunking Protocol) is a Cisco proprietary protocol that allows switches to
automatically determine:

• The Operational Mode (Trunk or Access)

• The Trunking Encapsulation (802.1Q or ISL) of their ports

Key Points:

• Trunking encapsulation (choosing between 802.1Q or ISL) can only be negotiated if the
interface mode is also being negotiated dynamically (with switchport mode dynamic
desirable or dynamic auto), not manually set.

o If the port mode is manually configured as trunk, then the encapsulation must
also be manually configured.

o If you use switchport trunk encapsulation negotiate, you cannot use switchport
mode trunk.

You must also let it negotiate whether it should be a trunk or access port.

switchport mode dynamic desirable

switchport trunk encapsulation negotiate

Do Not Use:

switchport mode trunk

switchport trunk encapsulation negotiate ← Conflict!

Note: Because in the second case, you already forced trunk mode, so the switch says:

You’ve already decided this port will be a trunk, so I’m not allowed to negotiate anything.

Why can’t you use switchport trunk encapsulation negotiate with switchport mode trunk?

Because:

• switchport mode trunk forces the port to trunk mode — there is no negotiation.

• But switchport trunk encapsulation negotiate requires negotiation of both encapsulation
and trunk mode.

So when you force the trunk mode (switchport mode trunk), encapsulation must also be set
explicitly: switchport trunk encapsulation dot1q

DTP Negotiation Results:

• auto + auto → no trunk forms (both ports remain access).

• desirable + auto → a trunk forms.

• desirable + desirable → a trunk forms.

• desirable + trunk → a trunk forms.
VLAN Trunking Protocol (VTP):

VTP (VLAN Trunking Protocol) is a Cisco proprietary protocol used to synchronize VLAN
information among switches within the same VTP domain.

What VTP Can Do:

• Add VLANs

• Delete VLANs

• Rename VLANs

• Does NOT assign access ports to VLANs → Only allows VLANs to be passed over trunk
links

VTP Domain:

• VTP works only if switches are in the same VTP domain.

• By default, the VTP domain is NULL (no domain).

o In this state, switches do not send VTP advertisements.

o To configure a domain:

Nexus Switches & VTP Domain:

By default, Cisco Nexus switches do not participate in VTP — they operate in VTP transparent
mode.

Default VTP Mode on Nexus:

• Transparent

• Meaning:

o They do not send or process VTP advertisements.

o VLANs must be configured manually.

o But they will forward VTP advertisements received on trunk links to other
switches.

VTP Modes:

There are three main VTP modes:

Server Mode (Default)

Full control of VLAN database

• Can create, delete, and modify VLANs.

• Sends VTP advertisements to other switches.

• Stores VLAN info in NVRAM (retains after reboot).

• Must be in the same VTP domain to sync with other switches.

Client Mode

Receives VLAN info, but can't make changes

• Cannot create, delete, or modify VLANs.

• Can receive and apply VLAN info from VTP Server.

• Sends VTP advertisements, but only what it learned from the server.

• Does not store VLANs in NVRAM (loses them after reboot unless learned again).

Transparent Mode

Does not participate in VTP synchronization

• VLAN changes are local only (doesn’t affect or sync with others).

• Forwards VTP advertisements it receives to other switches.

• Stores VLANs in NVRAM.

• Can create/delete VLANs, but doesn't advertise them.

(Bonus) Off Mode (Available in VTPv3 only)

Completely disables VTP

• Does not send, receive, or forward VTP messages.

• Treated like fully isolated VLAN configuration.

• Useful for secure or tightly controlled environments.

EtherChannel Bundle:

Ethernet network speeds are based on powers of 10 (10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps, 100
Gbps). When a link between switches becomes saturated, how can more bandwidth be added
to that link to prevent packet loss?

If both switches have available ports with faster throughput than the current link (for example,
10 Gbps versus 1 Gbps), then changing the link to higher-speed interfaces solves the bandwidth
contingency problem. However, in most cases, this is not feasible.

Ideally, it would be nice to plug in a second cable and double the bandwidth between the
switches. However, Spanning Tree Protocol (STP) will place one of the ports into a blocking
state to prevent forwarding loops,

Fortunately, the physical links can be aggregated into a logical link called an EtherChannel
bundle. The industry-based term for an EtherChannel bundle is EtherChannel (for short), or
port channel, which is defined in the IEEE 802.3ad link aggregation specification. The physical
interfaces that are used to assemble the logical EtherChannel are called member interfaces. STP
operates on a logical link and not on a physical link. The logical link would then have the
bandwidth of any active member interfaces, and it would be load balanced across all the links.
EtherChannels can be used for either Layer 2 (access or trunk) or Layer 3 (routed) forwarding.

Definition: Etherchannel groups multiple physical interfaces into a single logical interface.

Spanning Tree sees the EtherChannel as a single interface, so it does not block any ports. We
now get the full bandwidth.

EtherChannel Load Balancing and Redundancy:

Traffic is load balanced across all the links in the EtherChannel. If an interface goes down its
traffic will fail over to the remaining links.

A primary advantage of using port channels is a reduction in topology changes when a member
link line protocol goes up or down. In a traditional model, a link status change may trigger a
Layer 2 STP tree calculation or a Layer 3 route calculation. A member link failure in an
EtherChannel does not impact those processes, as long as one active member still remains up.

EtherChannel Protocols:

Two common link aggregation protocols are Link Aggregation Control Protocol (LACP) and
Port Aggregation Protocol (PAgP). PAgP is Cisco proprietary and was developed first, and then
LACP was created as an open industry standard. All the member links must participate in the
same protocol on the local and remote switches.

PAgP Port Modes:

PAgP advertises messages with the multicast MAC address 0100:0CCC:CCCC and the protocol
code 0x0104. PAgP can operate in two modes:

Auto: In this PAgP mode, the interface does not initiate an EtherChannel to be established and
does not transmit PAgP packets out of it. If a PAgP packet is received from the remote switch,
this interface responds and then can establish a PAgP adjacency. If both devices are PAgP auto, a
PAgP adjacency does not form.

Desirable: In this PAgP mode, an interface tries to establish an EtherChannel and transmit PAgP
packets out of it. Desirable PAgP interfaces can establish a PAgP adjacency only if the remote
interface is configured to auto or desirable.

LACP Port Modes:

LACP advertises messages with the multicast MAC address 0180:C200:0002. LACP can operate
in two modes:

Passive: In this LACP mode, an interface does not initiate an EtherChannel to be established
and does not transmit LACP packets out of it. If an LACP packet is received from the remote
switch, this interface responds and then can establish an LACP adjacency. If both devices are
LACP passive, an LACP adjacency does not form.

Active: In this LACP mode, an interface tries to establish an EtherChannel and transmit LACP
packets out of it. Active LACP interfaces can establish an LACP adjacency only if the remote
interface is configured to active or passive.

EtherChannel Parameters:

• The switches on both sides must have a matching configuration.

• The member interfaces must have the same settings on both sides:

o Speed and duplex

o Access or Trunk mode

o Native VLAN and allowed VLANs on trunks

o Access VLAN on access ports

EtherChannel Configuration:

• Static EtherChannel: A static EtherChannel is configured with the interface parameter
command channel-group etherchannel-id mode on.

• LACP EtherChannel: An LACP EtherChannel is configured with the interface parameter
command channel-group etherchannel-id mode {active | passive}.

• PAgP EtherChannel: A PAgP EtherChannel is configured with the interface parameter
command channel-group etherchannel-id mode {auto | desirable} [non-silent].

Note: By default, PAgP ports operate in silent mode, which allows a port to establish an
EtherChannel with a device that is not PAgP capable and rarely sends packets.

Sample Port-Channel Configuration:

Layer 3 Etherchannel:

Switch1(config)#interface range GigabitEthernet1/0/1 - 2
Switch1(config-if-range)#no switchport
Switch1(config-if-range)#channel-group 1 mode {active | auto | desirable | on | passive}
Switch1(config)#interface port-channel 1
Switch1(config-if)#ip address 192.168.0.1 255.255.255.252
Switch1(config-if)#no shutdown

LACP Configuration:

• LACP interfaces can be set as either Active or Passive.

• If SW1’s interfaces are set as Active and SW2’s as Passive, the port channel will come up.

• If both sides are Passive, the port channel will not come up.

• If both sides are Active, the port channel will come up.

• It is recommended to configure both sides as Active so you don’t have to think about
which side is which.

SW1(config)#interface range f0/23 - 24
SW1(config-if-range)#channel-group 1 mode active

This creates interface port-channel 1.

SW1(config)#interface port-channel 1
SW1(config-if)#switchport mode trunk

Configure the interface settings on the port channel.

Configure matching settings on the other switch on the other side of the links:

SW2(config)#interface range f0/23 - 24


SW2(config-if-range)#channel-group 1 mode active
SW2(config)#interface port-channel 1
SW2(config-if)#switchport mode trunk

PAgP Configuration:

• PAgP interfaces can be set as either Desirable or Auto


• If one side is Desirable and the other Auto, the port channel will come up
• If both sides are Auto, the port channel will not come up
• If both sides are Desirable, the port channel will come up
• If you configure both sides as Desirable you don’t have to think about which side is
which

SW1(config)#interface range f0/23 - 24


SW1(config-if-range)#channel-group 1 mode desirable
SW1(config)#interface port-channel 1
SW1(config-if)#switchport mode trunk

Configure matching settings on the switch on the other side of the links.

Static EtherChannel Configuration (mode on):

SW1(config)#interface range f0/23 - 24

SW1(config-if-range)#channel-group 1 mode on
SW1(config)#interface port-channel 1
SW1(config-if)#switchport mode trunk

Configure matching settings on the switch on the other side of the links.

LACP Basic:

Link Aggregation Control Protocol (LACP) is an industry-standard EtherChannel negotiation
protocol.

o In L2 EtherChannel, the port-channel (logical port) gets its MAC address from
the first physical port in the channel that comes up.

o In L3 EtherChannel, a unique MAC is allocated by the switch as soon as the
port-channel comes up.

Layer 3 EtherChannel:

• In Layer 3, the Port-Channel interface is treated like a routed interface (i.e., it has an IP
address).

• The individual physical interfaces bundled in the Port-Channel are no longer L2
switchports; they’re routed members.

MAC Address Allocation:

• Since it's Layer 3, the port-channel interface (like any routed interface) needs a MAC
address to send/receive IP packets over Ethernet.

So what happens?

• The router or switch automatically assigns a MAC address to the Port-Channel (just like
it does for any routed interface).

• This MAC is not inherited from a physical interface (like in Layer 2 EtherChannel).

• Instead, the device allocates a virtual MAC address from its pool or generates one
internally when the interface is created.

Why is it different from L2 EtherChannel?

• L2 Port-Channel: Gets MAC from the first active physical port (because it behaves like a
switchport).

• L3 Port-Channel: Behaves like a routed port, so the device assigns a new MAC (like it
does for any routed interface).

LACP was originally defined in 802.3ad, but the most recent standard is in 802.1AX-2020.

• You can get 802.1AX-2020 (Link Aggregation) for free from IEEE Xplore (LACP is
section 6.4).

channel-group number mode {active | passive}

• A port in active mode actively sends LACP messages to negotiate an EtherChannel with
its neighbor.

• A port in passive mode only sends LACP messages after receiving messages from a
neighbor in active mode.
Messages (LACPDUs) are rapidly exchanged during negotiation and sent every 30 seconds to
maintain the EtherChannel.

• Like in PAgP, this behavior can be modified (more on that later).

LACP messages are sent to multicast MAC 0180.c200.0002 (the “Ethernet Slow Protocols”
address).

• Ethernet Slow Protocols use an EtherType of 0x8809 (a subtype of 0x01 is used to indicate LACP).

• LACP messages are always sent untagged on trunk and access ports.

How LACP Works on a Router (L3 EtherChannel):

1. Interface Bundling Using LACP

• On the router, you configure multiple physical interfaces (e.g., GigabitEthernet0/0, GigabitEthernet0/1, etc.) to be part of a Port-Channel (logical interface).

• These interfaces are grouped using:

channel-group <number> mode active/passive

active → Sends LACP packets.

passive → Waits to receive LACP packets.

2. LACP Negotiation Begins

• LACP uses LACPDUs (Link Aggregation Control Protocol Data Units) to communicate
between the two routers (or router to switch).

• It negotiates and ensures:

o Interfaces are compatible (same speed, duplex, etc.).

o Interfaces agree to form an EtherChannel.

3. Forming the Port-Channel

• After successful negotiation:

o The router bundles the links into a single logical interface called Port-
Channel<number>.

o If this is L3, you assign an IP address on the Port-Channel (not on individual interfaces).

4. MAC Assignment for L3 Port-Channel

• A unique MAC address is assigned by the router to the Port-Channel interface (used for
ARP, routing).

5. Traffic Load-Balancing

• Traffic is load-balanced across the member links based on a hash algorithm (e.g., source-
destination IP, Layer 4 ports, etc.).

• All traffic flows through the Port-Channel interface logically.
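
Putting steps 1–5 together, a minimal configuration sketch for a router-to-router L3 EtherChannel might look like the following (the interface names, channel-group number, and addressing are assumptions for illustration):

Router1(config)#interface range GigabitEthernet0/0 - 1
Router1(config-if-range)#channel-group 1 mode active
Router1(config)#interface port-channel 1
Router1(config-if)#ip address 10.0.0.1 255.255.255.252
Router1(config-if)#no shutdown

The neighboring device would mirror this configuration with mode active (or passive) and the other address in the same /30 subnet.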

3. LACP States & Negotiation

• Port roles:

o Active ports send LACPDUs.

o Passive ports only respond.

• Logical bundling happens when both sides exchange and accept LACP data units.

4. System & Port Identifiers

• System ID = System MAC + system-priority.

• Port ID = Port number + priority.

• LACP compares these to choose which links go into the same bundle.

• Lower priority/MAC wins during bundling; misconfig (mismatch) leads to STP loops or
misbehavior.

5. LACP PDUs

• Encapsulated in Ethernet frames with:

o Multicast MAC 0180.c200.0002

o EtherType 0x8809

o LACP subtype 0x01

• Contain key info: system ID, port ID, port state flags, key, timeout.

6. LACP Modes

• Fast (short timeout) – PDUs sent every 1 sec.

• Slow (default) – PDUs every 30 sec.

• Use lacp rate fast under interface to tweak PDU frequency (useful for faster failure
detection).
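
For example, the fast rate could be requested on the member ports like this (a sketch; the interface range is an assumption, and the command is only available on platforms that support it):

SW1(config)#interface range g1/0/1 - 2
SW1(config-if-range)#lacp rate fast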

7. Load Balancing

• Hash-based, using fields like:

o MAC

o IP

o TCP/UDP ports

• Algorithm varies by platform (Cisco default is usually src-dst mac+ip).
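
As a sketch, the hash method can usually be changed globally and then verified (the available keywords vary by platform):

SW1(config)#port-channel load-balance src-dst-ip
SW1#show etherchannel load-balance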

8. L2 vs L3 EtherChannel

• L2: Port-channel inherits MAC of first physical interface that joins.

• L3: The switch/router assigns a unique MAC to the port-channel itself (not from
physical links).

• IP is configured directly on the port-channel. Useful for routing scenarios.

9. Troubleshooting Tips

• show etherchannel summary

• show interfaces port-channel 1

• show lacp neighbor

• Check for mismatched:

o speed/duplex

o channel-group numbers

o LACP modes

o native VLANs (on trunks)

NIC Teaming:

NIC Teaming combines multiple physical network cards into a single logical interface.

Terminology

EtherChannel is also known as:

• A Port Channel
• LAG Link Aggregation
• A link bundle

NIC Teaming is also known as:

• Bonding
• NIC balancing
• Link aggregation

Spanning Tree Protocol Fundamentals:

Spanning Tree Protocol (STP) enables switches to become aware of other switches through the
advertisement and receipt of bridge protocol data units (BPDUs).

STP builds a Layer 2 loop-free topology in an environment by temporarily blocking traffic on redundant ports. STP operates by selecting a specific switch as the master switch and running a tree-based algorithm to identify which redundant ports should not forward traffic.

STP has multiple iterations:

• 802.1D, which is the original specification


• Per-VLAN Spanning Tree (PVST)
• Per-VLAN Spanning Tree Plus (PVST+)
• 802.1W Rapid Spanning Tree Protocol (RSTP)
• 802.1S Multiple Spanning Tree Protocol (MST)

Catalyst switches now operate in PVST+, RSTP, and MST modes. All three of these modes are
backward compatible with 802.1D.
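
The mode is chosen globally. For example, a minimal sketch of the three available keywords (only one mode is active at a time):

SW1(config)#spanning-tree mode pvst
SW1(config)#spanning-tree mode rapid-pvst
SW1(config)#spanning-tree mode mst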

IEEE 802.1D STP:

The original version of STP comes from the IEEE 802.1D standards and provides support for
ensuring a loop-free topology for one VLAN. This topic is vital to understand as a foundation for
Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MST).

802.1D Port States:

In the 802.1D STP protocol, every port transitions through the following states:

Disabled: The port is in an administratively off position (that is, shut down).

Blocking: The switch port is enabled, but the port is not forwarding any traffic to ensure that a
loop is not created. The switch does not modify the MAC address table. It can only receive
BPDUs from other switches.

Listening: The switch port has transitioned from a blocking state and can now send or receive
BPDUs. It cannot forward any other network traffic. The duration of the state correlates to the
STP forwarding time. The next port state is learning.

Learning: The switch port can now modify the MAC address table with any network traffic that
it receives. The switch still does not forward any other network traffic besides BPDUs. The
duration of the state correlates to the STP forwarding time. The next port state is forwarding.

Forwarding: The switch port can forward all network traffic and can update the MAC address
table as expected. This is the final state for a switch port to forward network traffic.

Broken: The switch has detected a configuration or an operational problem on a port that can
have major effects. The port discards packets as long as the problem continues to exist.

NOTE: The entire 802.1D STP initialization time takes about 30 seconds for a port to enter
the forwarding state using default timers.

802.1D Port Types:

The 802.1D STP standard defines the following three port types:

Root port (RP): A network port that connects to the root bridge or an upstream switch in the
spanning-tree topology. There should be only one root port per VLAN on a switch.

Designated port (DP): A network port that receives and forwards BPDU frames to other
switches. Designated ports provide connectivity to downstream devices and switches. There
should be only one active designated port on a link.

Blocking port: A network port that is not forwarding traffic because of STP calculations.

STP Key Terminology:

Several key terms are related to STP

Root Bridge: The root bridge is the most important switch in the Layer 2 topology. All ports are
in a forwarding state. This switch is considered the top of the spanning tree for all path
calculations by other switches. All ports on the root bridge are categorized as designated ports.

Bridge protocol data unit (BPDU): This network packet is used for network switches to identify a hierarchy and notify of changes in the topology. A BPDU uses the destination MAC address 01:80:c2:00:00:00. There are two types of BPDUs:

• Configuration BPDU: This type of BPDU is used to identify the Root Bridge, root ports,
designated ports, and blocking ports. The configuration BPDU consists of the following
fields: STP type, root path cost, root bridge identifier, local bridge identifier, max age,
hello time, and forward delay.
• Topology change notification (TCN) BPDU: This type of BPDU is used to communicate
changes in the Layer 2 topology to other switches. This is explained in greater detail later
in this document.

Root path cost: This is the combined cost for a specific path toward the root switch.

System priority: This 4-bit value indicates the preference for a switch to be root bridge. The
default value is 32,768.

System ID extension: This 12-bit value indicates the VLAN that the BPDU correlates to. The
system priority and system ID extension are combined as part of the switch’s identification of
the root bridge.

Root bridge identifier: This is a combination of the root bridge system MAC address, system ID
extension, and system priority of the root bridge.

Local bridge identifier: This is a combination of the local switch’s bridge system MAC address,
system ID extension, and system priority of the local bridge.

Max age: This is the maximum length of time that a bridge port retains BPDU information
before discarding it. The default value is 20 seconds, but the value can be configured with the command
spanning-tree vlan vlan-id max-age maxage. If a switch loses contact with the BPDU’s source, it
assumes that the BPDU information is still valid for the duration of the Max Age timer.

Hello time: This is the time that a BPDU is advertised out of a port. The default value is 2
seconds, but the value can be configured to 1 to 10 seconds with the command spanning-tree
vlan vlan-id hello-time hello-time.

Forward delay: This is the amount of time that a port stays in a listening and learning state. The
default value is 15 seconds, but the value can be changed to a value of 15 to 30 seconds with the
command spanning-tree vlan vlan-id forward-time forward-time.
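
As an illustrative sketch, the three timers could be set per VLAN as follows (VLAN 1 and the values shown are simply the defaults; timers are normally changed only on the root bridge, if at all):

SW1(config)#spanning-tree vlan 1 max-age 20
SW1(config)#spanning-tree vlan 1 hello-time 2
SW1(config)#spanning-tree vlan 1 forward-time 15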

Spanning Tree Path Cost:

The interface STP cost is an essential component for root path calculation because the root path
is found based on the cumulative interface STP cost to reach the root bridge. The interface STP
cost was originally stored as a 16-bit value with a reference value of 20 Gbps.

As switches have developed with higher-speed interfaces, a 20 Gbps reference might not be enough to differentiate faster links.
Another method, called long mode, uses a 32-bit value and uses a reference speed of 20 Tbps.
The original method, known as short mode, is the default mode.

Root Bridge Election:

The first step with STP is to identify the root bridge. As a switch initializes, it assumes that it is
the root bridge and uses the local bridge identifier as the root bridge identifier. It then listens to
its neighbor’s configuration BPDU and does the following:

• If the neighbor’s configuration BPDU is inferior to its own BPDU, the switch ignores that
BPDU.
• If the neighbor’s configuration BPDU is preferred to its own BPDU, the switch updates
its BPDUs to include the new root bridge identifier along with a new root path cost that
correlates to the total path cost to reach the new root bridge. This process continues
until all switches in a topology have identified the root bridge switch.

STP deems a switch more preferable if the priority in the bridge identifier is lower than the
priority of the other switch’s configuration BPDUs. If the priority is the same, then the switch
prefers the BPDU with the lower system MAC.

show spanning-tree root (Verifying the STP Root Bridge)

The advertised root path cost is always the value calculated on the local switch. As the BPDU is
received, the local root path cost is the advertised root path cost plus the local interface port
cost. The root path cost is always zero on the root bridge.

Locating Root Ports:

After the switches have identified the root bridge, they must determine their root port (RP). The
root bridge continues to advertise configuration BPDUs out all of its ports. The switch compares
the BPDU information to identify the RP. The RP is selected using the following logic (where
the next criterion is used in the event of a tie):

1. The interface associated to the lowest path cost is more preferred.

2. The interface associated to the lowest system priority of the advertising switch is preferred
next.

3. The interface associated to the lowest system MAC address of the advertising switch is
preferred next.

4. When multiple links are associated to the same switch, the lowest port priority from the
advertising switch is preferred.

5. When multiple links are associated to the same switch, the lower port number from the
advertising switch is preferred.

The root bridge can be identified for a specific VLAN through the use of the command show
spanning-tree root and examination of the CDP or LLDP neighbor information to identify the
host name of the RP switch. The process can be repeated until the root bridge is located.

Locating Blocked Designated Switch Ports

Now that the root bridge and RPs have been identified, all other ports are considered designated
ports. However, if two non-root switches are connected to each other on their designated ports,
one of those switch ports must be set to a blocking state to prevent a forwarding loop. In our
sample topology, this would apply to the following links:
SW2 Gi1/0/3 ↔ SW3 Gi1/0/2
SW4 Gi1/0/5 ↔ SW5 Gi1/0/4
SW4 Gi1/0/6 ↔ SW5 Gi1/0/5

The logic to calculate which ports should be blocked between two non-root switches is as
follows:

1. The interface is a designated port and must not be considered an RP.

2. The switch with the lower path cost to the root bridge forwards packets, and the one with
the higher path cost blocks. If they tie, they move on to the next step.

3. The system priority of the local switch is compared to the system priority of the remote
switch. The local port is moved to a blocking state if the remote system priority is lower than
that of the local switch. If they tie, they move on to the next step.

4. The system MAC address of the local switch is compared to the system MAC address of the remote
switch. The local designated port is moved to a blocking state if the remote system MAC
address is lower than that of the local switch. If the links are connected to the same switch, they
move on to the next step.

All three links (SW2 Gi1/0/3 ↔ SW3 Gi1/0/2, SW4 Gi1/0/5 ↔ SW5 Gi1/0/4, and SW4 Gi1/0/6
↔ SW5 Gi1/0/5) would use step 4 of the process just listed to identify which port moves to a
blocking state. SW3’s Gi1/0/2, SW5’s Gi1/0/4, and SW5’s Gi1/0/5 ports would all transition to a
blocking state because the MAC addresses are lower for SW2 and SW4.

show spanning-tree vlan 1

STP Topology Changes:

In a stable Layer 2 topology, configuration BPDUs always flow from the root bridge toward the
edge switches. However, changes in the topology (for example, switch failure, link failure, or
links becoming active) have an impact on all the switches in the Layer 2 topology.

The switch that detects a link status change sends a topology change notification (TCN) BPDU
toward the root bridge, out its RP. If an upstream switch receives the TCN, it sends out an
acknowledgment and forwards the TCN out its RP to the root bridge.

Upon receipt of the TCN, the root bridge creates a new configuration BPDU with the Topology
Change flag set, which is then flooded to all the switches. When a switch receives a
configuration BPDU with the Topology Change flag set, it changes its MAC address aging
timer to the forward delay timer (with a default of 15 seconds). This flushes out MAC
addresses for devices that have not communicated in that 15-second window but maintains
MAC addresses for devices that are actively communicating.

What Spanning Tree Does:

STP/RSTP prevents loops by placing each switch port in either a forwarding state or a blocking
state. Interfaces in the forwarding state act as normal, forwarding and receiving frames.

However, interfaces in a blocking state do not process any frames except STP/RSTP messages
(and some other overhead messages). Interfaces that block do not forward user frames, do not
learn MAC addresses of received frames, and do not process received user frames.
NOTE: The term STP convergence refers to the process by which the switches collectively
realize that something has changed in the LAN topology and determine whether they need
to change which ports block and which ports forward.

That completes the description of what STP/RSTP does, placing each port into either a
forwarding or blocking state. The more interesting question, and the one that takes a lot more
work to understand, is how and why STP/RSTP makes its choices. How does STP/RSTP
manage to make switches block or forward on each interface? And how does it converge to
change state from blocking to forwarding to take advantage of redundant links in response to
network outages?

How Spanning Tree Works:

The STP/RSTP algorithm creates a spanning tree of interfaces that forward frames. The tree
structure of forwarding interfaces creates a single path to and from each Ethernet link, just like
you can trace a single path in a living, growing tree from the base of the tree to each leaf.

The process used by STP, sometimes called the spanning-tree algorithm (STA), chooses the
interfaces that should be placed into a forwarding state. For any interfaces not chosen to be in a
forwarding state, STP/RSTP places the interfaces in blocking state. In other words, STP/RSTP
simply picks which interfaces should forward, and any interfaces left over go to a blocking state.

STP/RSTP uses three criteria to choose whether to put an interface in forwarding state:

STP/RSTP elects a root switch. STP puts all working interfaces on the root switch in forwarding
state.

Each nonroot switch considers one of its ports to have the least administrative cost between
itself and the root switch. The cost is called that switch’s root cost. STP/RSTP places its port
that is part of the least root cost path, called that switch’s root port (RP), in forwarding state.

Many switches can attach to the same Ethernet segment, but due to the fact that links connect
two devices, a link would have at most two switches. With two switches on a link, the switch
with the lowest root cost, as compared with the other switches attached to the same link, is
placed in forwarding state. That switch is the designated switch, and that switch’s interface,
attached to that segment, is called the designated port (DP).

NOTE The real reason the root switches place all working interfaces in a forwarding state (at
step 1 in the list) is that all its interfaces on the root switch will become DPs. However, it is
easier to just remember that all the root switches’ working interfaces will forward frames.

NOTE STP/RSTP only considers working interfaces (those in a connected state). Failed
interfaces (for example, interfaces with no cable installed) or administratively shutdown
interfaces are instead placed into an STP/RSTP disabled state. So, this section uses the term
working ports to refer to interfaces that could forward frames if STP/RSTP placed the interface
into a forwarding state.

NOTE STP and RSTP do differ slightly in the use of the names of some states like blocking and
disabled, with RSTP using the status term discarding.

Rapid Spanning Tree Protocol (RSTP):

It is defined in IEEE 802.1w and is an enhanced version of the original Spanning Tree Protocol (STP).
RSTP improves upon STP by providing faster convergence in switched Ethernet networks,
enabling faster recovery from changes in the network topology (e.g., link failures or port state
changes).

Cisco created Per-VLAN Spanning Tree (PVST) and Per-VLAN Spanning Tree Plus (PVST+) to
allow more flexibility.

RSTP (802.1W) Port States:

RSTP reduces the number of port states to three:

Discarding: The switch port is enabled, but the port is not forwarding any traffic to ensure that a
loop is not created. This state combines the traditional STP states disabled, blocking, and
listening.

Learning: The switch port modifies the MAC address table with any network traffic it receives.
The switch still does not forward any other network traffic besides BPDUs.

Forwarding: The switch port forwards all network traffic and updates the MAC address table as
expected. This is the final state for a switch port to forward network traffic.

NOTE: A switch tries to establish an RSTP handshake with the device connected to the other
end of the cable. If a handshake does not occur, the other device is assumed to be non-RSTP
compatible, and the port defaults to regular 802.1D behavior. This means that host devices such
as computers, printers, and so on still encounter a significant transmission delay (around 30
seconds) after the network link is established.

RSTP (802.1W) Port Roles:

RSTP defines the following port roles:

Root port (RP): A network port that connects to the root switch or an upstream switch in the
spanning-tree topology. There should be only one root port per VLAN on a switch.

Designated port (DP): A network port that receives and forwards frames to other switches.
Designated ports provide connectivity to downstream devices and switches. There should be
only one active designated port on a link.

Alternate port: A network port that provides alternate connectivity toward the root switch
through a different switch.

Backup port: A network port that provides link redundancy toward the current root switch.
The backup port cannot guarantee connectivity to the root bridge in the event that the upstream
switch fails. A backup port exists only when multiple links connect between the same switches.

RSTP (802.1W) Port Types:

RSTP defines three types of ports that are used for building the STP topology:

Edge port: A port at the edge of the network where hosts connect to the Layer 2 topology with
one interface and cannot form a loop. These ports directly correlate to ports that have the STP
portfast feature enabled.

Root port: A port that has the best path cost toward the root bridge. There can be only one root
port on a switch.

Point-to-point port: Any port that connects to another RSTP switch with full duplex. Full-
duplex links do not permit more than two devices on a network segment, so determining
whether a link is full duplex is the fastest way to check the feasibility of being connected to a
switch.
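
The link type is normally derived from the duplex setting, but it can be overridden per interface; for example (the interface is an assumption):

SW1(config)#interface g1/0/7
SW1(config-if)#spanning-tree link-type point-to-point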

STP Topology Tuning:

A properly designed network strategically places the root bridge on a specific switch and
modifies which ports should be designated ports (that is, forwarding state) and which ports
should be alternate ports (that is, discarding/blocking state).

Root Bridge Placement:

Ideally the root bridge is placed on a core switch, and a secondary root bridge is designated to
minimize changes to the overall spanning tree. Root bridge placement is accomplished by
lowering the system priority on the root bridge to the lowest value possible, raising the
secondary root bridge to a value slightly higher than that of the root bridge, and (ideally)
increasing the system priority on all other switches. This ensures consistent placement of the
root bridge. The priority is set with either of the following commands:

• spanning-tree vlan vlan-id priority priority: The priority is a value between 0 and 61,440,
in increments of 4,096.
• spanning-tree vlan vlan-id root {primary | secondary} [diameter diameter]: This
command executes a script that modifies certain values. The primary keyword sets the
priority to 24,576, and the secondary keyword sets the priority to 28,672.
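
For example, a sketch for VLAN 1 with SW1 as the intended root and SW2 as the secondary (the switch names and values are illustrative):

SW1(config)#spanning-tree vlan 1 priority 4096
SW2(config)#spanning-tree vlan 1 priority 8192

or, using the macro:

SW1(config)#spanning-tree vlan 1 root primary
SW2(config)#spanning-tree vlan 1 root secondary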

Additional STP Protection Mechanisms:

Network packets do not decrement the time-to-live portion of the header as a packet is
forwarded in a Layer 2 topology. A network forwarding loop occurs when the logical topology
allows for multiple active paths between two devices. Broadcast and multicast traffic wreak
havoc as they are forwarded out of every switch port and continue the forwarding loop.

High CPU consumption and low free memory space are common symptoms of a Layer 2
forwarding loop. In Layer 2 forwarding loops, in addition to constantly consuming switch
bandwidth, the CPU spikes. Because the packet is received on a different interface, the switch
must move the media access control (MAC) address from one interface to the next.

The network throughput is impacted drastically; users are likely to notice a slowdown on their
network applications, and the switches might crash due to exhausted CPU and memory
resources.

The following are some common scenarios for Layer 2 forwarding loops:

• STP disabled on a switch


• A misconfigured load balancer that transmits traffic out multiple ports with the same
MAC address
• A misconfigured virtual switch that bridges two physical ports (Virtual switches
typically do not participate in STP.)
• End users using a dumb network switch or hub

Root Guard:

Root guard is an STP feature that is enabled on a port-by-port basis; it prevents a configured
port from becoming a root port. Root guard prevents a downstream switch (often misconfigured
or rogue) from becoming a root bridge in a topology. Root guard functions by placing a port in
an ErrDisabled state if a superior BPDU is received on a configured port. This prevents the
configured DP with root guard from becoming an RP.

Root guard is enabled with the interface command spanning-tree guard root. Root guard is
placed on designated ports toward other switches that should never become root bridges.

In the sample topology shown in Figure 3-1, root guard should be placed on SW2’s Gi1/0/4 port
toward SW4 and on SW3’s Gi1/0/5 port toward SW5. This prevents SW4 and SW5 from ever
becoming root bridges but still allows for SW2 to maintain connectivity to SW1 via SW3 if the
link connecting SW1 to SW2 fails.
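
A minimal sketch for the two ports mentioned above:

SW2(config)#interface g1/0/4
SW2(config-if)#spanning-tree guard root

SW3(config)#interface g1/0/5
SW3(config-if)#spanning-tree guard root
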
STP Portfast:

The generation of TCN for hosts does not make sense as a host generally has only one
connection to the network. Restricting TCN creation to only ports that connect with other
switches and network devices increases the L2 network’s stability and efficiency. The STP
portfast feature disables TCN generation for access ports.

STP enabled ports normally take 30 seconds to enter the forwarding state after being enabled.

This delay can be frustrating for users, who aren’t able to access the network for 30 seconds.

Ports connected to end hosts don’t pose a risk of causing Layer 2 loops, so the delay is
unnecessary.

Portfast allows a port to immediately transition to the forwarding state upon being
connected/enabled, bypassing the Listening and Learning states.
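
PortFast can be enabled per interface or globally for all nontrunking (access) ports; a minimal sketch (the interface is an assumption, and some newer releases use the keyword portfast edge instead):

SW1(config)#interface g1/0/10
SW1(config-if)#spanning-tree portfast

SW1(config)#spanning-tree portfast default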

BPDU Guard:

BPDU guard is a safety mechanism that shuts down ports configured with STP portfast upon
receipt of a BPDU. Assuming that all access ports have portfast enabled, this ensures that a loop
cannot accidentally be created if an unauthorized switch is added to a topology.

PortFast should only be enabled on ports connected to non-switch devices (end hosts, routers).

• These devices don’t send BPDUs.

If an end user carelessly connects a switch to a port, it could affect the STP topology.

• For example, by becoming the new Root Bridge.

BPDU Guard can protect against unauthorized switches being connected to ports intended for
end hosts.

• It can be configured separately from PortFast, but usually both features are used
together.

BPDU Guard enabled ports continue to send BPDUs.

If a BPDU Guard enabled port receives a BPDU, it enters the error-disabled state – effectively
shutting down the port.

There are two main ways to configure BPDU Guard:
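
As a sketch, it can be enabled per interface or globally for all PortFast-enabled ports (the interface is an assumption; some newer releases use the portfast edge keyword):

SW1(config)#interface g1/0/10
SW1(config-if)#spanning-tree bpduguard enable

SW1(config)#spanning-tree portfast bpduguard default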

STP vs RSTP:

The fundamentals of STP and RSTP are the same:

• The switches elect one Root Bridge.


• Each non-Root switch selects one Root Port.
• One Designated Port is selected for each link (collision domain).
• Remaining ports are Non-Designated (Alternate/Backup in RSTP).

Note: In STP (Spanning Tree Protocol), each collision domain (basically, a single Ethernet
segment or cable between two switches) will have only one switch port that is responsible
for forwarding traffic toward that segment. This port is called the Designated Port (DP).

The STP tuning process is the same:

• Bridge priority, port cost, and port priority can be modified to change the STP topology.

The same optional features (STP Toolkit) can be used:

• UplinkFast- and BackboneFast-like functions are incorporated into RSTP itself.


• PortFast, BPDU Guard, BPDU Filter, Root Guard, and Loop Guard can be configured.

However, there are various differences between them. For example:

• Port costs
• Port states
• Port roles
• State transitions (RSTP uses a negotiation/sync mechanism to speed up the move to
Forwarding)
• Topology changes

RSTP Port Costs: Short vs Long:

• Classic STP cost used 16-bit cost values (short).


• RSTP introduced the long costs (32-bit values) to accommodate links of greater speeds.
• Newer switches default to the long cost values, but older switches default to short (even
when running RSTP).
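
Where supported, the cost method can be set explicitly so that all switches in the topology agree; for example:

SW1(config)#spanning-tree pathcost method long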

RSTP Ports States:

RSTP combines the Blocking and Listening states into a single state called Discarding.

Discarding is both a transitional and stable state.

• Alternate/Backup ports are stable in the Discarding state.

• Root and Designated ports start in the Discarding state, but transition through it to
become stable in Forwarding.

The Learning state is only used if the RSTP sync mechanism fails and the port can’t transition
immediately from Discarding to Forwarding.

RSTP Port State Transitions:

When an RSTP port is first enabled, it is a Designated port in the Discarding state.

If the STP algorithm decides it will be an Alternate or Backup port, it remains in the Discarding
state.

If the algorithm decides it will be a Designated or Root port, it moves to the Forwarding state in
one of two ways.

If the RSTP sync mechanism succeeds, the port moves immediately from Discarding to
Forwarding.

If the RSTP sync mechanism fails, the port spends 15 seconds in Discarding and 15 seconds in
Learning (Forward Delay × 2) before moving to Forwarding.

• For example, if the port is connected to a switch that runs classic STP (not RSTP), the
RSTP sync mechanism won’t work.

RSTP Port roles:

The RSTP Root and Designated roles are the same as in STP:

• Root: A forwarding port that points toward the Root Bridge. The switch’s only active
path to reach the Root Bridge.

• Designated: A forwarding port that points away from the Root Bridge. All links
(collision domains) must have exactly one Designated Port.

The Non-Designated role was divided into two separate roles:

• Alternate: An alternative for the switch’s Root Port. It provides an alternate path toward
the Root Bridge and is ready to take over if the Root Port fails.

• Backup: A backup path to the same link as a Designated Port on the same switch. This
will only occur if two ports on the same switch are connected to the same link (i.e., via a
hub).

Rules to Determine Alternate vs Backup:

• If a port is not Root or Designated, it is an Alternate Port if the switch is not the
Designated Bridge for the link.

o → If the switch doesn’t have the link’s Designated Port.

• If a port is not Root or Designated, it is a Backup Port if the switch is the Designated
Bridge for the link.

o → If the switch has the link’s Designated Port.

RSTP Algorithm:

The process STP uses to create a loop-free topology is called the STP algorithm.

The RSTP algorithm is identical, except for the Alternate/Backup port roles.

There are three main steps:

1. Root bridge election (one per LAN)

• Lowest BID

2. Root port selection (one per switch)

• Lowest root cost


• Lowest neighbor BID
• Lowest neighbor port ID
• Lowest local port ID

3. Designated port selection (one per segment)

• Port on switch with lowest root cost


• Port on switch with lowest BID
• Lowest local port ID

The remaining ports are Alternate/Backup:

• It is an Alternate Port if the switch is not the Designated Bridge for the link.
• It is a Backup Port if the switch is the Designated Bridge for the link.

RSTP Link Types

In RSTP (802.1w), links between switches are categorized into three types to help determine
the best way to transition a port to the forwarding state quickly and safely.

1. Point-to-Point Link

• Definition: A link that connects two switches directly.

• Detection: Automatically assumed if the port is full-duplex.

• Behavior:

o Can transition rapidly to the forwarding state.


o RSTP assumes that full-duplex links are point-to-point and can use proposal-
agreement mechanism for faster convergence.

2. Shared Link

• Definition: A link that may connect multiple devices (e.g., using a hub).

• Detection: Automatically assumed if the port is half-duplex.

• Behavior:

o Transition to forwarding state is slower.

o Operates more like traditional STP (802.1D) with listening and learning states.

3. Edge Link

• Definition: A port that connects to an end device (like a computer or printer), not to
another switch.

• Manual Configuration: Set by using the command spanning-tree portfast.

• Behavior:

o Immediately transitions to forwarding state when the port comes up.

o Helps reduce delays for end devices during boot or reconnection.

o BPDUs received on this port will make it lose its edge status (to prevent loops).

RSTP Sync process:

• Classic STP convergence (30 seconds)


• Rapid STP convergence (subsecond)
• When the Sync Process fails

The RSTP Sync Process is the main benefit of RSTP over Classic STP. It allows ports to
immediately move to the Forwarding State instead of relying on Classic STP’s timer-based
process.

Classic STP Convergence:

When a switch port is enabled, it becomes an STP Designated Port until its appropriate role is
determined.

• It will start in the Listening State.

Each switch will declare itself to be the Root Bridge until it receives a Superior BPDU from
another switch.

As soon as SW2 receives SW1’s Superior BPDU, SW2 accepts SW1 as the Root Bridge.

RSTP Convergence:

When a switch port is enabled, it becomes an RSTP Designated Port until its appropriate role is
determined.

• It will start in the Discarding State.

Each switch will declare itself to be the Root Bridge until it receives a Superior BPDU from
another switch.

• The Proposal Bit will be set in the BPDUs it sends out.


o This indicates that the switch is proposing itself as the Designated Bridge for the
segment.

Upon receiving SW1’s superior Proposal BPDU, SW2 accepts SW1 as the Root Bridge.

When the RSTP Sync Process Fails:

The RSTP Sync Process only works on ports with the Point-to-Point Link Type.

• Ports with the Shared Link Type cannot sync.

The connected switches must both be using RSTP – a switch running Classic STP can’t sync.

• The port on the RSTP-enabled switch will operate like a Classic STP port.

In such cases, ports must spend 30 seconds transitioning through Discarding and Learning
before Forwarding.

Why MSTP?

Multiple Spanning Tree Protocol (MSTP, often simply called MST) allows you to map multiple
VLANs to a single Spanning Tree instance.

• This greatly reduces the number of BPDUs that need to be sent, and the amount of
processing that switches need to do in a LAN with many VLANs.

MSTP uses RSTP’s underlying mechanics for rapid convergence.

Rapid PVST+ allows for load balancing among switches in a LAN by assigning a different Root
Bridge for each VLAN.

For example, in a LAN with 100 VLANs, each Distribution Layer switch can be Root Bridge for
50 VLANs.

• These assignments should align with the HSRP/VRRP Active/Master.


• Access Layer switches will forward frames in half of the VLANs to one Distribution
switch, and the other half to the other Distribution switch.

Each Rapid PVST+ Instance runs independently and sends its own BPDU every 2 seconds.

Sending, receiving, and processing these BPDUs uses resources on the switches (as well as
bandwidth).

In most cases, only two unique spanning trees are needed:

• One with one Distribution Switch as the Root


• One with the other Distribution Switch as the Root

MSTP allows multiple VLANs to be grouped into a single MST Instance (MSTI), requiring
only two instances to achieve the same load balancing.
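
A configuration sketch of the two-instance design described above (the region name, revision number, and VLAN ranges are assumptions and must match on every switch in the region):

SW1(config)#spanning-tree mode mst
SW1(config)#spanning-tree mst configuration
SW1(config-mst)#name REGION1
SW1(config-mst)#revision 1
SW1(config-mst)#instance 1 vlan 1-50
SW1(config-mst)#instance 2 vlan 51-100
SW1(config-mst)#exit
SW1(config)#spanning-tree mst 1 priority 24576
SW1(config)#spanning-tree mst 2 priority 28672

The other Distribution switch would reverse the priorities so that each switch is root for one instance.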

Switching Interview Questions with Answers:

1. What is a Switch and how does it work?

Answer:
A switch is a Layer 2 network device that connects multiple devices in a LAN. It uses MAC
addresses to forward data only to the intended recipient instead of broadcasting to all ports like
a hub.

2. What is a MAC Address Table (CAM Table)?

Answer:
The MAC address table is stored in a switch and maps MAC addresses to specific ports. When a
frame arrives, the switch checks this table to decide where to forward the frame.

3. What is VLAN and why is it used?

Answer:
VLAN (Virtual LAN) is used to logically separate devices on the same physical switch. It
improves security, reduces broadcast traffic, and segments the network.

4. Difference between Access Port and Trunk Port?

Answer:

• Access Port: Carries traffic for one VLAN. Used to connect end devices like PCs.

• Trunk Port: Carries traffic for multiple VLANs using tagging (802.1Q). Used to connect
switches or routers.

5. What is STP (Spanning Tree Protocol)? Why is it needed?

Answer:
STP prevents loops in a Layer 2 network by blocking redundant paths. Loops can cause
broadcast storms and MAC table instability.

6. What are different port states in STP?

Answer:

• Blocking

• Listening
• Learning

• Forwarding

• Disabled

7. What is PortFast in STP?

Answer:
PortFast is a feature used on access ports. It immediately puts the port in forwarding state,
skipping STP states (useful for PCs/printers). Not recommended on trunk ports.

8. What is BPDU Guard?

Answer:
BPDU Guard disables a port if it receives a BPDU on a PortFast-enabled port. It prevents loops
caused by connecting switches to end-device ports.

9. What is VTP (VLAN Trunking Protocol)?

Answer:
VTP is a Cisco protocol that distributes VLAN information to all switches in a domain. It
reduces manual VLAN configuration errors.

10. What are VTP Modes?

Answer:

• Server: Create/delete VLANs and send advertisements.

• Client: Can’t create VLANs, only receives updates.

• Transparent: Forwards VTP messages but doesn’t apply them.

11. What is EtherChannel?

Answer:
EtherChannel is used to bundle multiple physical links into one logical link to increase
bandwidth and provide redundancy.

12. What is LACP and PAGP?

Answer:

• LACP (IEEE Standard): Open protocol for EtherChannel.

• PAgP: Cisco proprietary protocol for EtherChannel.

13. What is difference between Collision Domain and Broadcast Domain?

Answer:

• Collision Domain: Where data collisions can occur. Each switch port is a separate
collision domain.

• Broadcast Domain: A group of devices receiving broadcast frames. Each VLAN is a separate broadcast domain.

14. What is Inter-VLAN Routing?

Answer:
Inter-VLAN Routing allows communication between VLANs using a router or Layer 3 switch.

15. What is Interface VLAN (SVI)?

Answer:
An Interface VLAN or SVI (Switched Virtual Interface) is a virtual Layer 3 interface configured
on a switch to provide IP communication for VLANs.
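
For example, a minimal SVI sketch on a Layer 3 switch (the VLAN number and address are hypothetical):

Switch(config)#ip routing
Switch(config)#interface vlan 10
Switch(config-if)#ip address 192.168.10.1 255.255.255.0
Switch(config-if)#no shutdown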

16. How does a switch learn MAC addresses?

Answer:
When a frame arrives, the switch learns the source MAC address and the port it came from and
adds this to the MAC table.

17. What happens if the MAC address is not in the table?

Answer:
The switch floods the frame to all ports (except incoming port) to find the destination. Once a
reply comes, it updates the MAC table.

18. What is RSTP and how is it different from STP?

Answer:
RSTP (Rapid STP) is a faster version of STP. It converges the network in seconds instead of 30–
50 seconds. It introduces new port roles and link types.

19. What is the default VLAN on a switch?

Answer:
VLAN 1 is the default VLAN. All ports are in VLAN 1 by default until configured otherwise.

20. What is the difference between Layer 2 and Layer 3 switches?

Answer:

• Layer 2 Switch: Works using MAC addresses and does not perform routing.

• Layer 3 Switch: Can do routing between VLANs using IP addresses.

Thanks to ChatGPT & Jeremy!!
