IBM Storage Networking c-type FICON Implementation Guide
Aubrey Applewhaite
Mike Blair
Gary Fisher
Gavin O’Reilly
Lyle Ramsey
Fausto Vaninetti
IBM Redbooks
January 2022
SG24-8468-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to IBM Storage Networking c-type Family switches and directors that are used with IBM Z
platforms.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 1. IBM Storage Networking c-type family for mainframe IBM Fibre Connection
environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 IBM c-type hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Enterprise SAN Directors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 IBM Storage Networking SAN192C-6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 IBM Storage Networking SAN384C-6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.3 IBM Storage Networking SAN192C-6 and IBM Storage Networking SAN384C-6
supervisor modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.2.4 IBM Storage Networking SAN192C-6 and IBM Storage Networking SAN384C-6
crossbar fabric modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.2.5 IBM Storage Networking SAN192C-6 and IBM Storage Networking SAN384C-6
power supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.2.6 IBM 48-Port 32-Gbps Fibre Channel Switching Module . . . . . . . . . . . . . . . . . . . . 29
1.2.7 IBM 24/10-Port SAN Extension Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.3 IBM Storage Networking SAN192C-6 and IBM Storage Networking SAN384C-6 software
licensing for NX-OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.3.1 Licensing model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.3.2 Mainframe Package (#AJJB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.3.3 Enterprise Package (#AJJ9) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.3.4 DCNM SAN Advanced Edition Package (#AJJA) . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.4 Extension switch model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.4.1 IBM Storage Networking SAN50C-R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.5 IBM c-type software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.5.1 NX-OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.5.2 Data Center Network Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.9 Installing switch licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.9.1 Using the PAK letter to create license keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.9.2 Transferring license files to the switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.9.3 Installing license files from the DCNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.9.4 Installing bulk licenses by using the DCNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6.9.5 Installing license files by using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, C3®, DS8000®, Enterprise Storage Server®, FICON®, FlashCopy®, GDPS®, HyperSwap®, IBM®, IBM Cloud®, IBM FlashSystem®, IBM Z®, IBM z13®, Parallel Sysplex®, PowerPC®, Redbooks®, Redbooks (logo)®, S/390®, System z®, Tivoli®, z/OS®, z/VM®, z13®, z15™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Red Hat, are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and
other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
The next-generation IBM® c-type Directors and switches for IBM Storage Networking provide high-speed Fibre Channel (FC) and IBM Fibre Connection (IBM FICON®) connectivity from the IBM Z® platform to the storage area network (SAN) core. They enable enterprises to rapidly deploy high-density virtualized servers with the dual benefit of higher bandwidth and consolidation.
This IBM Redbooks publication helps administrators understand how to implement or migrate
to an IBM c-type SAN environment. It provides an overview of the key hardware and software
products, and it explains how to install, configure, monitor, tune, and troubleshoot your SAN
environment.
Authors
This book was produced by a team of specialists from around the world working at Cisco,
Raleigh, North Carolina, and remotely.
Jon Tate
IBM Redbooks® (Retired)
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
This document describes, explains, and shows how to deploy IBM c-type Directors in a
mainframe FICON environment by following best practice processes and procedures. This
book should be used by architects and administrators to support, maintain, and report on
storage area network (SAN) fabrics, and it acts as a guideline for a standard c-type mainframe deployment across companies to ensure service continuity.
The IBM c-type portfolio is based on network and storage intelligence. The switches allow for
the configuration of scalable solutions that can help address the need for high performance
and reliability in environments that range from small deployments to large, integrated
enterprise SANs. The IBM family introduces various SAN capabilities. The family of products
is designed for investment protection, flexibility, scalability, robustness, advanced diagnostics,
and integration between low-cost environments and enterprise SANs.
Companies can use IBM c-type FC technology-based directors and switches as resources to
deliver high-performance connectivity across data center fabrics worldwide. This technology
allows for scaling your SAN on demand and keeping the total cost of ownership (TCO) at a
minimum.
The terms director and switch are used interchangeably. When comparing the two, a director generally offers higher availability, larger port capacity, and more capabilities. IBM offers c-type switches and directors, but unless there is a need to differentiate, within this book we simply refer to them as switches.
Note: IBM c-type is 64-Gbps (Gen 7) FICON ready when using a combination of Fabric-3 and Supervisor-4 modules. For more information, see this white paper.
In addition, this book includes a hardware naming convention table (IBM and Cisco names)
and introduces SAN technology features that are provided by the NX-OS operating system
(OS).
Note: The term FICON represents the architecture that is defined by the International
Committee for Information Technology Standards (INCITS) and published as ANSI
standards. FICON is a fibre connected input/output (I/O) interface that is used to connect
server systems to SANs and storage frames.
This chapter introduces the IBM Storage Networking c-type range of hardware and software
products in the portfolio, and it provides an overview of the hardware components and
software features that are available for modern SAN data fabrics.
The IBM Storage Networking c-type family provides storage connectivity for mission-critical
applications, massive amounts of data, solid-state drives (SSDs), and cloud-based
environments with a single, proven OS and a centralized management platform that enables
evolutionary adoption and consistent SAN operations. Services-oriented SAN applications
enable centralized solutions to meet customer needs, including data migration and
acceleration of backup and replication performance between distant data centers.
For more information about IBM Storage Networking c-type, see IBM Storage Networking
c-type family.
The following list maps the Cisco product names to the corresponding IBM Storage Networking names and machine type-models:
Cisco MDS 9132T Switch: IBM Storage Networking SAN32C-6 (8977 Model T32)
Cisco MDS 9148T Switch: IBM Storage Networking SAN48C-6 (8977 Model T48)
Cisco MDS 9396T Switch: IBM Storage Networking SAN96C-6 (8977 Model T96)
Cisco MDS 9250i Multi-service Switch: IBM Storage Networking SAN50C-R (8977 Model R50)
Cisco MDS 9706 Director: IBM Storage Networking SAN192C-6 (8978 Model E04)
Cisco MDS 9710 Director: IBM Storage Networking SAN384C-6 (8978 Model E08)
Cisco MDS 9718 Director: IBM Storage Networking SAN768C-6 (8978 Model E16)
Note: The scope of this document is focused on the switches and directors that support the FICON protocol with IBM mainframes. The IBM Storage Networking SAN192C-6, IBM Storage Networking SAN384C-6, and IBM Storage Networking SAN50C-R switches are described.
IBM offers the following enterprise SAN c-type Directors with support for FICON:
IBM Storage Networking SAN192C-6
IBM Storage Networking SAN384C-6
IBM Storage Networking SAN192C-6 addresses the stringent requirements of large,
virtualized data center storage environments. It delivers uncompromising availability, security,
scalability, ease of management, and transparent integration of new technologies for flexible
data center SAN solutions. It shares the OS and management interface with other IBM data
center switches. By using the IBM Storage Networking SAN192C-6, you can transparently
deploy unified fabrics with FC, FICON, and Fibre Channel over IP (FCIP) connectivity for low
TCO.
For mission-critical enterprise storage networks that require secure, robust, and cost-effective
business-continuance services, the FCIP extension module delivers outstanding SAN
extension performance, reducing latency for disk and tape operations with FCIP acceleration
features, including FCIP write acceleration, FCIP tape write and read acceleration, and FICON tape acceleration.
Product highlights
IBM Storage Networking SAN192C-6 offers several important features, which are described
in this section.
By using IBM Storage Networking c-type family Directors' switching modules, the IBM
Storage Networking SAN192C-6 supports up to 192 ports in a 6-slot modular chassis, with up
to 768 ports in a single rack. FC ports can be configured and auto-negotiated at 2/4/8-Gbps,
4/8/16-Gbps, or 8/16/32-Gbps speeds, depending on the optics and switching module
selected.
IBM Storage Networking SAN192C-6 supports the same FC switching modules as the
IBM Storage Networking SAN384C-6 and IBM Storage Networking SAN768C-6 Directors for
a high degree of system commonality. Designed to grow with your storage environment,
IBM Storage Networking SAN192C-6 provides smooth migration, common sparing, and
outstanding investment protection.
The 24/10-Port SAN Extension Module is supported on IBM Storage Networking c-type family
Directors. With 24 line-rate 2-, 4-, 8-, 10-, and 16-Gbps FC ports, and eight 1- and 10-GbE
FCIP ports, this module enables large and scalable deployment of SAN extension solutions.
Enterprise-class availability
The IBM Storage Networking SAN192C-6 is designed for high availability (HA). In addition to
meeting the basic requirements of non-disruptive software upgrades and redundancy of all
critical hardware components, the IBM Storage Networking SAN192C-6 software architecture
offers outstanding availability. It provides redundancy on all major hardware components,
including the supervisors, fabric modules, and power supplies. The Supervisor Module
automatically restarts failed processes, which makes the IBM Storage Networking
SAN192C-6 exceptionally robust. In the rare event that a supervisor module is reset,
complete synchronization between the active and standby supervisor modules helps ensure
stateful failover with no disruption of traffic. Redundancy details are shown in Table 1-2.
Supervisor modules: 1+1 redundancy
HA is implemented at the fabric level by using robust and high-performance Inter-Switch Links
(ISLs). A port channel allows users to aggregate up to 16 physical links into one logical
bundle. The bundle can consist of any speed-matched ports in the chassis, which helps
ensure that the bundle can remain active if a port, ASIC, or module fails. ISLs in a port
channel can have different lengths.
Business transformation with enterprise cloud deployment
Enterprise clouds provide organizations with elastic computing and network capabilities,
which enable IT to scale up or down resources as needed in a quick and cost-efficient
manner. IBM Storage Networking SAN192C-6 provides industry-leading scalability and the
following features for enterprise cloud deployments:
Pay-as-you-grow flexibility to meet the scalability needs in the cloud
Robust security for multitenant cloud applications
Predictable performance to meet stringent service-level agreements (SLAs)
Resilient connectivity for an always-on cloud infrastructure
Advanced traffic management capabilities, such as quality of service (QoS), to allocate
network capabilities to cloud applications rapidly and cost-efficiently
Furthermore, Data Center Network Manager (DCNM) provides resource monitoring and
capacity planning on a per-virtual machine (VM) basis. You can federate up to 10 DCNM
servers to easily manage large clouds. Resource-use information can be delivered through
Storage Management Initiative Specification (SMI-S)-based developer APIs to deliver IT as a
service.
Table 1-3 summarizes the IBM Storage Networking SAN192C-6 product specifications.
Use a standard 19 inch, 4-post EIA cabinet or rack with mounting rails that conform to the
imperial universal hole spacing, per section 1 of the ANSI/EIA-310-D-1992 standard.
The depth of a 4-post rack or a cabinet must be 24 - 32 inches (61.0 - 81.3 cm) between the
front and rear mounting vertical rails.
Ensure that the airflow and cooling are adequate and there is sufficient clearance around the
air vents on the switch.
The rack must have sufficient vertical clearance for the chassis, 2 RU for the shelf brackets,
and any needed clearance for the installation process.
The front and rear doors of enclosed racks must have at least 60% of an open area
perforation pattern.
Additionally, you must consider the following site requirements for the rack:
Power receptacles must be within reach of the power cords that are used with the switch.
AC power supplies: Power cords for 3-kW AC power supplies are 8 - 12 feet (2.5 - 4.3 m).
DC power supplies (ask IBM representatives for this option): Power cords for 3.0-kW DC
power supplies are supplied and set by the customer.
HVAC/HVDC power supplies (ask IBM representatives for this option): Power cords for
3.5-kW HVAC/HVDC power supplies are 14 feet (4.26 m) long.
Where necessary, a seismic rating of Network Equipment Building Standards (NEBS)
Zone 3 or Zone 4, per GR-63-CORE.
To correctly install the switch in a cabinet in a hot-aisle/cold-aisle environment, you should fit
the cabinet with baffles to prevent exhaust air from recirculating into the chassis air intake.
Work with your cabinet vendors to determine which of their cabinets meet the following
requirements or see IBM Support for recommendations:
The height of the rack or cabinet must accommodate the 9 RU (15.75 inches (40.0 cm))
height of the switch and its bottom support bracket. The bottom support bracket is part of
the accessory kit for the switch.
Minimum gross load rating of 2000 lb (907.2 kg) (static load rating) if supporting four
switches.
For mission-critical enterprise storage networks that require secure, robust, and cost-effective
business-continuance services, the FCIP extension module is designed to deliver outstanding
SAN extension performance, reducing latency for disk and tape operations with FCIP
acceleration features, including FCIP write acceleration and FCIP tape write and read
acceleration.
The IBM Storage Networking SAN384C-6 is shown in Figure 1-2 on page 13.
Product highlights
IBM Storage Networking SAN384C-6 and its components offer the following main features:
Outstanding SAN performance: The combination of the 32 Gbps FC switching modules
and six Fabric-1 Crossbar switching modules enables up to 1.5 Tbps of front-panel FC
throughput between modules in each direction for each of the eight IBM Storage
Networking SAN384C-6 payload slots. With six Fabric-3 modules, 3 Tbps of front-panel
FC throughput is possible. This per-slot bandwidth is double the bandwidth that is needed
to support a 48-port 32 Gbps FC module at full line rate. Based on central arbitration and
a crossbar fabric, the IBM Storage Networking SAN384C-6 architecture provides 32 Gbps
line-rate, non-blocking, and predictable performance across all traffic conditions for every
chassis port.
HA: The IBM Storage Networking SAN384C-6 Director class switch enables redundancy
on all major components, including the fabric card. It provides grid redundancy on power
supply and 1+1 redundant supervisors. Users can include a fourth fabric-1 card to enable
N+1 fabric redundancy at 768 Gbps of slot bandwidth; the same approach applies with Fabric-3 modules at double the bandwidth. The IBM Storage Networking SAN384C-6 combines nondisruptive software
upgrades, stateful process restart and failover, and full redundancy of major components
for higher availability.
Business continuity: The IBM Storage Networking SAN384C-6 Director enables large and
scalable deployment of SAN extension solutions through the SAN Extension module.
Outstanding scalability: The IBM Storage Networking SAN384C-6 Director provides up to
24 Tbps of FC backplane bandwidth with Fabric-1 modules and double with Fabric-3
modules. A single chassis delivers 384 4/8/16 Gbps, or 8/16/32 Gbps full line-rate
autosensing FC ports. A single rack supports up to 1152 FC ports. The IBM Storage
Networking c-type family Directors are designed to meet the requirements of even the
largest data center storage environments.
Deployment of SAN extension solutions: Enable large and scalable multi-site SANs with
the 24/10-port SAN Extension Module.
Intelligent network services: VSAN technology, ACLs for hardware-based intelligent frame
processing, and fabric-wide QoS enable migration from SAN islands to enterprise-wide
storage networks and include the following features:
– Integrated hardware-based VSANs and IVR: Integration of VSANs into port-level
hardware allows any port in a system or fabric to be partitioned to any VSAN.
Integrated hardware-based IVR provides line-rate routing between any ports in a
system or fabric without the need for external routing appliances.
– Intelligent storage services: IBM Storage Networking SAN384C-6 operates with
intelligent service capabilities on other IBM Storage Networking c-type family platforms
to provide services, such as acceleration of storage applications for data replication
and backup, and data migration to hosts and targets that are attached to the
IBM Storage Networking SAN384C-6.
Comprehensive security: The IBM Storage Networking c-type family supports a comprehensive security framework. It consists of RADIUS and TACACS+, FC-SP, SFTP, SSH Protocol, and SNMPv3 implementing AES, together with VSANs, hardware-enforced zoning, ACLs, and per-VSAN RBAC.
Unified SAN management: The IBM Storage Networking c-type family includes built-in
storage network management with all features available through a CLI or DCNM, which is
a centralized management tool that simplifies managing unified fabrics. DCNM supports
the federation of up to 10 DCNM servers to manage up to 150,000 devices by using a
single management window.
Sophisticated diagnostic tests: The IBM Storage Networking SAN384C-6 provides
intelligent diagnostic tests, protocol decoding, network analysis tools, and integrated Call
Home capability for greater reliability, faster problem resolution, and reduced service
costs.
Multiprotocol intelligence: The multilayer architecture of the IBM Storage Networking
SAN384C-6 enables a consistent feature set over a protocol-independent switch fabric.
IBM Storage Networking SAN384C-6 transparently integrates FC and FICON.
You can deploy intelligent fabric services, VSANs for consolidating physical SAN islands while
maintaining logical boundaries, and IVR for sharing resources across VSANs. You can
consolidate your data into fewer, larger, and more manageable SANs, which reduce the
hardware footprint and associated capital and operating expenses.
Enterprise-class availability
IBM Storage Networking SAN384C-6 is designed for HA. In addition to meeting the basic
requirements of nondisruptive software upgrades and redundancy of all critical hardware
components, the IBM Storage Networking SAN384C-6 software architecture offers
outstanding availability. The supervisor modules automatically restart failed processes, which
makes IBM Storage Networking SAN384C-6 exceptionally robust. In the rare event that a
supervisor module is reset, complete synchronization between the active and standby
supervisor modules helps ensure stateful failover with no disruption of traffic.
Supervisor modules: 1+1 redundancy
HA is implemented at the fabric level by using robust and high-performance ISLs. Port
channel allows users to aggregate up to 16 physical links into one logical bundle. The bundle
can consist of any speed-matched ports in the chassis, which helps ensure that the bundle
can remain active if a port, ASIC, or module fails. ISLs in a port channel can have different
lengths.
This capability is valuable in campus and metropolitan area network (MAN) environments
because logical links can now be spread over multiple physical paths, which helps ensure
uninterrupted connectivity even if one of the physical paths is disrupted. IBM Storage
Networking SAN384C-6 provides outstanding HA, which helps ensure that solutions exceed
the 99.999% uptime requirements of today’s most demanding environments.
Use a standard 19-inch, 4-post EIA cabinet or rack with mounting rails that conform to
imperial universal hole spacing, per section 1 of the ANSI/EIA-310-D-1992 standard.
The depth of a 4-post rack or a cabinet must be 24 - 32 inches (61.0 - 81.3 cm) between the
front and rear mounting vertical rails.
Ensure that the airflow and cooling are adequate and there is sufficient clearance around the
air vents on the switch.
The rack must have sufficient vertical clearance for the chassis along with 2 RU for the shelf
brackets, and any necessary clearance for the installation process.
The front and rear doors of enclosed racks must have at least 60% of an open area
perforation pattern.
Additionally, you must consider the following site requirements for the rack:
The power receptacles must be within reach of the power cords that are used with the
switch.
AC power supplies: The power cords for 3-kW AC power supplies are 8 - 12 feet (2.5 -
4.3 m) long.
DC power supplies (ask IBM representatives for this option): The power cords for 3.0-kW
DC power supplies are supplied and set by the customer.
HVAC/HVDC power supplies (ask IBM representatives for this option): The power cords
for 3.5-kW HVAC/HVDC power supplies are 14 feet (4.26 m) long.
Where necessary, a seismic rating of Network Equipment Building Standards (NEBS)
Zone 3 or Zone 4, per GR-63-CORE.
The IBM Supervisor-1 Module has been the standard offering for many years, and it provides port speed support up to 32 Gbps. The new IBM Supervisor-4 Module is the default
option with a new IBM c-type Director Series switch, and it provides increased performance
and functions with a port speed of 32 Gbps and future support for 64-Gbps-enabled port
modules.
This powerful combination helps organizations build highly available (HA), scalable storage
networks with comprehensive security and unified management. The IBM Supervisor-4
Module is supported on the IBM Storage Networking SAN192C-6 and IBM Storage Networking SAN384C-6 Multilayer Directors.
Figure 1-3 shows the IBM Supervisor-4 Module.
Industry-leading scalability
The IBM Supervisor-4 Module is designed to meet the requirements of the largest data center
storage environments and combines industry-leading scalability and performance, intelligent
SAN services, nondisruptive software upgrades, stateful process restart and failover, and fully
redundant operation for a new standard in Director-class SAN switching.
Integrated performance
The combination of the IBM Supervisor-4 Module, IBM 48-Port 32-Gbps Fibre Channel
Switching Module, and IBM Fabric-3 crossbar switching modules enables up to 3 Tbps of FC
throughput between modules in each direction for each payload slot in the IBM c-type Series
Director switches. This per-slot bandwidth is two times the bandwidth that is needed to
support a 48-port 32-Gbps FC module at full line rate. The IBM c-type Series architecture,
which is based on central arbitration and crossbar fabric, provides 64 Gbps line-rate,
nonblocking, predictable performance across all traffic conditions for every port in the chassis.
High availability
The IBM Supervisor-4 Module and IBM c-type Series Multilayer Directors are designed for
HA. In addition to meeting the basic requirement of nondisruptive software upgrades, the IBM
c-type Series software architecture offers availability. The IBM Supervisor-4 Module can
automatically restart failed processes, making it exceptionally robust. In the rare event that a
supervisor module is reset, complete synchronization between the active and standby
supervisor modules helps ensure stateful failover with no disruption of traffic.
Multiprotocol intelligence
The multilayer architecture of the IBM c-type Series enables a consistent feature set over a
protocol-independent switch fabric. The IBM c-type Series transparently integrates FC, FCIP,
and FICON.
2/4/8-Gbps, 4/8/16-Gbps, 8/16/32-Gbps, and 10-Gbps FC and 10-GbE: The IBM c-type
Series supports both 2/4/8/16/32-Gbps and 10-Gbps ports on the IBM c-type 48-Port
32-Gbps Fibre Channel Switching Module. The IBM c-type Series also supports 10 GbE
clocked optics carrying 10-Gbps FC traffic.
FICON: The IBM c-type Director Series supports deployment in IBM Z FICON and Linux
environments.
IBM Supervisor-1 Module
The IBM Supervisor-1 Module is designed specifically for the IBM Storage Networking
SAN192C-6 and IBM Storage Networking SAN384C-6 chassis. The IBM Supervisor-1
Module delivers the latest advanced switching technology with NX-OS software to power a
new generation of scalable and intelligent multilayer switching solutions for SANs. This
supervisor module provides control and management functions for the switch and enables
high-performance switching.
Designed to integrate multi-protocol switching and routing, intelligent SAN services, and
storage applications onto highly scalable SAN switching platforms, the IBM Supervisor-1
Module enables intelligent, resilient, scalable, and secure high-performance multilayer SAN
switching solutions when combined with the IBM Storage Networking c-type family switching
modules.
Two IBM Supervisor-1 Modules are required per system to use the reliability and availability features, such as active-active redundancy, online nondisruptive software upgrades, hot-swappable modules, stateful process restart, and stateful nondisruptive supervisor failover.
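As a quick, hedged illustration, standard NX-OS show commands can be used to confirm that both supervisors are present and synchronized before relying on these HA features:

! List installed modules; one supervisor shows active and the other ha-standby
switch# show module
! Display the redundancy and state-synchronization status of the supervisors
switch# show system redundancy status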
Figure 1-5 shows a crossbar fabric module for the IBM Storage Networking SAN192C-6.
Figure 1-5 Crossbar fabric module for the IBM Storage Networking SAN192C-6
The IBM Storage Networking SAN384C-6 supports up to six crossbar fabric modules. There
is a crossbar fabric module that is designed specifically for the IBM Storage Networking
SAN384C-6. The crossbar fabric modules are installed vertically at the back of the chassis
behind the fan modules. A minimum of three crossbar fabric modules are required to operate
the switch. A fourth crossbar fabric module is required for N+1 protection.
Figure 1-6 shows a crossbar fabric module for the IBM Storage Networking SAN384C-6.
Figure 1-6 Crossbar fabric module for the IBM Storage Networking SAN384C-6
Each crossbar fabric module connects to four or eight switching modules and two supervisor
modules. In addition, each crossbar fabric module supports four 55 Gbps fabric ports
(F_Ports) that are connected to each switching module and one 55 Gbps F_Port that is
connected to each supervisor module.
Because the crossbar fabric modules are behind the fan modules in the chassis, the LEDs on
the crossbar fabric module are not easily visible from the back of the chassis. So, crossbar
fabric status LEDs are provided on the fan modules too. Because each fan module covers two
fabric modules, the status LEDs for two crossbar fabric modules are present on each fan
module. If the fan module is removed, the status and locator LEDs on crossbar fabric modules
will be visible.
When a fabric module must be located, activate the locator LED of the corresponding fan module, followed by the locator LED of the fabric module, by using the locator-led fan <fan module number> and locator-led xbar <xbar slot number> commands. For example, to find crossbar fabric module 4, activate the locator LED of fan module 2, followed by the locator LED of fabric module 4.
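A minimal command sequence for this scenario (fan module 2 covering crossbar fabric module 4) might look like the following; the no form turns each locator LED off again:

! Light the locator LED on the fan module that covers the target fabric module
switch# locator-led fan 2
! Light the locator LED on the crossbar fabric module itself
switch# locator-led xbar 4
! Turn both locator LEDs off when the module has been found
switch# no locator-led fan 2
switch# no locator-led xbar 4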
Each Fabric-1 module provides 256 Gbps of FC bandwidth per line-card slot (Fabric-3 offers double that bandwidth). With six Fabric-1 modules installed, the maximum is 1.536 Tbps of FC bandwidth per line-card slot.
Number of installed Fabric-1 modules and FC bandwidth per line-card slot:
1: 256 Gbps
2: 512 Gbps
3: 768 Gbps
4: 1024 Gbps
5: 1280 Gbps
6: 1536 Gbps
Note: Fabric modules may be installed in any slot, but a best practice is one behind each
fan tray.
These AC power supplies hold 80Plus Platinum certification for maximum power efficiency.
The 3000 W AC power supply unit (PSU) may be connected to either 220 V or 110 V AC
power sources. When connected to 220 V, each PSU has a maximum output capacity of
3000 W. When connected to 110 V, each PSU has a maximum output capacity of 1450 W.
Each power supply module monitors its output voltage and provides the status to the
supervisor. In addition, the power supply modules provide information about local fans, power,
shutdown control, and E2PROM to the supervisor.
A c-type SAN Director has a flexible power system providing different power modes. Any
operational power supply provides power to the system power bus, which allows the power
load of the system to be shared equally across all operational power supplies. Power supply
output can be allocated to one of two pools. The available pool is available to start system
components. The reserve pool is kept in reserve and not counted toward the available power.
The system can be configured in one of several modes that vary the size of the available and
reserve power pools according to user requirements:
Combined mode: This mode allocates the output power of all power supplies to available
power for switch operations. This mode does not reserve any output power in case of
power outages or power supply failures.
Power supply redundancy mode (N+1): In this mode, one power supply's output is
allocated to the reserve power pool, which provides the system with enough reserve
power if a single power supply fails. The remaining power supplies are allocated to the
available power pool. The reserve power supply must be at least as powerful as the most
powerful power supply in the available pool to potentially replace the full power output of
the failed power supply in the worst case. Because it is impossible to predict which power
supply might fail, provision the system with power supplies of equal rating so that the
output of any power supply that fails can be replaced by the remaining power supplies.
For example, a system with four 3 kW power supplies in N+1 redundancy mode has a total
of 12 kW. 9 kW are allocated to the available power pool, and 3 kW are reserved. If any of
the power supplies fail, enough power is reserved that the remaining power supplies can
still meet the 9 kW commitment.
Input grid redundancy mode (grid redundancy): In this mode, half of the power supplies' output is allocated to the reserve power pool and half to the available power pool, which
provides the system with enough reserve power in the case of 50% of the power supplies
failing, as when a power grid fails. The system logically allocates the left two columns of
PSU bays to Grid A and sums the output power of operational PSUs. It does the same for
the right two columns (Grid B) and uses the minimum of the two as the available power
pool. To use maximum power, the sum of the power supply outputs of Grid A and Grid B
PSU bays must be equal.
For example, a system with four 3 kW PSUs in Grid A bays and three 3 kW PSUs in Grid B
bays and in grid redundancy mode has 12 kW available from Grid A and 9 kW from Grid B.
The minimum of the two grids is 9 kW, so 9 kW is allocated to the available power pool and
9 kW are reserved. If either grid fails, enough power is reserved that the remaining power
supplies can still meet the 9 kW commitment. The output of the fourth PSU in Grid A is not
considered in the calculations even though it provides power.
Full redundancy mode: This mode supports both grid redundancy and N+1 redundancy.
50% of the power supply output is allocated to the reserve pool, and the other 50% of the
power supply outputs are allocated to the available power pool. The reserved power may
be used to back up either single power supply failures or a grid failure.
For example, a system with six 3 kW power supplies in grid redundancy mode has a total
of 18 kW. 9 kW are allocated to the available power pool and 9 kW are allocated to the
reserve pool. If a grid failure occurs (half of the power supplies lose power), the full reserve
power pool is available to meet the 9 kW commitment. Otherwise, as single power
supplies fail, power is allocated to the available pool from the remaining reserve power
pool until the reserve power pool is exhausted.
Note: After a single power supply has failed in this mode, grid redundancy is no longer
available.
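As a hedged sketch of how these modes are selected (the keyword names follow common MDS/NX-OS documentation and should be verified against the installed release), the power mode is set in configuration mode and verified with a show command:

switch# configure terminal
! Reserve enough power to survive the loss of one input grid (grid redundancy)
switch(config)# power redundancy-mode insrc-redundant
! Other supported keywords include combined and ps-redundant (N+1)
switch(config)# end
! Verify the configured mode and the available and reserve power pools
switch# show environment power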
Figure 1-9 shows how to connect power supplies in an IBM Storage Networking SAN192C-6
for grid redundancy.
Figure 1-10 shows the 48-Port 32-Gbps Fibre Channel Switching Module.
With 384 line-rate 32 Gbps FC ports per director, the 48-Port 32-Gbps Fibre Channel
Switching Module meets the high-performance needs for flash memory and NVMe-FC
workloads. The switching module is hot swappable and compatible with 4 Gbps, 8 Gbps,
16 Gbps, and 32 Gbps FC interfaces. This module also supports hot swappable Enhanced
SFP+ transceivers.
Individual ports can be configured with 32 Gbps, 16 Gbps, and 8 Gbps SFP+ transceivers.
Each port supports 500 buffer credits for exceptional extensibility without the need for extra
licenses. With the Enterprise Package license, up to 8191 buffer credits can be allocated to
an individual port, enabling full-link bandwidth over long distances with no degradation in link
utilization.
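As an illustrative sketch (port fc1/1 is hypothetical, and the Enterprise Package license is assumed to be installed), extended buffer credits are assigned per interface; the exact maximum depends on the module and port configuration:

switch# configure terminal
switch(config)# interface fc1/1
! Allocate extended buffer-to-buffer credits for a long-distance ISL
switch(config-if)# switchport fcrxbbcredit extended 8191
switch(config-if)# end
! Confirm the buffer-credit allocation on the port
switch# show interface fc1/1 bbcredit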
Figure 1-11 shows the 48-Port 32-Gbps Fibre Channel Switching Module port group view.
Note: SAN Analytics is not supported by the FICON protocol. Only FC SCSI and NVMe
analytics are available on the switch port module.
SFP+ transceivers provide the uplink interfaces, laser transmit (Tx) and laser receive (Rx),
and support 850 - 1610 nm nominal wavelengths, depending upon the transceiver.
Note: Use only Cisco transceivers in the IBM c-type SAN switches and directors. Each transceiver is encoded with model information that enables the switch to verify that the transceiver meets the requirements for the switch.
SFP+ transceivers can be ordered separately or with the IBM c-type SAN switches and directors.
For more information about a specific SFP+ transceiver, see SFP+ Transceiver
Specifications.
1.2.7 IBM 24/10-Port SAN Extension Module
The capabilities of IBM Storage Networking SAN192C-6 and IBM Storage Networking
SAN384C-6 can be extended with the 24/10-Port SAN Extension Module supported on
IBM Storage Networking c-type Family Multilayer Directors. With 24 line-rate 2-, 4-, 8-, 10-,
and 16-Gbps FC ports and eight 1- and 10-GbE FCIP ports, this module enables large and
scalable deployment of SAN extension solutions. The SAN extension module has two
independent service engines that can each be individually and incrementally enabled to scale
as business requirements expand.
The SAN extension module supports the full range of services that are available on other
IBM Storage Networking c-type Family Fibre Channel Switching Modules, including VSAN,
security, and traffic management services. The FCIP module uses IBM expertise and
knowledge of IP networks to deliver outstanding SAN extension performance, reducing
latency for disk and tape operations with FCIP acceleration features, including FCIP write
acceleration and FCIP tape write and read acceleration. The switching module has two
service engines on its system board.
Hardware-based encryption helps secure sensitive traffic with IP Security (IPsec), and
hardware-based compression dramatically enhances performance for both high- and
low-speed links, enabling immediate cost savings in expensive WAN infrastructure. Multiple
FCIP interfaces within a single engine or across service engines can be grouped into a port
channel of up to 16 links for HA and increased aggregate throughput.
The SAN extension module supports AES 256 IPsec encryption for secure transmission of
sensitive data over extended distances. Hardware enablement of IPsec helps ensure line-rate
throughput. Together, hardware-based compression and hardware-based encryption provide
a high-performance, highly secure SAN extension capability.
Additionally, the SAN extension module supports FCIP write acceleration, a feature that can
significantly improve application performance when storage traffic is extended across long
distances. When FCIP write acceleration is enabled, WAN throughput is optimized by
reducing the latency of command acknowledgments.
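To make these moving parts concrete, the following hedged sketch (the IP addresses, profile number, and interface numbers are hypothetical, and the underlying IP storage port is assumed to be configured already) shows the general shape of an FCIP tunnel with write acceleration on the NX-OS CLI; production deployments typically add compression, IPsec, and port channel membership as required:

switch# configure terminal
switch(config)# feature fcip
! Bind an FCIP profile to the local IP storage interface address
switch(config)# fcip profile 10
switch(config-profile)# ip address 192.0.2.10
switch(config-profile)# exit
! Create the FCIP interface (a virtual ISL) that points to the remote data center
switch(config)# interface fcip 10
switch(config-if)# use-profile 10
switch(config-if)# peer-info ipaddr 192.0.2.20
! Enable FCIP write acceleration on the tunnel
switch(config-if)# write-accelerator
switch(config-if)# no shutdown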
VSANs
Suitable for efficient, secure SAN consolidation, ANSI T11-standard VSANs enable more
efficient storage network utilization by creating hardware-based isolated environments within a single physical SAN fabric or switch. Each VSAN can be zoned as a typical SAN and
maintained with its own fabric services for greater scalability and resilience. VSANs allow the
cost of SAN infrastructure to be shared among more users, while helping ensure
segmentation of traffic and retaining independent control of configuration on a
VSAN-by-VSAN basis.
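A minimal sketch of this mechanism on the NX-OS CLI (the VSAN numbers, names, and port are hypothetical):

switch# configure terminal
switch(config)# vsan database
! Create isolated fabrics for open systems and FICON traffic
switch(config-vsan-db)# vsan 100 name OPEN_SYSTEMS
switch(config-vsan-db)# vsan 200 name FICON_PROD
! Move a port into a VSAN; its traffic and fabric services stay within that VSAN
switch(config-vsan-db)# vsan 100 interface fc1/5
switch(config-vsan-db)# end
switch# show vsan membership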
IBM provides four optional feature-based licenses for the Director-class switches. The
features provide advanced functions, advanced management, analytics, and mainframe
support. This book focuses on mainframe FICON supported features.
Note: The feature SAN Insights provides SAN Analytics, which is not supported by FICON.
Advanced features that are enabled with the Enterprise Package include:
Advanced Traffic-Engineering Features:
– IVR
– QoS
– Extended B2B credits
Enhanced Network Security Features:
– FC-SP
– Port security
– VSAN-based access control
– IPsec
– Digital certificates
– Fabric binding for open systems FC
– Cisco TrustSec
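As a brief, hedged example, the installed license packages and the features that consume them can be listed from the CLI, and a license file that was copied to bootflash can be installed (the file name here is hypothetical):

! Show which license packages are installed and which features are using them
switch# show license usage
! Install a license file that was previously copied to bootflash
switch# install license bootflash:enterprise_pkg.lic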
This feature enables the advanced functions that extend the DCNM features when the basic
level of DCNM features does not provide the required level of management capability.
Table 1-10 provides a comparison list of “Standard” and “Licensed” functions that should be
considered when determining whether the extra purchase cost is required. Typically, having
the Advanced feature is advised for ease of management.
For example, DM is available with both the standard and licensed editions, whereas reporting is available only with the licensed edition.
Restrictions on opening an unlicensed fabric
Here are a few restrictions regarding the opening of an unlicensed fabric:
Opening a fabric from a remote SAN client requires Cisco DCNM Advanced License.
If you are using a remote Cisco DCNM SAN client, you cannot open any unlicensed fabric.
The fabric must be licensed or the Cisco DCNM Essential license does not work.
If you are trying to open an unlicensed fabric from a SAN client running on the Cisco
DCNM server, you can open only one unlicensed fabric at a time.
If one instance is opened from a local SAN client, you cannot open another instance of an
unlicensed fabric.
The IBM Storage Networking SAN50C-R switch offers up to forty 16-Gbps FC ports and two
1/10-GbE IP storage services ports in a fixed two-rack-unit (2RU) form factor. The eight
10-GbE Fibre Channel over Ethernet (FCoE) ports are not used in FICON environments. The
IBM Storage Networking SAN50C-R switch connects to existing native FC networks,
protecting investments in storage networks.
The IBM SAN Extension over IP application package license is enabled as standard on the
two fixed, 1/10-GbE IP storage services ports, enabling features such as FCIP and
compression on the switch without the need for extra licenses. Also, by using the eight
10-GbE FCoE ports, the IBM Storage Networking SAN50C-R platform attaches to directly
connected FCoE and FC storage devices, and supports multi-tiered unified network fabric
connectivity directly over FCoE.
Product highlights
The IBM Storage Networking SAN50C-R switch provides unique multiservice and
multi-protocol functions in a compact 2RU form factor:
SAN consolidation with integrated multi-protocol support: The IBM Storage Networking
SAN50C-R switch is available in a base configuration of 20 ports of 16-Gbps FC for
high-performance SAN connectivity, 2 ports of 1/10-GbE for FCIP and iSCSI storage
services, and eight ports of 10-GbE for FCoE connectivity.
High-density FC switch with 16-Gbps connectivity: The IBM Storage Networking
SAN50C-R switch scales up to 40 ports of 16-Gbps FC in a fixed configuration switch. The
base configuration comes with 20 ports of 16-Gbps FC enabled for high-performance SAN
connectivity. It can be upgraded onsite to enable an additional 20 ports of 16-Gbps FC by
adding the Port-On-Demand Activation license. Additionally, the IBM Storage Networking
SAN50C-R cost-effectively scales up for FICON mainframe environments.
Intelligent application services engine: The IBM Storage Networking SAN50C-R switch
includes as standard a single application services engine that enables the included
IBM SAN Extension over IP software solution package to run on the two fixed, 1/10-GbE
storage services ports. The IBM SAN Extension over IP package provides an integrated,
cost-effective, and reliable business-continuance solution that uses IP infrastructure by
offering FCIP for remote SAN extension, along with various advanced features to optimize
the performance and manageability of FCIP links.
Hardware-based virtual fabric isolation with VSANs and FC routing with IVR:
VSANs and IVR enable deployment of large-scale multi-site and heterogeneous SAN
topologies. Integration into port-level hardware allows any port in a system or in a fabric to
be partitioned into any VSAN. Included in the optional IBM Storage Networking c-type
Enterprise advanced software package, IVR provides line-rate routing between any of the
ports in a system or in a fabric without the need for external routing appliances.
Remote SAN extension with high-performance FCIP:
– Simplifies data protection and business-continuance strategies by enabling backup,
remote replication, and other DR services over WAN distances by using
open-standards FCIP tunneling.
– Optimizes the usage of WAN resources for backup and replication by enabling
hardware-based compression, hardware-based encryption, FCIP write acceleration,
and FCIP tape read and write acceleration. Virtual ISL connections are provided on the
two 1/10-GbE ports through tunneling.
– Preserves IBM Storage Networking c-type Family enhanced capabilities, including
VSANs, IVR, advanced traffic management, and network security across remote
connections.
Cost-effective iSCSI connectivity to Ethernet-attached servers:
– Extends the benefits of FC SAN-based storage to Ethernet-attached servers at a lower
cost than is possible by using FC interconnect alone.
– Increases storage usage and availability through the consolidation of IP and FC block
storage.
– Through transparent operation, it preserves the capability of existing storage
management applications.
Advanced FICON services: The IBM Storage Networking SAN50C-R supports FICON
environments, including cascaded FICON fabrics, VSAN-enabled intermix of mainframe
and open systems environments, and NPIV for mainframe Linux partitions. CUP support
enables in-band management of IBM Storage Networking c-type switches from the
mainframe management console.
FICON tape acceleration reduces latency effects for FICON channel extension over FCIP
for FICON tape read and write operations to mainframe physical or virtual tape. This
feature is sometimes referred to as tape pipelining.
Platform for intelligent fabric applications: The IBM Storage Networking SAN50C-R switch
provides an open platform that delivers the intelligence and advanced features that are
required to make multilayer intelligent SANs a reality, including hardware-enabled
innovations to host or accelerate applications for data migration, storage backup, and data
replication. Hosting or accelerating these applications in the network can dramatically
improve scalability, availability, security, and manageability of the storage environment,
resulting in increased utility and lower TCO.
In-service software upgrades (ISSUs) for FC interfaces: The IBM Storage Networking
SAN50C-R switch promotes high serviceability by enabling NX-OS software to be
upgraded while the FC ports are carrying traffic.
Intelligent network services: The IBM Storage Networking SAN50C-R switch uses VSAN
technology for hardware-enforced, isolated environments within a single physical fabric,
ACLs for hardware-based intelligent frame processing, and advanced traffic management
features such as fabric-wide QoS to facilitate migration from SAN islands to
enterprise-wide storage networks.
High-performance ISLs: The IBM Storage Networking SAN50C-R switch supports up to
16 FC ISLs in a single port channel. Links can span any port on any module in a chassis
for added scalability and resilience. Up to 253 B2B credits can be assigned to a single FC
port to extend storage networks over long distances.
Comprehensive network security framework: The IBM Storage Networking SAN50C-R
switch supports RADIUS and TACACS+, FC-SP, SFTP, SSH Protocol, SNMPv3
implementing AES, VSANs, hardware-enforced zoning, ACLs, and per-VSAN RBAC.
Additionally, the 10-GbE ports offer IPsec authentication, data integrity, and
hardware-assisted data encryption for FCIP and iSCSI.
IPv6 capable: The IBM Storage Networking SAN50C-R switch supports IPv6 as mandated
by the US DoD, Japan, and China. IPv6 support is provided for FCIP, iSCSI, and
management traffic routed inband and out of band.
Sophisticated diagnostic tests: The IBM Storage Networking SAN50C-R switch provides
intelligent diagnostic tests, protocol decoding, and network analysis tools, and integrated
IBM Call Home capability for added reliability, faster problem resolution, and reduced
service costs.
VSANs
VSANs are ideal for efficient, secure SAN consolidation, enabling more efficient storage
network usage by creating hardware-based isolated environments within a single physical SAN fabric or switch. Each VSAN can be zoned as a typical SAN and maintains its own fabric
services for added scalability and resilience. VSANs allow the cost of a SAN infrastructure to
be shared among more users while helping ensure complete segmentation of traffic and
retaining independent control of configuration on a VSAN-by-VSAN basis.
IVR
In another step toward deploying efficient, cost-effective, and consolidated storage networks,
the IBM Storage Networking SAN50C-R switch supports IVR, the industry’s first routing
function for FC. IVR allows selective transfer of data between specific initiators and targets on
different VSANs while maintaining isolation of control traffic within each VSAN. With IVR, data
can transit VSAN boundaries while maintaining control plane isolation, maintaining fabric
stability and availability.
IVR is one of the feature enhancements that are provided with the enterprise advanced
software package. It eliminates the need for external routing appliances, greatly increasing
routing scalability while delivering line-rate routing performance, simplifying management,
and eliminating the challenges that are associated with maintaining separate systems.
Deploying IVR means a lower total cost of SAN ownership.
I/O Acceleration services
The IBM Storage Networking SAN50C-R switch supports I/O Acceleration (IOA) services, an
advanced software package that can improve application performance when storage traffic is
extended across long distances. When FC and FCIP write acceleration are enabled, WAN
throughput is optimized through reduced latency for command acknowledgments. Similarly,
the IBM Storage Networking SAN50C-R switch supports FC and FCIP tape write
acceleration, which allows operation at nearly full throughput over WAN links for remote tape
backup and restore operations. IOA can be deployed with disk data replication solutions to
extend the distance between data centers or reduce the effects of latency. IOA can also be
used to enable remote tape backup and restore operations without significant throughput
degradation. Here are the main features of IOA:
Extension of the acceleration service as a fabric service to any port in the fabric,
regardless of where it is attached
Fibre Channel Write Acceleration (FC-WA) and Fibre Channel tape acceleration (FC-TA)
FCIP write acceleration (FCIP-WA) and FCIP tape acceleration (FCIP-TA)
FC and FCIP compression
HA by using port channels with acceleration over FC and FCIP
Unified solution for disk and tape IOA over MANs and WANs
Speed-independent acceleration that accelerates 2/4/8/16-Gbps FC links and
consolidates traffic over 8/16 Gigabit ISLs
Mainframe support
IBM Storage Networking SAN50C-R is mainframe-ready and supports IBM Z FICON and
Linux environments that are provided with the mainframe advanced software package.
Qualified by IBM for attachment to all FICON-enabled devices in an IBM Z operating
environment, IBM Storage Networking SAN50C-R switches support transport of the FICON
protocol in both cascaded and non-cascaded fabrics, and an intermix of FICON and
open-system FCP traffic on the same switch. VSANs simplify intermixing of SAN resources
among IBM z/OS, mainframe Linux, and open-system environments, enabling increased SAN
utilization and simplified SAN management.
VSAN-based intermix mode eliminates the uncertainty and instability that is often associated
with zoning-based intermix techniques. VSANs also eliminate the possibility that a
misconfiguration or component failure in one VSAN will affect operations in other VSANs.
VSAN-based management access controls simplify partitioning of SAN management
responsibilities between mainframe and open systems environments, enhancing security.
FICON VSANs can be managed by using the standard DCNM, the CLI, or CUP-enabled
management tools, including Resource Measurement Facility (RMF) and Dynamic Channel
Path Management (DCM).
Digital certificates are issued by a trusted third party and used as electronic passports to
prove the identity of certificate owners.
Fabric binding for open systems helps ensure that the ISLs are enabled only between
switches that are authorized in the fabric binding configuration.
Ease of management
To meet the needs of all users, the IBM Storage Networking SAN50C-R switch provides three
principal modes of management: CLI, DCNM, and integration with third-party storage
management tools.
The IBM Storage Networking SAN50C-R switch presents a consistent, logical CLI that adheres
to the syntax of the widely known Cisco Internetwork Operating System (IOS) Software CLI, which
is easy to learn and delivers broad management capabilities. The CLI is an efficient and direct interface that
provides optimal capabilities to administrators in enterprise environments.
DCNM for SAN is an application that simplifies management across multiple switches and
converged fabrics. It provides robust features to meet the routing, switching, and storage
administration needs of present and future virtualized data centers, streamlines provisioning
of the unified fabric, and proactively monitors SAN components. DCNM SAN can be used
independently or with third-party management applications.
The solution is designed to scale to large enterprise deployments through a scale-out server
architecture with automated failover capability. These capabilities provide a resilient
management system that centralizes infrastructure and path monitoring across
geographically dispersed data centers. DCNM SAN base management functions are
available at no additional charge, but advanced features are unlocked by the DCNM SAN
Advanced license. The DCNM SAN application can be installed on Linux and Microsoft
Windows OSs and supports both PostgreSQL and Oracle databases.
Table 1-12 Specifications for the IBM Storage Networking SAN50C-R switch
IBM Storage Networking SAN50C-R physical specifications
Table 1-13 details the specific requirements for planning the installation of the devices in the
data center rack.
Table 1-13 Physical and environmental specifications for IBM Storage Networking SAN50C-R
Fan modules
The IBM Storage Networking SAN50C-R switch has two fan trays that are installed vertically
at the back of the chassis. Each fan module can be removed while the other fan module
continues to move air through the chassis.
In the default configuration and with all three PSUs installed, the IBM Storage Networking
SAN50C-R switch has N+1 PSU redundancy. The only power redundancy mode that is
available is redundant; combined mode is not supported on this platform.
Typically, when FCoE ports are not used, grid redundancy with the IBM Storage Networking
SAN50C-R switch is possible with only two PSUs.
Table 1-14 Power modes when the FCoE ports are in the ADMIN DOWN state
Three online PSUs: Two PSUs are connected to Grid A and one PSU is connected to Grid B; they work in N+1 redundant mode.
One online PSU: The PSU is connected to any one grid; it works in non-redundant mode.
The power supplies in an IBM Storage Networking SAN50C-R switch work in the power
modes that are shown in Table 1-15 when the FCoE ports are in the ADMIN UP state.
Table 1-15 Power modes when the FCoE ports are in the ADMIN UP state
Three online PSUs: These PSUs work in N+1 redundant mode.
Supported transceivers
The IBM Storage Networking SAN50C-R switch supports the following transceivers:
8 Gbps SW/LW LC Enhanced SFP+
10 GbE SR/LR/ER LC SFP+
16 Gbps SW/LW/ELW LC SFP+
4/8/16-Gbps FC LW SFP+ DWDM SM DDM 13 dB 40 km
4/8/16-Gbps FC LW SFP+ CWDM SM DDM 13 dB 40 km
4/8/16-Gbps FC/FICON LW SFP+ DWDM SM DDM 1550 nm 13 dB 40 km
2/4/8-Gbps FC LW SFP+ DWDM SM DDM 80 km
2/4/8-Gbps FC LW SFP+ CWDM SM DDM 23 dB 70 km
2/4/8-Gbps FC LW SFP+ SM DDM 80 km
Rack requirements
The rack-mount kit enables you to install the switch into racks of varying depths. You can use
the rack-mount kit parts to position the switch with easy access to either the port connections
end of the chassis or the end of the chassis with the fan and power supply modules.
1.5.1 NX-OS
The NX-OS software is a data center OS that runs on the IBM c-type range of SAN switches.
The software supports the modular switch hardware to provide the resiliency and
serviceability at the core of its design. NX-OS has its roots in the proven Cisco MDS 9000
SAN-OS Software. NX-OS provides the level of continuous availability that enterprise SAN
fabric solutions in the modern data center require for providing mission-critical solutions.
NX-OS offers reliability, innovation, and operational consistency across data center platforms.
NX-OS runs on the Cisco Nexus Family of hardware-based network switches and Cisco MDS
9000 family storage switches and Cisco UCS 6000 Series Fabric Interconnects.
Its value proposition becomes more apparent when fabric-level actions are required, as
opposed to single device tasks. DCNM increases overall data center infrastructure uptime
and reliability, thus improving business continuity. Statistically, the DCNM license is almost
always associated with IBM c-type Director sales, but it has less traction with stand-alone fabric
switches due to the simplicity of the topology. In those situations where the fabric is just one
switch, there is no strong need for a network-wide management tool, and some customers
prefer more automation (for example, autozone) versus more management capabilities.
The close partnership between IBM and Cisco continues to provide various information
technology expertise that is leveraged to offer the best technical solutions and help you create
and deploy updated platforms that provide your data center with flexible and secure
connectivity. We are constantly learning and adapting to meet your evolving business needs.
IBM is dedicated to building a secure foundation as technology continues to advance and
provide extraordinary outcomes in the data center today from core to edge.
Globally over the years, more businesses have been deploying large-scale SANs due to
exponential growth in the data center. IBM c-type SAN switches offer many features that will
help clients lower total cost of ownership (TCO) coupled with the ability to consolidate SAN
environments and deliver the tools for IT professionals to manage, report, diagnose, and
monitor their SAN infrastructures. The ability to incorporate robust security into on-premises
business deployments has transformed and crossed over to off-premise cloud environments.
The IBM c-type family portfolio has four SAN switches and three SAN Directors. Each of
these components comes with hardware and software that is needed to implement a resilient,
secure fabric for your storage and server devices. All IBM c-type hardware comes with 1 year
of onsite 24x7 same-day maintenance with optional service options to upgrade. The
IBM c-type platform offers a mainframe IBM Fibre Connection (FICON) intermix solution,
which includes traffic isolation, management, quality of service (QoS), and scaling.
IBM mainframes have been the workhorse of the computing world for more than 6 decades
and have gained the trust of many large organizations who entrust IBM Z servers with their
most demanding mission-critical applications.
Important: Switch-based licenses are mapped to the serial number of an IBM c-type
chassis, which is 11 digits.
Note: For more information about standard and licensed features, see Cisco MDS 9000
Series Licensing Guide, Release 8.x.
When the Mainframe Package is installed, IBM c-type devices can be used with the
IBM z13®, IBM z14, and IBM z15™. The package includes support for the new FICON
Express 16S feature and complete interoperability with Forward Error Correction (FEC)
capabilities.
VSANs can span switch modules and can vary in size. For example, one VSAN with multiple
FC ports can span different line cards. Adding ports to a VSAN is a nondisruptive process.
VSANs can contain up to 239 switches and have an independent address space that allows
identical Fibre Channel IDs (FCIDs) to be used simultaneously in different VSANs. The
maximum number of ports for a FICON VSAN is 253 (0xFE is the CUP and 0xFF is reserved) due
to FICON-specific addressing rules that do not apply to FC.
With VSANs, you can specify and assign specific cascaded links. The VSAN feature allows
you to maintain throughput and enable complete traffic isolation when using the same
physical hardware throughout the FICON fabric, as shown in Figure 2-1.
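The following is a minimal NX-OS CLI sketch of creating a VSAN and assigning interfaces to it.
The VSAN number, name, and interface identifiers are assumptions for illustration only; a FICON
VSAN additionally requires steps such as enabling the FICON feature, a static domain ID, and
fabric binding, which are covered later in this book.
switch# configure terminal
switch(config)# vsan database
switch(config-vsan-db)# vsan 30 name FICON30
switch(config-vsan-db)# vsan 30 interface fc1/1
switch(config-vsan-db)# vsan 30 interface fc1/2
switch(config-vsan-db)# end
switch# show vsan membership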
Inter-VSAN Routing
In another step toward deployment of efficient, cost-effective, and consolidated storage
networks, the IBM c-type supports IVR, the industry’s first and most efficient routing function
for FC.
IVR allows selective transfer of data between specific initiators and targets on different
VSANs while maintaining isolation of control traffic within each VSAN. With IVR, data can
move across VSAN boundaries while maintaining control-plane isolation, thus maintaining
fabric stability and availability.
IVR eliminates the need for external routing appliances, greatly increasing routing scalability
while delivering line-rate routing performance, simplifying management, and eliminating the
challenges that are associated with maintaining separate systems. IVR reduces the total cost
of SAN ownership.
With the IVR Zone wizard inside the DCNM management GUI, an administrator can simplify
the process of setting up and configuring the IVR zones in a fabric. The wizard can help with
checking all of the switches in the fabric for the code that is running on the switch.
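As a hedged sketch, IVR can also be enabled and configured from the NX-OS CLI along the
following lines. The VSAN numbers, zone and zone set names, and pWWNs are placeholders,
and IVR NAT with automatic topology discovery is shown only as one common combination;
verify the options against the NX-OS configuration guide for your release.
switch(config)# feature ivr
switch(config)# ivr nat
switch(config)# ivr distribute
switch(config)# ivr vsan-topology auto
switch(config)# ivr zone name IZ_host_to_array
switch(config-ivr-zone)# member pwwn 21:00:00:24:ff:4c:aa:01 vsan 10
switch(config-ivr-zone)# member pwwn 50:05:07:68:0b:23:5c:11 vsan 20
switch(config)# ivr zoneset name IZS_prod
switch(config-ivr-zoneset)# member IZ_host_to_array
switch(config)# ivr zoneset activate name IZS_prod
switch(config)# ivr commit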
Any configuration changes that you apply to a port channel are applied to each member
interface of that port channel.
Note: A port channel is operationally up when at least one of the member ports is up and
that port’s status is channeling. The port channel is operationally down when all member
ports are operationally down.
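The following minimal sketch creates a trunking port channel and adds two member interfaces.
Port channel 10, VSAN 30, and interfaces fc1/1 and fc1/2 are assumptions for illustration; the
force keyword aligns the member port parameters with the port channel configuration.
switch(config)# interface port-channel 10
switch(config-if)# channel mode active
switch(config-if)# switchport trunk mode on
switch(config-if)# switchport trunk allowed vsan 30
switch(config)# interface fc1/1
switch(config-if)# channel-group 10 force
switch(config-if)# no shutdown
switch(config)# interface fc1/2
switch(config-if)# channel-group 10 force
switch(config-if)# no shutdown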
Over the last few years, many new FICON CUP enhancements have been introduced by IBM,
Cisco, and other vendors. These advanced technology enhancements are geared toward
making it easier for administrators to gain better overall insight into FICON SAN fabrics as
IBM Z has evolved and configurations have become more complex.
Note: IBM c-type FICON Directors support up to eight FICON VSANs, each with its own
CUP device.
Currently, IBM c-type switches support many different speeds of connectivity up to 32 Gbps.
In addition to this, 64 Gbps is expected to become available in the future. When available, the
new linecards can be added to existing director chassis with no service disruption. As your
data center grows or shrinks, these SAN switches can keep up with the dynamic pace of the
changing environment.
The FCP ports on all the modules support auto-sensing Small Form-factor Pluggable
(SFP) transceivers that can determine the speed that is needed to operate in that fabric.
Note: FCIP is supported by IBM Storage Networking SAN50C-R and the IBM 24/10 Port
SAN Extension Module.
Over expensive wide area network (WAN) links, the IBM c-type switches can use hardware-based
compression, which enhances performance for both high-speed and low-speed links while keeping
costs down. Hardware-based encryption can also be enabled to help secure sensitive traffic
with IPsec.
2.8 Trunking
Trunking is a commonly used storage industry term, but unfortunately it has assumed
different meanings. The NX-OS software and switches in the IBM c-type family adopt the
meaning that is prevalent in the industry and implement trunking and port channels as
follows:
Port channels enable several physical links to be combined into one aggregated logical
link.
Trunking enables a link transmitting frames in the EISL format to carry (trunk) traffic from
multiple VSANs. For example, when trunking is operational on an E-port, that E-port
becomes a Trunking E-port (TE-port). A TE-port is specific to IBM c-type switches. An
industry-standard E-port can link to other vendor switches and is referred to as a
non-trunking interface.
FSPF can dynamically establish the shortest and quickest path between any two switches
supporting multiple paths and automatically determine an alternative path around a failed link.
It provides a preferred route when two equal paths are available.
FSPF Link Cost tracks the state of links on all switches in the fabric, associates a cost with
each link in its database, and then chooses the path with a minimal cost. The cost that is
associated with an interface can be administratively changed to implement the FSPF route
selection. The integer value to specify cost can be 1 - 65,535. For example, the default cost
for 1 Gbps is 1000 and for 8 Gbps is 125.
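For example, the cost can be changed per interface and per VSAN from the CLI, as in the
following sketch. The interface, VSAN, and cost value of 500 are assumptions for illustration.
switch(config)# interface fc1/1
switch(config-if)# fspf cost 500 vsan 30
switch(config-if)# end
switch# show fspf database vsan 30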
2.10 Zoning
In this section, we describe four IBM c-type zoning features. Basic zoning and enhanced
zoning are described together for convenience.
Basic and enhanced zoning:
– Basic zoning provides the basic approach of zoning initiators and targets.
– The enhanced zoning feature is used to prevent overwrite of zoning configurations by
concurrent operations through a fabric-wide lock of a zoning session.
Smart zoning
Smart zoning is used to reduce zoning time, reduce the size of the zone database, and
optimize resource utilization on SAN switches by automatically creating multiple
single-initiator single-target (SIST) pairings within a single large zone.
Autozone
The Autozone feature is used to eliminate the need for manual zoning in a single-switch
SAN by automatically creating SIST zones.
Important: Although there are several zoning options that can be leveraged based on use
cases, we recommend enhanced zoning for standard deployments.
Note: Enhanced zoning prevents multiple storage administrators from modifying zone sets
at the same time. A best practice is to use enhanced zoning for all configured VSANs in a
fabric.
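As a minimal sketch, enhanced zoning is enabled per VSAN from the CLI; VSAN 20 is an
assumption for illustration. After enhanced mode is enabled, zoning changes are made within a
fabric-wide session and must be committed.
switch(config)# zone mode enhanced vsan 20
switch(config)# zone commit vsan 20
switch# show zone status vsan 20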
The alternative is to use either SIST zones or single-initiator multiple-target zones. In large
environments, the creation of all these separate zones can be a significant operational
overhead. Smart zoning combines the benefits of both these approaches.
Autozone eliminates manual zoning configuration for selective use cases. Today, a user
manually creates zones, adds initiators and targets to the zones, and adds each zone to a
zone set. Autozone replaces these steps with a single command. The IBM c-type 32-Gbps
fabric switches automatically detect and identify an end-device type as an initiator or a target.
All end devices are zoned automatically by following the scheme of SIST zoning. Autozone is
invoked only once. Any new devices are automatically detected as initiators or targets and
zoned. You do not have to access the switch to modify the zoning configuration when a new or
modified storage assignment is required. The final configuration that is made by Autozone is
the same as that obtained by a manual-zoning configuration.
You can apply QoS to ensure that FC data traffic for your latency-sensitive applications
receive higher priority over throughput-intensive applications, such as data warehousing.
The IBM c-type switches support QoS for internally and externally generated control traffic.
Within a switch, control traffic is sourced to the supervisor module and is treated as a
high-priority frame. A high-priority status provides absolute priority over all other traffic and is
assigned in the following cases:
Internally generated time-critical control traffic.
Externally generated time-critical control traffic entering a switch from another vendor’s
switch. High-priority frames originating from other vendor switches are marked as high
priority as they enter an IBM c-type switch.
N_Port ID Virtualization
NPIV allows an FC host connection or N_Port to be assigned multiple N_Port IDs or FCIDs
over a single physical link. All FCIDs that are assigned can now be managed on an FC fabric
as unique entities on the same physical host. NPIV is beneficial for connectivity between core
and edge SAN fabrics.
NPIV can also be used on hosts. In a virtual machine (VM) environment where many VM
operating systems (OSs) or applications are running on a physical server and using the same
physical host bus adapter (HBA) to access the SAN fabric, each VM can now be managed
independently for zoning, aliasing, and security.
N_Port Virtualization
NPV is an extension of NPIV. The NPV feature allows top-of-rack fabric switches to behave as
NPIV-based HBAs to the core FC IBM c-type Director. The edge switch aggregates the locally
connected host ports or N_Ports into one or more uplinks (pseudo-ISLs) to the core switches.
NPV is primarily a switch-based technology that is designed to reduce switch management
and overhead in larger SAN deployments. This IBM c-type software feature supports
industry-standard NPIV, which allows multiple N_Port fabric logins concurrently on a single
physical FC link.
NPV is a complementary feature that reduces the number of FC domain IDs in core-edge
SANs. Fabric switches operating in the NPV mode do not join a fabric; they only pass traffic
between core switch links and end devices as gateways, which eliminates the domain IDs for
these switches. NPIV is used by edge switches in the NPV mode to log in to multiple end
devices that share a link to the core switch without merging with the SAN fabric.
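The following sketch shows how these features are typically enabled from the CLI; the interface
number is an assumption. Note that switching a fabric switch into NPV mode is disruptive on
most platforms (the configuration is erased and the switch reloads), so plan the change
accordingly.
On the core director, enable NPIV:
switch(config)# feature npiv
On the edge fabric switch, enable NPV mode and configure the uplink as an NP port:
switch(config)# feature npv
switch(config)# interface fc1/1
switch(config-if)# switchport mode NP
switch(config-if)# no shutdown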
The benefits of using NVMe are that it can improve the performance of a centralized storage
infrastructure over a SAN that is built by using IBM c-type switches. NVMe initiators can
access NVMe targets over FC fabrics. In today’s data centers, FC continues to be the
preferred protocol for connecting all-flash arrays due to its flexibility, scalability, availability,
and high-performance plug-and-play architecture. Also, FC offers compatibility with earlier
versions and a phased migration from SCSI to NVMe-based solutions.
FC-NVMe on IBM c-type switches has improved and achieved higher performance by using
flash storage versus hard disk drives (HDDs) for various OSs. Multiprotocol flexibility supports
both NVMe and SCSI in tandem over FC SAN fabrics so that businesses can leverage their
existing infrastructure to deploy new FC-NVMe-capable end devices in a phased approach by
sharing the existing FC SAN.
For this reason, IBM c-type switches implement an advanced feature set to help ensure data
integrity on all data paths. To help ensure the reliable transport of data, IBM c-type switches
use several error-detection mechanisms to provide error-correction capabilities whenever
possible:
Error detection and correction on the supervisor memory.
Cyclic redundancy check (CRC) for frame integrity on the ingress port.
Internal CRC detection for frame integrity at the ingress port of the crossbar fabric module
and the ingress port of the output line card, with automatic isolation of misbehaving
components.
Automatic dropping of corrupted frames.
FEC on ISLs and F-ports.
Syslog-based notifications to the administrator in case anomalies are detected.
Starting with 16 Gbps FC speeds, the robustness of FC networks has been further
strengthened by introducing a new optional feature that is called FEC. The scope of FEC is
not limited to identifying corrupted frames, but includes the possibility of correcting them in
real time. Media such as loose transceivers (SFPs) or dirty cables might result in corrupted
packets on ISLs and even links toward end nodes. FEC helps reduce or avoid data stream
errors that would result in corrupted frames and lead to application performance degradation,
as shown in Figure 2-2.
Standards bodies have made the usage of FEC mandatory on all 32 Gbps FC products and
future higher bit rates.
Despite FEC being optional for 16 Gbps ports, it has seen significant adoption in IBM Z
environments, where it can be supported end-to-end, from host to control unit (CU), and not
just across ISLs. For these FICON environments, transmitter training was required in
combination with FEC.
When enabled, FEC on 16 Gbps ports would allow for recovery of up to 11 error bits in every
2112-bit transmission, thus enhancing the reliability of data transmissions. On 32 Gbps ports,
the use of FEC is mandatory and its implementation has become even stronger. The
mathematical algorithm that is implemented is Reed Solomon, which can correct 7 out of 514
symbols, with any symbol being 10 bits. This process translates into an error correction
capability that is more than double what was possible at 16 Gbps.
Using FEC enables more statistics on the relevant ports. When everything is operating
normally, the FEC corrected errors counter stays at zero, but it shows a nonzero value if any bits are
corrected. This way, even if links are working properly, the SAN administrator can easily see
when FEC is correcting bits, and that is an indication that some action is needed at the next
available maintenance window. In other words, FEC on IBM c-type switches allows the
administrator to identify possible issues on the link even if applications are not yet negatively
affected. This example is one of preventive maintenance.
Because this FEC capability is fully implemented in hardware and uses an in-band approach,
there is no performance impact on throughput and no bit-rate change. The latency
contribution is also maintained under 100 nanoseconds on every link, virtually eliminating any
impact on application performance. All IBM c-type switches can detect and drop corrupted
frames at the switch input, but FEC adds another layer of resiliency to help correct errors
wherever feasible and reduce the number of packet drops. When enabled, this feature
contributes to more resilient end-to-end frame delivery.
The FEC capability can be enabled and monitored through the command-line interface (CLI)
or the DCNM GUI.
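On 16 Gbps ports, FEC and transmitter training can be enabled per interface, as in the following
sketch; the interface number is an assumption, and on 32 Gbps ports no configuration is needed
because FEC is always on. The detailed interface counters include the FEC corrected and
uncorrected counters that are mentioned above.
switch(config)# interface fc1/1
switch(config-if)# switchport speed 16000
switch(config-if)# switchport fec
switch(config-if)# switchport fec tts
switch(config-if)# end
switch# show interface fc1/1 counters detailed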
Not all NX-OS releases are FICON certified, which means that you can run an ISSU only by
jumping from one FICON certified release to another FICON certified release.
Cisco SAN Analytics is an industry-first for storage networking devices. It can inspect FC,
SCSI, and NVMe headers within FC frames, and it can correlate I/O flows and analyze them.
It exposes initiator, target, LUN, and namespace identifiers in multiple views. A recent feature
enhancement also provides virtual machine IDs (VMIDs). It scales up to 20,000 I/O flows for
director line cards or switches, and for every flow it collects more than 70 metrics. It can be
easily enabled on host ports, storage ports, or ISL ports.
Note: Cisco SAN Analytics is not available for FICON traffic. However, Cisco SAN
Analytics can be enabled on ports within an FC VSAN when operating in the FC/FICON
intermix mode.
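As a minimal sketch, SAN Analytics is enabled globally and then per interface for the wanted
protocol; the interface number and the decision to collect both SCSI and NVMe metrics are
assumptions for illustration.
switch(config)# feature analytics
switch(config)# interface fc1/5
switch(config-if)# analytics type fc-scsi
switch(config-if)# analytics type fc-nvme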
The recent z15 system extends security, resiliency, and agility to hybrid clouds with
encryption everywhere, instant recovery, and cloud native development.
This solution embeds data security policies and encryption with the data and enforces data
privacy by policy across the entire enterprise, even when that data leaves your data center
and moves to the cloud. This process happens on the same platform that enables you to use
hybrid multicloud services, modernize z/OS applications in place, and integrate with Linux
applications on and off premises.
Pervasive encryption enables customers to protect data and ensure privacy by encrypting
data at the database, data set, or disk level. Using pervasive encryption does not mean that
customers are required to change or adjust applications. Encryption and decryption are handled
transparently below the application layer, allowing clients to apply cryptography without
altering the application itself. These functions go a long way toward addressing data
protection and privacy management challenges that often arise when an enterprise
organization moves to using hybrid IT in a multicloud world.
Security and data protection is a significant operational challenge for any IT organization. The
z15 meets this challenge by using many data-centric audit and protection mechanisms:
Ability to track the location of all your data and the status of all applicable security
mechanisms.
Capability to build data protection and privacy into all applications and data platforms
instead of relying on an assortment of third-party tools.
Security of having data protection and privacy controls that are embedded into every layer
of the computing stack.
Reliability of having a consistent identity management process in place across your hybrid
cloud environment.
Predictability that is delivered by consistently deploying all computing platform elements.
Flexibility to securely move data between infrastructure components and third parties.
Comfort that is enjoyed by being able to meet new data privacy regulations and data
sovereignty laws without fearing the risk and economic loss that is associated with data
security and privacy failures.
The IBM Z family is also a key platform for all organizations with any hybrid multicloud
transition strategy. IBM Cloud® Hyper Protect Services are cloud-specific security services
that provide the following features:
A complete set of encryption and key management services for a specific namespace.
A database on-demand service with the ability to store data in an encrypted format but
without the need for specialized skills.
A secure Kubernetes cluster container service that can come in handy for packaging
applications in a standardized, portable, and scalable way.
The security features that are provided by the recent mainframe systems are matched by a
multitude of security features that are available from IBM c-type networking devices. With the
advent of optical and FCIP solutions that improve high availability (HA) and disaster recovery
(DR), SANs often span outside a single data center, making security concerns even more
important. Third-party IT hosting and colocation services add fuel to an already hot topic.
Network data security is about confidentiality, integrity, and authentication. The security
features on IBM c-type devices can be classified into four main groups:
Security at the device level, both hardware and software
Security for device management access
Security across devices at the fabric level
Security for data in transit
Figure 3-1 represents the four groups of network security features on IBM c-type switches.
Security must be part of any IT design, and it must be enforced by using effective security
management strategies. In the modern digital age, with an expanded attack surface and new
regulations now in place, security is not an optional item anymore. It must be there.
The following sections provide a high-level overview of the security features that are available
on IBM c-type devices with some best practices with regard to implementing them. We do not
push the definition of security to an extreme by including in that category all those features
and best practices that help prevent unintentional service disruption, mostly from human
error. However, to increase your security posture, you must take more configuration steps.
Secure boot ensures that the first code that is run on IBM c-type hardware platforms is
authentic and unmodified. Secure boot anchors the microloader in immutable hardware,
which establishes a root of trust and prevents network devices from running network software
that was tampered with. It protects the boot code in the hardware, shows the image hashes,
and provides the secure unique device identification (SUDI) certificate for the device. During
the bootup process, if the authentication of the secure key fails, the line card module or the
switch fails to boot up, which prevents tampering with the BIOS. Secure boot is enabled by default,
and no configuration is required. The feature is offered at no cost.
Anchoring the secure boot process in the hardware makes sure that the most robust security
is achieved. In fact, a hardware modification is difficult, expensive, and not easy to conceal
even if hackers have physical possession of the device. This approach is different from the
rest of the industry and testifies to how important security is considered by IBM, making
c-type switches the ideal network infrastructure for mainframe deployments.
Coupled to secure boot, anti-counterfeit measures were introduced on IBM c-type devices
and switching modules with 32-Gbps ports. The anti-counterfeit measures ensure that
IBM c-type hardware platforms with an NX-OS software image are genuine and unmodified,
thus establishing a hardware-level root of trust and an immutable device identity for the
system to build on.
The SUDI is permanently programmed into the Trust Anchor Module (TAM) and logged during
the closed, secured, and audited manufacturing processes. This programming provides
strong supply chain security, which is important when the final products see contributions
from multiple, component-level suppliers.
The following syslog message is an example of the error that is reported when the
anti-counterfeit authentication check fails:
ACT2_AUTH_FAIL: ACT2 test has failed on module 9 with error: ACT2 authentication
failure
Note: Secure boot and anti-counterfeit technologies are available on 32 Gbps switching
modules for IBM c-type Directors but not on IBM Storage Networking SAN50C-R.
The following secure protocols are available on IBM c-type devices to secure management
access by using the management port or in-band:
SNMPv3 (SNMPv1 and SNMPv2c are supported but not recommended) provides built-in
security for secure user authentication and data encryption. There is no need for
certificates.
SSHv2 (telnet is supported but not recommended) provides more controlled security by
encrypting data, user ID, and passwords. By default, NX-OS software generates an RSA
key by using 1024 bits and no X.509 certificates are required. SSH public key
authentication can be used to achieve password free logins.
The SCP, Secure File Transfer Protocol (SFTP), and HTTPS services (TFTP and HTTP
are supported but not recommended) protocols offer secure ways to perform file transfers
or exchange instructions. Bidirectional encryption and digital security certificates are used
by HTTPS to protect against man-in-the-middle attacks, eavesdropping, and tampering.
Typically, only the server is authenticated (by the client examining the server's certificate).
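The following sketch consolidates several of these recommendations into one hypothetical
hardening sequence. The user name, key length, and password strings are placeholders; verify
each command against the security configuration guide for your NX-OS release before applying
it.
switch(config)# no feature telnet
switch(config)# no feature ssh
switch(config)# ssh key rsa 2048 force
switch(config)# feature ssh
switch(config)# feature scp-server
switch(config)# feature sftp-server
switch(config)# snmp-server user storageadmin network-admin auth sha Sn3mpAuthPw priv aes-128 Sn3mpPrivPw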
For more information about configuring secure protocols, see Cisco MDS 9000 Series
Security Configuration Guide, Release 8.x.
Passwords should not be trivial or easy to identify, but they should be easy to remember for
the user who created them. Recently, specialized password generators have seen higher
adoption. In this case, passwords are not easy to remember and might include printable
characters that are not accepted by the devices under consideration.
A password should contain at least one alphabetic, one numeric, and a mix of capital and
non-capital letters. It can also contain special characters if they are supported by the device.
When working with IBM c-type and DCNM, do not use any of these special characters in
either usernames or passwords:
The MOTD banner can be used for security purposes. Assuming an intruder is trying to
authenticate into the device, you do not really want to welcome them. Instead, you claim your
exclusive rights on the device and command the non-authorized user to immediately log off.
The MOTD banner also can be used for critical communication, such as informing users
about known contingencies or planned maintenance windows. The MOTD should also
contain the name, email address, and phone number of the device administrator so they can
be easily reached if required.
The MOTD feature is available on both the IBM c-type NX-OS CLI and DCNM management
tools.
The MOTD can be configured from the NX-OS CLI as a simple short message or an extended
multi-line message. When a multi-line message is wanted, enter the command banner motd
followed by the hash character (#) and then the carriage return (enter). Then, you can enter
the text for each line. The same hash character indicates the end of the input text.
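The following is a minimal sketch of a multi-line MOTD configured from the CLI; the wording and
contact details are placeholders only.
switch(config)# banner motd #
Authorized users only. All activity on this device is logged.
Contact: storage-team@example.com
#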
Example 3-1 shows the output that a user would see when accessing the switch.
For more information about configuring the MOTD by using the NX-OS CLI for IBM c-type
devices, see Cisco MDS 9000 Series Fundamentals Configuration Guide, Release 8.x.
The MOTD also can be configured for the DCNM web client splash window, this time in
combination with an optional image. An administrator can do this task by selecting
Administration → DCNM Server → Customization.
Figure 3-3 shows an example of the DCNM splash window with the MOTD included.
In a large organization, you must make sure that only some users can log in to network
devices and the relevant management tools. Due to differences in skills, roles, and
responsibilities, different users have a different set of privileges and are able to perform a
different set of tasks. Moreover, to implement an adequate governance approach, the
management team needs to know who did what and when. Being able to answer this
apparently simple question requires a proper implementation of the overall solution, but can
be vital to identifying wrong behavior within the organization when a network operation is in
jeopardy. Imagine a network outage due to human error. Someone might accidentally shut
down a port and prevent proper communication across the FICON setup. A modern AAA
solution allows for quick determination of which user made the mistake.
All IBM c-type network devices support local AAA services. Users log in, authenticate, and
are authorized to perform some actions on specific resources. All activity is tracked for
accounting purposes. The system maintains the username and password locally and stores
the password information in encrypted form. You are authenticated based on the locally
stored information. Even better, all IBM c-type network devices typically support remote AAA
services, which are provided through the Remote Authentication Dial-In User Service
(RADIUS), Terminal Access Controller Access Control System Plus (TACACS+), or
Lightweight Directory Access Protocol (LDAP) protocols. Remote AAA services offer the
following advantages over local AAA services:
User password lists for each switch in the fabric can be managed more easily.
Remote AAA servers are deployed widely across enterprises and can be easily adopted.
The accounting log for all switches in the fabric can be centrally managed.
User role mapping for each switch in the fabric can be managed more easily.
For most network administrators, the genesis of AAA coincided with the development of the
RADIUS protocol, which is based on UDP ports 1812 and 1813. RADIUS became an internet
standard through the Internet Engineering Task Force (IETF) in 1997, and it is still a widely
accepted AAA protocol. Another commonly adopted AAA protocol is TACACS. It is described
in RFC 1492, but it never became an internet standard. TACACS evolved into XTACACS,
which added accounting capabilities, and then into TACACS+, which is the current version.
TACACS+ runs over TCP port 49 and allows for encryption of all transmitted data, not just
passwords, which overcomes the vulnerabilities that are found in RADIUS.
More recently, organizations have considered the widespread utilization of Microsoft Active
Directory as the primary source of access control for all devices, services, and users,
including network administrators. Leveraging the Microsoft version of LDAP with Kerberos
authentication, network administrators can be redirected to an HA cluster of Active Directory
servers for credential validation (authentication), assignment of privileges (authorization), and
tracking of activity (accounting). For companies where Linux and open source software are
preferred, the OpenLDAP client and its Directory Server are an alternative implementation,
but others are available on the market. Secure LDAP is the typical protocol in use.
In essence, AAA represents a complete system for tracking user activities on an IP-based
network and controlling their access to IT resources. AAA is often implemented as a
dedicated and centralized server, sometimes referred to as an access control server. Devices
talk to the AAA directory server through an AAA daemon, following a classical client/server
approach.
Sometimes, multiple AAA servers based on different protocols are used. Complex
multi-domain systems, like those composed of computing, networking, and storage elements,
might welcome the adoption of multiple authentication tools for the various technical domains
and subject matter experts (SMEs). Each AAA server group is specific to one type of protocol
or service or IT resource type.
For step-by-step instructions about configuring remote centralized AAA services on IBM
c-type devices, see Configuring Security Features on an External AAA Server.
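As one hedged example, the following sketch points a switch at a hypothetical TACACS+ server
and makes the server group the default method for login authentication and accounting. The IP
address and shared key are placeholders.
switch(config)# feature tacacs+
switch(config)# tacacs-server host 192.0.2.10 key TacacsSharedKey
switch(config)# aaa group server tacacs+ TACSERVERS
switch(config-tacacs+)# server 192.0.2.10
switch(config-tacacs+)# exit
switch(config)# aaa authentication login default group TACSERVERS
switch(config)# aaa accounting default group TACSERVERS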
These two default roles cannot be changed or deleted. Users belonging to the network-admin
role are authorized to create and customize up to an extra 64 roles and add other users to
those roles.
Each role can be applied to multiple users and typically each user is assigned to a single role.
CLI and SNMP users sit in different local databases but share common roles. You can use
SNMP to modify a role that was created by using CLI and vice versa. Each role in SNMP is
the same as a role that is created or modified through the CLI. It is possible to limit the scope
of authorization to specific virtual storage area networks (VSANs). In other words, custom
roles can be restricted to one or more VSANs as required.
One of the advantages of separating open systems traffic and FICON traffic into separate
VSANs is that you can grant administrative access on a per VSAN basis, which means that if
you have separate FICON and open systems administrative staff, complete administrative
authority can be given to one VSAN while preventing access to a different VSAN. For
example, it is possible to create a role that is called ficon-admin and allow it network-admin
privileges when accessing the FICON VSAN only.
Example 3-2 lists the CLI commands to create, commit, and distribute a custom role.
Role-based configurations must be committed before they take effect and it is best to
distribute them to all switches in the fabric. This task is important for custom roles. To this end,
the Cisco Fabric Services (CFS) infrastructure provides the necessary support to implement
a single point of configuration for the entire fabric.
<snip>
Role: ficon-admin
Description: Custom role with true network-admin privileges, FICON VSAN only
Vsan policy: deny
Permitted VSANs: 30
-------------------------------------------------
Rule Type Command-type Feature
-------------------------------------------------
1 permit attribute-admin *
For more information about configuring RBAC- and VSAN-restricted users, see Configuring
User Accounts and RBAC.
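For illustration, a VSAN-restricted custom role similar to the one shown above can be sketched
as follows. The role name, rule set, and VSAN number are assumptions, and the exact rule
syntax for attribute-admin privileges varies by NX-OS release, so confirm it against the RBAC
configuration guide.
switch(config)# role distribute
switch(config)# role name ficon-admin
switch(config-role)# description FICON administrators restricted to VSAN 30
switch(config-role)# rule 1 permit config
switch(config-role)# rule 2 permit show
switch(config-role)# rule 3 permit exec
switch(config-role)# vsan policy deny
switch(config-role-vsan)# permit vsan 30
switch(config-role-vsan)# exit
switch(config)# role commit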
There are two VSANs that are created on any IBM c-type switch by default: VSANs 1 and
4094. VSAN 4094 is referred to as the isolated VSAN. When a VSAN that has active ports is
deleted, the ports are then moved to the isolated VSAN. Ports in the isolated VSAN cannot
communicate with any other ports, including other ports that are in the isolated VSAN.
Because of this behavior, moving all ports into the isolated VSAN at initial configuration is an
effective way to secure ports in the fabric, despite it not being a common practice. Then, ports
would require a manual configuration change to be placed in an active VSAN. By default, all
ports are in the default VSAN, and that is VSAN 1. It is not a best practice to use VSAN 1 as
your production VSAN.
By creating a separate VSAN for your production traffic, you effectively isolate your production
devices from any device that is later connected to the switch. Again, a manual configuration
change is required to move a port from VSAN 1 to an active production VSAN.
Of course, VSANs are also used to separate FICON traffic from FC traffic.
The ease of use that is combined with the administrative and security benefits of VSAN
technology explain why it is rare to find any customer that is not using it.
The use of VSANs does not preclude the use of zoning. The two features are complementary.
The zoning process is per VSAN, which means that creating separate VSANs allows zoning
granularity so that a misconfiguration of the zoning database in one VSAN does not cause a
problem for any of the other VSANs.
In storage networking, zoning is the mechanism in FC fabrics that controls what ports are
allowed to inter-communicate. Zoning is the partitioning of end nodes in an FC fabric into
smaller subsets to restrict interference and add security by preventing unintentional
communication. Although multiple devices are made available to a single device through a
SAN, each system that is connected to the SAN should be allowed access only to a controlled
subset of these devices. For example, we do not want an initiator to talk to another initiator
because only initiator to target communication is useful and should be allowed. Target to
target communication is also possible for data replication. In general, single initiator single
target (SIST) zoning is recommended, which is also known as 1:1 zoning.
There can be only one active zone set per VSAN. Other zone sets can be configured, but only
one can be active at a time. Several zones make up a zone set. IBM c-type switches support up to 16,000
zones and 20,000 zone members. Changes to the active zone set can be made
non-disruptively. Zone members can be identified by using the following methods:
Port worldwide name (pWWN): The worldwide name (WWN) of the attached device.
Fabric port (F_Port) WWN: The WWN of the switch port.
FCID of the attached device.
FC alias: The alias name is in alphabetic characters and identifies a port ID or WWN. The
alias can include multiple members.
Device alias: The device alias name is like an FC alias but provides more scalability and
works across VSANs.
Domain and port: The domain ID is an integer 1 - 239. A port number of a non-Cisco
switch is required to complete this configuration.
IP address: The IP address of an attached device, expressed as 32 bits in dotted decimal format
along with an optional subnet mask. If a mask is specified, any device within the subnet
becomes a member of the specified zone.
Internet Small Computer Systems Interface (iSCSI) Qualified Name (IQN): A unique
identifier that is used in iSCSI to identify devices.
Interface and sWWN: Based on a switch interface number and sWWN.
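The following sketch ties several of these member types together by using device aliases. The
alias names and pWWNs are hypothetical, and the final zone commit command applies only
when enhanced zoning is enabled on the VSAN.
switch(config)# device-alias database
switch(config-device-alias-db)# device-alias name host01 pwwn 21:00:00:24:ff:4c:aa:01
switch(config-device-alias-db)# device-alias name array01 pwwn 50:05:07:68:0b:23:5c:11
switch(config-device-alias-db)# exit
switch(config)# device-alias commit
switch(config)# zone name Z_host01_array01 vsan 20
switch(config-zone)# member device-alias host01
switch(config-zone)# member device-alias array01
switch(config)# zoneset name ZS_PROD_A vsan 20
switch(config-zoneset)# member Z_host01_array01
switch(config)# zoneset activate name ZS_PROD_A vsan 20
switch(config)# zone commit vsan 20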
FICON requires the default zone to be set to permit because in FICON environments the
devices that are allowed to communicate are explicitly defined in the Hardware Configuration
Definition (HCD) file on the mainframe and security is derived from there.
Important summary: Default zoning is set to deny by default for open system VSANs.
When you configure a FICON VSAN by using the CLI, you must change the zoning
manually to permit. When using DCNM or Device Manager (DM) instead, the zoning is
changed to permit automatically.
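When a FICON VSAN is configured from the CLI, the default zone policy is changed with a
single command, sketched here for a hypothetical VSAN 30.
switch(config)# zone default-zone permit vsan 30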
There is an alternative zoning approach that is called smart zoning, which is a valid
alternative at scale. It provides a simpler operational environment while keeping the same 1:1
approach at an implementation level.
With smart zoning, the administrator creates large zones with both initiators and targets in
them, grouping devices with some logical criteria of preference. Then, the “smart” capability
applies 1:1 zoning to devices in agreement with best practices. This process is possible
because when end nodes first register in an IBM c-type switch, they declare who they are
(initiators versus targets), and this information is used by smart zoning to establish
communication. This process is different than a default zoning permit, where any to any
communication is allowed.
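As a hedged sketch, smart zoning is enabled per VSAN and the device type can be declared per
zone member; the VSAN number and device aliases below are assumptions for illustration.
switch(config)# zone smart-zoning enable vsan 20
switch(config)# zone name Z_APP_CLUSTER vsan 20
switch(config-zone)# member device-alias host01 initiator
switch(config-zone)# member device-alias host02 initiator
switch(config-zone)# member device-alias array01 target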
Just like VSANs, zoning can be considered a security feature. Better still, IBM c-type
devices support only hard zoning, which is hardware-enforced: any frame entering
a switch port goes through an inspection process and is allowed to reach only the end nodes
that are configured in the ASIC TCAM table. This type of zoning is different from the so-called
soft zoning, where there is no hardware enforcement and some control plane level of
obfuscation of end node addresses. In this case, an intruder might have a frame reach a
forbidden end node if they know the address. The hardware has no way to prevent that
incident from happening.
Figure 3-6 on page 83 highlights the relationship between VSANs and zones.
For more information about zoning and a step-by-step guide about how to configure it, see
Configuring and Managing Zones.
We have only scratched the surface of the FC-SP capabilities. For more information about
this advanced security feature, see Configuring FC-SP and DHCHAP.
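As a minimal sketch, DH-CHAP switch-to-switch authentication can be enabled along the
following lines; the password and interface are placeholders, and both ends of the link must be
configured consistently, as described in the guide referenced above.
switch(config)# feature fcsp
switch(config)# fcsp dhchap password 0 Dhch4pS3cret
switch(config)# interface fc1/12
switch(config-if)# fcsp on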
When port security is enabled, all fabric login and initialization requests from unauthorized
devices (Nx ports) and switches (xE ports) are rejected, and the intrusion attempts
are logged.
To enforce port security, you must configure the devices and switch port interfaces through
which each device or switch is connected. You can use either the pWWN or the node
worldwide name (nWWN) to specify the Nx port connection for each device. For switches, you
use the switch worldwide name (sWWN) to specify the xE port connection. Each Nx and xE
port can be configured to restrict a single port or a range of ports.
Enforcement of port security policies is done on every activation and when the port tries to
initialize. The port security feature requires all devices connecting to a switch to be part of the
port security active database. The switch uses this active database to enforce authorization.
You can instruct the switch to automatically learn (auto-learn) the port security configurations.
The auto-learn option allows any switch in the IBM c-type family to automatically learn about
devices and switches that connect to it. Using this feature to implement port security saves
tedious manual configuration for each port. Auto-learn is configured on a per-VSAN basis. If it
is enabled, devices and switches that are allowed to connect to the switch are automatically
learned, even if you have not configured port access. Learned entries on a port are cleaned
up after that port is shut down.
By default, the port security feature is not activated. When you activate the port security
feature, the auto-learn option is also automatically enabled. You can choose to activate the
port security feature and disable auto-learn. In this case, you must manually configure the
port security database by individually adding each port.
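The following sketch activates port security for a hypothetical VSAN 30 with one manually
configured entry and auto-learn disabled afterward; the pWWN and interface are placeholders.
switch(config)# feature port-security
switch(config)# port-security database vsan 30
switch(config-port-security)# pwwn 21:00:00:24:ff:4c:aa:01 interface fc1/1
switch(config-port-security)# exit
switch(config)# port-security activate vsan 30
switch(config)# no port-security auto-learn vsan 30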
This feature helps prevent unauthorized switches from joining the fabric or disrupting current
fabric operations. It uses the Exchange Fabric Membership Data (EFMD) protocol to ensure
that the list of authorized switches is identical in all switches in the fabric.
Fabric binding requires that you install either the MAINFRAME_PKG license or the
ENTERPRISE_PKG license on your switch. You do not need both licenses.
Port security and fabric binding are two independent features that can be configured to
complement each other. The main difference is that fabric binding works at the switch level
and port security works at the interface level. From an administrative point of view, port
security can benefit from the auto-learn feature. The CFS distribution is not available for fabric
binding.
To enforce fabric binding, configure the sWWN to specify the xE port connection for each
switch. Enforcement of fabric binding policies is done on every activation and when the port
tries to come up. In a FICON VSAN, the fabric binding feature requires that the sWWNs of all
switches and their persistent domain IDs are part of the fabric binding database. In an FC
VSAN, only the sWWN is required (the domain ID is optional).
Example 3-4 shows the usage of an sWWN and domain ID for a FICON VSAN.
In a FICON environment, the purpose of the fabric binding feature is to ensure that ISLs in
FICON cascaded topologies are enabled only for switches that are configured in the fabric
binding database, which includes FC ISLs, FCIP ISLs, and port channels made up of these
ISLs. Each FICON switch that is allowed to connect to the fabric must be added to the fabric
binding database of every other FICON switch in the fabric. Activating fabric binding is a
prerequisite for enabling FICON on a VSAN.
In a FICON cascaded topology the fabric binding database contains the sWWN and domain
ID of all the switches that are authorized to join the fabric. The fabric binding authorization is
enforced per VSAN because each VSAN is a logical fabric. In a FICON point-to-point
switched topology, fabric binding is still required, but the fabric binding database is empty
because defining the local sWWN and domain ID in the fabric binding database is not
required.
Attention: The force option must be used with discretion and care. In fact, it is easy to
have a mistake in the configured fabric binding database, use the force option, and cause
isolation to occur in the fabric.
The EFMD protocol makes sure that all switches in the fabric have identical fabric-binding
databases when ISL links are started. The protocol does not distribute the database when it
is changed on a single switch. The fabric binding database of each switch in the fabric or
VSAN must be manually updated with the sWWN and domain ID of every other switch in the
fabric.
When an ISL is initialized in a FICON VSAN, the following checks are performed:
Is the peer sWWN present in the active fabric binding database?
Does the domain ID of the peer switch match what is present in the active fabric binding
database?
Is the active fabric binding database of the peer switch identical to the active fabric binding
database in the local switch? Again, this is the purpose of the EFMD protocol.
If during an ISL link negotiation the databases from the two switches do not match, the link
does not allow the FICON traffic to flow. When switches are added to an existing fabric, all the
switches must be configured to incorporate the new switches in their active databases.
Fabric binding configuration starts by enabling the feature by running the feature
fabric-binding command. For more information about how to configure fabric binding, see
Configuring Fabric Binding.
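A hedged sketch of a fabric binding configuration for a hypothetical FICON VSAN 30 follows, in
the spirit of Example 3-4. The sWWNs and domain IDs are placeholders; the database must list
every switch that is authorized to join the VSAN.
switch(config)# feature fabric-binding
switch(config)# fabric-binding database vsan 30
switch(config-fabric-binding)# swwn 20:00:00:de:fb:88:4c:80 domain 10
switch(config-fabric-binding)# swwn 20:00:00:de:fb:77:3a:40 domain 20
switch(config-fabric-binding)# exit
switch(config)# fabric-binding activate vsan 30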
This approach is sometimes referred to as port mode security, and it is intended to protect
edge ports from becoming ISL ports. It can be coupled with RBAC so that only certain users
have the required privileges to change the port mode.
Best practice: Configure port mode security on your IBM c-type FICON switches.
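A minimal sketch of port mode security on a hypothetical edge port follows: the port mode is
pinned to F and trunking is turned off so that the port can never negotiate into an E-port or
TE-port role.
switch(config)# interface fc1/10
switch(config-if)# switchport mode F
switch(config-if)# switchport trunk mode off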
All current IBM c-type switches support peer authentication according to the FC-SP standard
by using the DH-CHAP, but this process does not prevent unwanted activities such as traffic
interception. To help ensure data integrity and privacy, data should also be encrypted.
For FC links, the capability to encrypt traffic on the wire is known as TrustSec Fibre Channel
Link Encryption. This capability is an extension of the FC-SP standard and uses the existing
FC-SP architecture. It enables either AES-Galois Counter Mode (AES-GCM) or AES-Galois
Message Authentication Code (AES-GMAC). AES-GCM authenticates and encrypts frames
with the 128-bit Advanced Encryption Standard (AES) algorithm, and AES-GMAC
authenticates only the frames that are passed between the two peers. Encryption is
performed at the line rate by encapsulating frames at switch egress. At switch ingress on the
other side of the link, frames are decrypted and authenticated with integrity checks (a hop by
hop encryption mechanism). Only E-ports and TE-ports that are configured between IBM
c-type switches and their affiliates can support encryption.
Note: IBM Storage Networking SAN50C-R supports peer authentication but not in-transit
data encryption for native FC ports.
There are two primary use cases for TrustSec Fibre Channel Link Encryption:
Customers are communicating outside the data center over native FC (for example, dark
fiber or some type of wavelength division multiplexing).
Encryption is performed within the data center for security-focused customers, such as
defense, military, or financial institutions.
The beauty of TrustSec Fibre Channel Link Encryption is the simplicity of enabling it and the
scale at which it can be enabled without affecting SAN performance. Both the NX-OS CLI and
DCNM allow you to configure and provision this feature. To perform encryption between the
switches, a security association (SA) must be established. An administrator must manually
configure the SA before the encryption can take place. The SA includes parameters such as
encryption keys and a salt (a 32-bit hexadecimal random number that is used during
encryption and decryption). Up to 2000 SAs are supported per switch. Key management is
not required because keys are configured and stored locally on the switches.
TrustSec Fibre Channel Link Encryption requires specific paths within the ASICs, so it is
available only to a limited set of ports. For example, IBM c-type switches can support up to 12
encrypted FC ports on the 48-port 32 Gbps switching module for a total of 384 Gbps. This is a
large amount of encrypted traffic on a single switching module, and is three times higher than
the industry average. It is worth pointing out that enabling encryption on an FC port does not
reduce the number of buffer-to-buffer (B2B) credits that are available for distance extension.
That limitation, often found on other FC products, does not affect IBM c-type switches.
TrustSec Fibre Channel Link Encryption is competitively unique and a clear differentiator for
high-security accounts. The TrustSec Fibre Channel Link Encryption feature requires the
ENTERPRISE_PKG license.
Best practice: Configure TrustSec Fibre Channel Link Encryption with the affected ports
administratively shut down.
For more information about how to configure TrustSec encryption, see Configuring Cisco
TrustSec Fibre Channel Encryption.
3.14 IP Security
For securing FCIP links, the IP Security (IPsec) protocol framework of open standards, which
is developed by the IETF, is used to provide data confidentiality, integrity, and peer
authentication. Therefore, with IPsec, data can be transmitted across a public network without
fear of observation, modification, or spoofing, making it an ideal feature when doing
long-distance DR implementations. The IPsec feature is highly recommended on
long-distance FCIP links because they are the most exposed to traffic interception. The
encryption capability can be enabled natively on IBM c-type switches or alternatively on data
center exit routers, depending on specific situations.
IPsec is composed of two protocols: one for key exchange and one for data flow encryption.
On IBM c-type switches, IPsec uses the Internet Key Exchange (IKE) protocol to handle
protocol and algorithm negotiation and to generate the encryption and authentication keys
that are used by IPsec. IKE provides authentication of the IPsec peers, negotiates IPsec SAs,
and establishes IPsec keys. SAs are per direction, so a full duplex link has two of them.
Conceptually, the SA provides all the parameters that are required to establish how data must
be protected. The security policy, in contrast, indicates which traffic must be protected and inserted
into the IPsec tunnel.
On IBM c-type switches, IPsec uses the Encapsulating Security Payload (ESP) protocol to
achieve data confidentiality (encryption), integrity (hash), and peer authentication (signature
or certificates). The 256-bit AES algorithm is typically used for encryption (this is referred to
as the ESP-AES 256 transform set). Although there are two modes of operation that are
allowed by the IPsec framework, the NX-OS implementation is tied to its specific security
gateway use case, so it supports only IPsec tunnel mode (and not IPsec transport mode, which
is more suited for hosts). The IPsec tunnel mode encrypts and authenticates the entire IP
packet, including its original header. It works by adding an outer IP header and the ESP
header before the original IP packet and adding an ESP trailer and authentication header
after it, essentially creating an envelope around the original IP packet. Because the outer IP
header is in clear text, this approach remains compatible with NAT solutions.
IPsec is applied to the physical interfaces of IBM c-type switches so that FCIP tunnels inherit
this capability.
From a networking point of view, IPsec inserts an extra header into an existing TCP/IP packet.
To avoid fragmentation and achieve higher performance, the encrypted packet must fit into
the Ethernet interface maximum transmission unit (MTU). The maximum FC frame is 2148
bytes, and an extra 100 bytes are needed to accommodate IPsec encryption.
Best practice: Configure an MTU of 2300 bytes on long-distance links when using IPsec.
Some IP networks do not support jumbo frames and the Ethernet MTU value is set to 1500
bytes. In this case, an FC frame does not fit inside and fragmentation occurs, adding load on
the transmitting and receiving switches and contributing to some extra latency. Even in this
case, IPsec for FCIP links can be used, but performance is reduced.
IPsec is supported by all IBM c-type devices supporting FCIP, specifically the 24/10-port SAN
extension module and the IBM Storage Networking SAN50C-R SAN extension switch.
For more information about how to configure IPsec encryption, see Configuring IP Security.
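The configuration on IBM c-type switches generally follows the pattern sketched below; peer addresses, names, the ACL, and the transform set are illustrative placeholders, the exact transform options vary by NX-OS release, and the crypto map is applied to the IP storage interface that carries the FCIP tunnel (GigabitEthernet on older modules). The referenced guide has the authoritative steps.
feature crypto ike
feature crypto ipsec
crypto ike domain ipsec
  key MySharedSecret address 192.0.2.2
  policy 1
ip access-list fcip-traffic permit ip 192.0.2.1 0.0.0.0 192.0.2.2 0.0.0.0
crypto transform-set domain ipsec tfs-aes256 esp-aes 256 esp-sha1-hmac
crypto map domain ipsec cmap-fcip 1
  match address fcip-traffic
  set peer 192.0.2.2
  set transform-set tfs-aes256
interface IPStorage1/1
  crypto map domain ipsec cmap-fcip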
Figure 3-9 illustrates the combined use of TrustSec Fibre Channel Link Encryption and IPsec
features in a design with three data centers.
These devices included an IBM Z server, an IBM System Storage DS8870 server, third-party
Direct Access Storage Device (DASD) systems, an IBM TS7760 Virtual Tape Server (VTS),
and IBM c-type Fibre Channel Protocol (FCP)/ IBM Fibre Connection (FICON) switches and
management software.
The test topologies are representative of the technologies that are required to implement
specific functions, but they do not represent recommended deployments. They do not include
redundant FICON/FCP fabrics and are used for demonstration and illustrative purposes only.
In the local FICON topology, the IBM Storage Networking SAN384C-6 switch, which is
domain 0x20 in virtual storage area network (VSAN) 40, is connected to the IBM Z interfaces
Channel Path ID (CHPID) 18 and CHPID 20 by using switch interfaces 0x00 and 0x30. The
local DS8870 is connected to the switch interfaces 0x10 and 0x40, as shown in Figure 4-3.
In the cascaded FICON topology, there are two switches cascaded together through two
physical interfaces, 0x2F and 0x5F, which are configured to form logical port channel 5
(0xF0). The function of the port channel is to provide a high availability (HA) connection
between the two switches to prevent a single point of failure (SPOF) in the case of a link
failure.
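A minimal NX-OS sketch of such a two-member ISL port channel follows; the interface numbers are placeholders, and the FICON port number for the logical port-channel interface is assigned separately, as shown in the FICON port numbering discussion later in this book.
interface port-channel 5
  switchport mode E
  switchport trunk allowed vsan 40
interface fc1/47
  channel-group 5 force
  no shutdown
interface fc1/48
  channel-group 5 force
  no shutdown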
For this topology, VSAN 40 is used for the cascaded FICON traffic. The IBM Z interfaces are
CHPID 20 and CHPID 30, and they are connected to switch interfaces 0x22 and 0x42
respectively on the IBM Storage Networking SAN384C-6 switch. The remote DASD is
connected to the IBM Storage Networking SAN192C-6 interfaces 0x0A and 0x4A, as shown
in Figure 4-5 on page 95.
Each interface and tunnel on the switches uses a different IP address and subnet, and they
use the default FCIP TCP port of 3225. Additionally, the two FCIP interfaces are combined
into a logical port channel (0xF5) for HA and redundancy.
The IBM Z interfaces CHPID 70 and CHPID 78 are connected to the IBM Storage Networking
SAN384C-6 ports 0x02 and 0x07, and the IBM TS7760 VTS is connected to the remote
IBM Storage Networking SAN192C-6 switch on ports 0x01 and 0x05, as shown in Figure 4-7.
Figure 4-7 Cascaded VTS FCIP configuration with two physical interfaces
The second FCIP topology that is deployed uses two logical FCIP interfaces that are
configured over a single physical interface. In this example, the physical interface is
configured with logical subinterfaces with one 5 GbE FCIP tunnel that is defined for each
subinterface. Each subinterface is mapped to a different VLAN and requires a connection to
an Ethernet switch that understands VLAN tagging.
To make identification easier, the subinterface is named with the VLAN number as part of its
name. So, VLAN 1000 is on subinterface IPS7/3.1000, and VLAN 1010 is on subinterface
IPS7/3.1010. As in the previous example, each interface, in this case subinterface, has its
own IP address and subnet and is associated to a single FCIP tunnel, as shown in Figure 4-8
on page 97.
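A hypothetical sketch of one such subinterface and its FCIP tunnel follows (IP addresses and profile numbers are placeholders); on IBM c-type switches, the number after the dot in the subinterface name is the VLAN ID. The second tunnel, for VLAN 1010 on subinterface IPStorage7/3.1010, is configured the same way with its own subnet and profile.
interface IPStorage7/3.1000
  ip address 192.168.10.1 255.255.255.0
  no shutdown
fcip profile 10
  ip address 192.168.10.1
interface fcip 10
  use-profile 10
  peer-info ipaddr 192.168.10.2
  no shutdown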
The third FCIP topology that is deployed uses two logical FCIP interfaces that are
configured over a single physical interface with no subinterfaces. In this example, each 5 GbE
FCIP tunnel is defined to the same physical interface and uses the same IP address, but is
differentiated in the configuration by using TCP ports 3225 and 3226, as shown in Figure 4-9.
Figure 4-9 Cascaded VTS FCIP configuration with one physical interface and two FCIP interfaces with
different TCP ports
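This variant can be sketched as follows (addresses and profile numbers are placeholders); the second profile listens on TCP port 3226, and the remote peer definition names that port explicitly.
fcip profile 21
  ip address 192.0.2.10
fcip profile 22
  ip address 192.0.2.10
  port 3226
interface fcip 21
  use-profile 21
  peer-info ipaddr 192.0.2.20
  no shutdown
interface fcip 22
  use-profile 22
  peer-info ipaddr 192.0.2.20 port 3226
  no shutdown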
An IBM mainframe computing system (also referred to as a central processor complex (CPC))
consists of a set of hardware products, including a processor unit (PU), and software
products, with the primary software being an operating system (OS) such as IBM z/OS. The
central processor (CP) is the functional hardware unit that interprets and processes program
instructions. One or more CPs work together with other system hardware that can be shared
between multiple CPs, such as I/O channels and storage. A single CP can process only one
instruction at a time. To achieve more processing power, CPCs can contain multiple CPs, and
an OS running on the CPC can use multiple CPs. When an OS has multiple CPs at its
disposal, it can process instructions in parallel, which increases performance.
The mainframe is critical to commercial databases, transaction servers, and applications that
require high resiliency, security, and agility. First introduced more than 50 years ago,
mainframes are still omnipresent today. They handle massive amounts of heterogeneous
processing tasks reliably, securely, and with great redundancy, and they offer compatibility
with earlier programs and applications. Since 1998, Linux has been supported as an
alternative to native mainframe OSs, resulting in a unique combination of earlier and
modern technologies. In fact, it is now commonplace to see mainframes that run COBOL
applications on z/OS alongside Docker containers on Linux by using IBM z/VM®. Mainframes
are in use at 92 of the world's top 100 banks, 23 of the 25 top airlines, all the world's top 10
insurers, and 71% of Fortune 500 companies.
FICON is a storage area network (SAN) communication protocol that is used on IBM Z
mainframe computers to exchange data with their storage arrays. The underlying transport
network uses the same hardware as Fibre Channel Protocol (FCP) SANs, but there are some
unique and critical differences. It is necessary to understand the differences, and their
implications in a SAN design, before describing FICON design considerations and best
practices. Currently, FICON is the most common method that is used to connect the
mainframe to its auxiliary I/O devices, but Direct Attached Storage Devices (DASDs) were
popular long before networked solutions became prevalent. Recently, a new implementation
was introduced for mainframe-to-storage direct connection, and it is known as IBM zHyperLink.
It complements, not replaces, FICON channels when there is a need for low-latency
communication.
IBM Parallel Sysplex® technology represents a synergy between hardware and software and
is composed of Parallel Sysplex capable servers, coupling facilities (CFs), coupling links, Server
Time Protocol (STP), and more. Parallel Sysplex technology is a highly advanced, clustered processing
system. It supports high-performance, multisystem, read/write data sharing, which enables
the aggregate capacity of multiple z/OS systems to be applied against common workloads.
Figure 5-1 shows the connectivity options with the IBM Z platform.
For more information about connectivity options for the IBM Z platform, see IBM Z
Connectivity Handbook, SG24-5444.
This section covers some fundamentals of the FICON protocol, and its origin, purpose, and
terminology. It also explains why a FICON switch is different from a Fibre Channel (FC)
switch.
A FICON switch (or director) supports I/O that contains Fibre Channel Single Byte (FC-SB)-6
payloads, supports the Fibre Channel Framing and Signaling (FC-FS) Extended Link
Services (ELS) that FICON requires, and has support for the IBM Control Unit Port (CUP)
function. All these items are described later.
On the other end of the communication link, a control unit (CU) was needed to interact and
convert between the channel requests and the functions of devices, such as disk, tape,
printers, and card readers. A CU provides the logical capabilities that are necessary to
operate and control an I/O device, and it adapts the characteristics of each device so that it
can respond to the standard form of control commands that are provided by the CSS. A CU
may be housed separately, or it may be physically and logically integrated with the I/O device,
the CSS, or within the server itself. Behind the CU are the I/O devices representing the
communication target of the mainframe.
In z/OS environments, the combination of a channel, a CU, and an I/O device behind it is
named a channel path, and it is statically defined. For switched environments, the channel
path becomes a logical entity that is established through the switched fabric, and it must be
statically defined too, as shown in Figure 5-2 on page 105.
Figure 5-3 I/O in S/360: How a program controls the complete path for an I/O operation
This static approach was perfectly adequate in the period when storage devices were directly
attached to the mainframe on a SCSI bus and the concept of storage networks had yet to
come.
In this way, white space such as blanks did not need to be stored, and the storage media,
such as disk, could be filled to the limit with little space left unused.
As an improvement on the CKD method, Extended Count Key Data (ECKD) introduced
support for nonsynchronous operation. In other words, the transfer of data between the
channel and the CU is not synchronized with the transfer of data between the CU and the
device, as shown in Figure 5-4.
The channel sends the sense instruction to the CU, which sends it to the device. When the
device finds the next record ID under its magnetic read/write head, it sends the ID to the CU.
If the CU sends a negative response back to the channel, it indicates that the requested ID, X,
was not found in the current record. If the channel receives a negative response, it runs the
next instruction, TIC *-8, sending the sense instruction back to the CU, which in turn sends
the command to the device. The disk rotates and eventually the next record's ID is under the
read head, and the process continues.
As you can see, this method of I/O is a real-time dialog with many commands and responses
back-and-forth between the channel and the CU, and between the CU and the device. The
order of operations is critical because a read or write must follow immediately after the correct
record ID is located.
FBA, as the name suggests, uses a consistent block size on I/O devices. The location of the
block can be determined by the block's address and the rotation of the device, and the device
can store the I/O request until the record falls under the head. On one hand, FBA is inefficient
because it wastes disk space. The block size could be 512 bytes (4 K bytes in recent
hardware), but that block might store only 1 byte of user data and the rest of the block's space
is wasted. Also, it requires not one but multiple I/Os for large chunks of data.
However, FBA is simple and the channels (that is, host bus adapters (HBAs)) and devices are
thus less expensive. An I/O operation can read/write a block of data in almost any order
independent of the location of the devices' read/write head. For example, the writing of three
blocks such as A, B, and C can be required by the program, but because of the rotational
delay of the storage media, the order of write operations is B, then A, then C. Using FBA, the
program, the HBA, and the device care much less about the order of I/O commands. Of
course, there is no complete freedom about that order, but it is a lot less stringent than with
the ECKD method.
To summarize, FBA (also known as Logical Block Addressing), can be described this way:
All data blocks are the same size.
Data blocks are sequentially numbered.
The starting point is at Cylinder 0, Track 0, Record 0.
The end point is device-dependent.
32-bit addressing with 512-byte sectors allows for 8 TB per device (more with 4 K byte
sectors).
The similar Cylinder, Head, and Sector (CHS) format is 24 bits.
I/O adapters are often packaged in hardware units (called cards, blades, or modules), so they
can be added, removed, relocated, or replaced as needed. Often, there is more than one
channel adapter in the card for space and cost concerns. For example, ESCON cards often
have eight channels, and FICON cards have as many as four, but most often come with two
channels.
The Channel Path ID (CHPID) is a logical value that is assigned to each channel path of the
system that uniquely identifies that path. The CHPID number range is hexadecimal 00 - FF
(256 elements) and must be unique within a CSS. With IBM Z servers, you can define more
than 256 CHPIDs in the system by using multiple CSSs. CHPIDs are logical values and
provide a level of abstraction that enables easier migration of programs from machine to
machine. CHPIDs are not preassigned on IBM Z platforms. The administrator must assign
CHPIDs to PCHIDs by using the appropriate tools.
5.2.9 IOCP
To implement source-based routing, the channels and the I/O subsystem (IOS) of the OS
must have the configuration of channels, CUs, and devices. This task is done by defining all
these elements and how they are attached.
Figure 5-7 shows a simple example of a definition statement from one channel, 3, to a CU,
1000. The 256 unit addresses on the CU provide device addresses in the range 1000 - 10FF.
The UNIT=2105 parameter defines the type of CU.
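A hypothetical deck fragment with these values might look like the following; the PATH, UNITADD, and device type values are illustrative only, and real IOCP statements must respect column and continuation rules.
CHPID    PATH=(CSS(0),03),TYPE=FC
CNTLUNIT CUNUMBR=1000,PATH=((CSS(0),03)),UNIT=2105,UNITADD=((00,256))
IODEVICE ADDRESS=(1000,256),CUNUMBR=(1000),UNIT=3390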
The list of all definition statements is saved in an input/output definition file (IODF) that serves
as the input to the IOCP. The IOCP creates a compiled and specially formatted version that is
loaded into memory and used by the channels. This compiled version is called the
Input/Output Configuration Data Set (IOCDS). The IOCP also invokes the Multiple Virtual
Storage Configuration Program (MVSCP) to create the version that is used by Multiple Virtual
Storage (MVS).
Note: MVS was one of the primary mainframe OSs on IBM S/390® computers and the
precursor of z/OS. Older MVS releases are no longer supported by IBM, but z/OS supports
running older 24-bit and 31-bit MVS applications and newer 64-bit applications.
There can be many IOCDSs, one of which is selected at Initial Machine Load (IML) or at Initial
Program Load (IPL). Mainframes are resilient, so an IML from a powered-down machine is
rare, but IPLs of individual LPARs are far more common. The IOCDSs are kept on the Support
Element (SE), which is attached to the mainframe and provides direct management and
support services for it.
HCD can make dynamic I/O configuration changes for both hardware and software. An I/O
configuration defines the hardware resources that are available to the OS and the
connections between these resources. The resources include the channels, the
ESCON/FICON Directors (switches), the CUs, and the devices.
To summarize, the HCD element of z/OS supplies an interactive user dialog to generate
the IODF and the IOCDS. The validation checking that HCD performs as data is entered
helps eliminate errors before the new I/O configuration is implemented. The output of HCD is
an IODF, which is used to define multiple hardware and software configurations to the z/OS
OS.
When you activate an IODF, HCD defines the I/O configuration to the CSS and the OS. With
the HCD activate function or the MVS activate operator command, you can change the
current configuration without performing an IPL of the LPAR software or a power-on reset
(POR) for the hardware. Making changes while the system is running is known as dynamic
reconfiguration.
Figure 5-8 shows the relationships of the definition statements, processes, data sets (or files),
and memory areas.
Mainframe environments are designed for reliability. MPIO has redundant paths to the same
device, which allows I/O to a device to continue if there is any failure of a single channel, the
channel card, the fiber connections, the switch, the I/O adapter on the CU, or the card on the
CU. If both MPIO channels to a CU are on the same channel card, a single failure of the card
can cause the loss of both channels, as shown in Figure 5-9.
The data center hardware planner is responsible for ensuring that the paths in the FICON
Directors and the CUs that are used in MPIO are not all on the same card and that the
different cards are not all in the same hardware element.
FICON Directors and CUs also have multiple adapters on the same card. They also have
multiple and separate hardware control elements to provide redundancy in case of hardware
or power failures. Mainframe environments should always have multiple SAN or FICON
switches, and you should ensure that the MPIO paths are distributed and not all on the same
switch.
Important: CMT does not examine the location of the ports on CUs or FICON Directors.
The data center hardware planner is responsible for ensuring that the paths in the FICON
Directors and the CUs that are used in MPIO are not all on the same card and the different
cards are not all in the same hardware element.
Here we present a brief history of the evolution of mainframe I/O to explain the changes and
what was needed to accommodate compatibility with earlier versions:
CKD was improved and extended (ECKD), which required changes at both the channel
and the CU to support new I/O commands. However, old programs did not need to change
when used with ECKD commands.
The scope of the I/O constructs greatly increased:
– CUs and devices were merged into one box with virtual CU and device images. The
different logical CU images each had a CU address (CUADDR).
– Multiple OS images can run concurrently on the same physical mainframe in separate
LPARs. The different LPARs each have a Multiple Image Facility Identifier (MIF ID).
Note: We use the term channel unless there is an important technical reason for the
distinction.
A PCHID enabled more than 256 physical channels on a mainframe and also enabled
mainframes to run more LPARs on the same machine. A program's view of a channel is a
CHPID, but CHPID 3 might be mapped to PCHID 127.
Pipelining was introduced, where some channel commands could be transmitted without
waiting for a response to the prior command. Many channel programs were reading or
writing multiple blocks of data consecutively in one channel program, so why require
waiting for a response after each one? This change increased the speed of I/O, especially
over longer distances.
Interconnection methods such as PCI Express (PCIe) and InfiniBand increased the speed
and reach of mainframe I/O.
Despite all these changes, the essence of CKD's I/O flow remained and old channel
programs, and the programs that invoked them, could run unmodified.
5.2.13 ESCON
In 1990, IBM introduced the first SAN with Enterprise Systems Connection (ESCON). In
addition to using fiber optics rather than copper cables, it also introduced the dynamic I/O
switch, also known as the ESCON Director. It was called dynamic because the physical
connections of channels (or HBAs) and CU adapters were static (unchanging), but the logical
connections were made in the switch between channels and CUs during an I/O operation and
then released so that they could be used to connect to different channels or CUs in the next
I/O operation. The mainframe and its source-based routing required the addition of a
destination link address for each CU and the source link address from each channel, as
shown in Figure 5-10 on page 115, to tell the switch which ports are involved when making a
new dynamic connection. The destination and source link addresses (each 1 byte) were
added to the I/O operation.
The IOCP definition statements were extended to allow the user to specify these link
addresses, as shown in Figure 5-11.
ESCON also introduced Link-Level and Device-Level functions. The Link-Level functions
allowed end units (such as channels, ESCON Director ports, and CU adapters) to initialize,
configure, validate, and monitor hardware ports. Only after the Link-Level functions had
verified the hardware could any channel or CU send Device-Level functions, such as channel
program commands and responses (status and data).
5.2.14 FCP
The FCP enabled migration from a SCSI bus I/O architecture to a SAN by using many of the
inventions of ESCON. FCP allowed for direct connect arbitrated loop (a topology analogous to
the SCSI bus), but most importantly, FCP enabled multiple SAN switches to be connected to
each other to create a fabric of I/O connectivity. Because of these changes, the link address
increased from 1 to 3 bytes:
1 byte (8 bits) for the domain ID (the unique switch identifier in the fabric)
1 byte (8 bits) for the link address (inherited from ESCON)
1 byte (8 bits) for the arbitrated loop port address (AL_PA)
With FCP, a new switch entering a fabric can dynamically obtain a random domain ID from a
principal switch or use one that is pre-specified. Internally, fabric-based routing must know the
domain IDs, but not the external initiators and targets.
An FCP fabric might have different routes from an initiator to a target, with the best one
chosen by the internal Fabric Shortest Path First (FSPF) algorithm. If one path should fail,
other paths might exist, and I/O operations can be rerouted by using FSPF and continue, as
shown in Figure 5-12.
SCSI had few bus addresses, usually 8, so commands were broadcast to all units on the bus.
Broadcasts did not work well with large, scalable fabrics. To group sets of initiators and
targets into small broadcast domains and isolate host-to-device sets from other hosts, FCP
introduced zones. Consider a zone to be an access control list (ACL) or an allowlist, where
only the members of the list are allowed to communicate. The zones are established in the
switch fabric by SAN administrators. Zone sets or zoning configurations contain the zones that
are used in the fabric. Aliases are used to simplify the zones by giving WWNs more human
friendly identifiers. There can be hundreds of zones in a zone set, each one allowing its
members to keep the simple addressing of a SCSI bus.
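For illustration only (the names and WWNs below are placeholders), a zone built from device aliases and activated as part of a zone set looks like this on the NX-OS CLI:
device-alias database
  device-alias name HOST1_HBA0 pwwn 10:00:00:00:c9:aa:bb:01
  device-alias name ARRAY1_P0 pwwn 50:05:07:68:0b:23:45:67
device-alias commit
zone name Z_HOST1_ARRAY1 vsan 10
  member device-alias HOST1_HBA0
  member device-alias ARRAY1_P0
zoneset name ZS_FABRIC_A vsan 10
  member Z_HOST1_ARRAY1
zoneset activate name ZS_FABRIC_A vsan 10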
SCSI has only eight addressable units on the same bus, meaning only 3 bits of the AL_PA
field are required (the five high-order bits are unused). With 10-bit addressing, a single switch
can have up to 1024 ports. The N_Port ID Virtualization (NPIV) technology, which is used
sometimes by FCP initiators and targets, relies on the AL_PA bits too.
5.2.15 FICON
Just as ESCON addressed constraints of SCSI and engendered FCP, FCP solved some
constraints in ESCON, such as duplex communication, which improves both throughput and
latency. Compared to the days of ESCON, a typical 4:1 reduction in the number of paths to
each CU was possible with FICON and still provides adequate I/O bandwidth. Even with four
times fewer channels, the FICON native configuration is capable of more I/O operation
concurrency than an ESCON configuration.
FICON was the mainframe version of FCP. The FC specification for FICON is FC-SB (Single
Byte). As the name indicates, the link address field is only 1 byte, as it was in ESCON.
Several restrictions were placed on the FCP protocol so that mainframes could provide
compatibility with older ECKD I/O programs in an FCP-style SAN. ESCON used only 1 byte
for a link address, so FICON kept the limit of 256 link addresses per switch domain, but FCP
allowed some AL_PA bits to be used. FICON cannot use 10-bit addressing.
In FICON, each channel and CU is mapped to an N_Port. When the CU N_Port is attached to
a fabric, the CU and its devices can be accessible to all channels that are attached to that fabric.
A CU can communicate simultaneously with more than one channel, much like a channel can
communicate with more than one CU.
IBM originally announced the commercial availability of FICON channels for the S/390 9672
G5 processor in May of 1998, 2 years later than the relevant specification. Since then, FICON
evolved from 1-Gbit to the 16-Gbit FICON Express16S channels, announced in January 2015
with the IBM z13. Each generation of FICON channels offered increased performance and
capacity. Additionally, new topologies were allowed, like cascading and multihop cascading,
and new routing mechanisms provided better load-balancing on Inter-Switch Links (ISLs).
Major improvements also occurred with IBM High-Performance FICON for IBM System z®
(zHPF) and transport mode. We revisit all these changes with a historical perspective.
Figure 5-15 shows the different scenarios for FICON Converter and FICON Native mode.
Cascading expands the number of CUs that a channel can access, and the number of
channels that a CU adapter can access.
Cascading increases the distance between a channel and a CU (or CTC for FCTC, or CU
to CU for background data replication solutions like Peer-to-Peer Remote Copy (PPRC) or
Symmetrix Remote Data Facility (SRDF))
Most importantly, FICON Cascading allowed multiple data centers in a campus
environment to be connected as one, which was useful for the first level of disaster
recovery (DR) because the data centers could be on different power grids, different
cooling systems, and so on. Also, the data could be replicated from one data center to
another, either by dual write, eXtended Remote Copy (XRC), or PPRC or SRDF.
FICON Cascading required 2-byte link addresses, where the first byte specified the
switch's Domain ID, and the second byte specified the link address on the specified switch.
The name of the FICON specification, FC-SB (where SB stands for single byte) remained,
even though a 2-byte link address made it no longer technically correct.
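As a hedged illustration of the difference (the domain, port, and CU values are placeholders), the LINK parameter on a CNTLUNIT statement grows from one byte to two bytes when cascading is used:
* Single switch: 1-byte link address (destination port 10)
CNTLUNIT CUNUMBR=2000,PATH=((CSS(0),20)),LINK=((CSS(0),10)),UNIT=2107
* Cascaded: 2-byte link address (switch domain 21, destination port 10)
CNTLUNIT CUNUMBR=2000,PATH=((CSS(0),20)),LINK=((CSS(0),2110)),UNIT=2107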
Figure 5-17 on page 121 shows the FICON Link Addressing schema.
A link-level function, Query Security Attributes (QSA), is used at link initialization by all FICON
channels. If the response to QSA shows that the switch has IDID and fabric-binding, the
channel can use RSCN to be informed of fabric changes, complete initialization, and then be
available for I/O operations.
If the channel is attached to a switch that does not have one or more of the required
restrictions, it will have the FICON INCOMPLETE status, as shown in Figure 5-19.
After you set IDID and fabric-binding at the switch, you might have to toggle the channel at the
HMC or SE to force it to go through the link-level initialization again.
Among the data replication techniques, some use only FCP, and others use FICON. For
example, PPRC or SRDF use only FCP, while XRC uses FICON. IBM Geographically
Distributed Parallel Sysplex® (IBM GDPS®) uses many of these data replication techniques.
ECKD, even with pipelining and persistent IU pacing, is still highly sensitive to distance. ECKD
response time suffers with increased distances because the next channel command cannot
be sent until the response from the CU is received, which is after one or more round trips.
To counteract this effect, FC-SB-4 introduced Transport Mode, where the previous approach
was known as Command Mode. In Transport Mode, a channel sends the entire channel
program to the CU, which is possible because the CUs are increasingly sophisticated and
many channel programs follow a pattern, so now every possible channel program is
addressed. In simple terms, Transport Mode is set up in the initial channel program to the CU,
the channel program is sent (and possibly data, for a write operation), and the response is
received (and possibly data, for a read operation). There is far less interaction than with
ECKD and the response time improvement is significant.
This enhancement is called zHPF. zHPF is not enabled by default so that compatibility with
earlier systems is possible. During link initialization, both the channel and the CU indicate
whether they support zHPF or not. As you might expect with such a significant change to how
mainframe I/O is performed, zHPF requires mainframe, CU hardware, OS, and CU microcode
updates.
Using zHPF with the FICON channel, the z/OS OS, and the CU reduces the overhead of the
FICON channel. This reduction is achieved by protocol optimization and reducing the number
of IUs that are processed, which results in more efficient usage of the fiber link. Currently, all
mainframes and almost all disk arrays support zHPF.
This enhancement is called z/OS Discovery and Auto-Configuration (zDAC). zDAC is not enabled
by default. zDAC brings plug-and-play capabilities to the mainframe and can speed up and
simplify the process of adding storage devices.
FC-SB-5 also introduced bidirectional transport mode to make zHPF even faster.
In FCP and FICON, a read or write operation with much data is broken up because the
largest payload in an FCP or FICON frame is 2112 bytes. The frames are chained together in
a sequence so that they arrive at the program or device in the correct order. A channel
program, either FICON or SCSI over FCP, must be sent in order (we have described ECKD,
but the SCSI channel program for a single FBA block must also be in the correct order).
Multiple sequences, such as a complete channel program, compose an exchange. With many
channel programs from the same channel or a CU adapter that is accessed by many
channels, the frames have identifiers of their sequences and exchanges, and can be
interleaved.
With FIDR, the ISL is chosen by using SID, DID, and originator exchange ID (OXID). The
advantage is that the ISLs are far more evenly used, especially over time. Also, the traffic from
multiple VSANs, even FICON and FCP VSANs on the same physical switch, can use the
same ISL port channel because the load is redistributed for each I/O.
Some FICON configurations have a hop of no consequence, typically when using smaller
switches for FCIP. In that situation, all the switches in the path must be in the fabric-binding
database, and paths cannot be dynamically rerouted to other paths in a fabric, as shown in
Figure 5-22 on page 127.
FICON Multi-hop removed the 2-switch restriction for a set of specific fabric configurations
that many IBM customers requested because they are useful for multi-site connectivity. These
configurations are similar to the hop of no consequence. These configurations are shown in
Figure 5-23.
FICON and the FC-SB standard have maintained compatibility with earlier versions. The
characteristics of FICON are integrity, security, flexibility, availability, serviceability,
transactions, efficiency, and reliability, which can be remembered by using the mnemonic IS
FASTER.
The following comparison summarizes how the dynamic behavior of an FCP fabric raises a
FICON concern, and how that concern is addressed by FICON requirements on IBM c-type switches:
Paths through the fabric (up to 239 switches) can be dynamically changed to adjust to link or
switch failures.
– FICON concern: Inconsistent and variable I/O response times based on the path through
the fabric.
– FICON answer: Limit of two switches between end points (extended in some cases with
multi-hop).
Domain IDs can be dynamically determined.
– FICON concern: The host-based specification of the destination domain ID requires a
consistent and static ID.
– FICON answer: Static domain ID.
Link addresses are dynamic and can change as ports or modules are added to a switch, or if
an I/O entity is moved.
– FICON concern: The host-based specification of the destination link address requires a
consistent and static address.
– FICON answer: Assign a port number.
If a switch has more than 256 physical ports, link addresses can use high-order bits from the
AL_PA.
– FICON concern: The host-based specification of the destination link address has a
maximum of 8 bits.
– FICON answer: The FCID last byte is 0.
Switch and port failures cause rerouted traffic patterns.
– FICON concern: The channel does not know about a device path failure until the time of
the I/O operation.
– FICON answer: Use a Registered Link Incident Report (RLIR) and RSCN.
The first switch with the static domain ID will win.
– FICON concern: The domain ID alone is not sufficient to guarantee that the second switch
is correct (and not a malicious copy).
– FICON answer: Use an allowlist of switches (that is, a fabric-binding database).
Any switch can connect to any other switch in the fabric.
– FICON concern: The channel cannot verify that the second switch in the path has the
same required characteristics as the attached switch (that is, a static domain ID and
allowlist).
– FICON answer: All switches must verify that any attached switch has the same
restrictions (static domain ID, fabric-binding database, and Cisco Fabric Services (CFS)
distribution).
A route change in the network can introduce a path that might be faster or less congested
than the old route. When a link change occurs in a port channel, the frames for the same
exchange or the same flow can switch from one path to another faster path.
– FICON concern: I/O on the new path might arrive before previously sent I/O commands to
the same device.
– FICON answer: Use In Order Delivery (IOD) (only when LIOD is not configurable). For
example, with FCIP links, when the simpler LIOD feature is enabled and a port channel
link change occurs, the frames crossing the port channel are treated as follows: frames
that use the old path are delivered before new frames are accepted; the new frames are
delivered through the new path after the switch latency drop period has elapsed and all
old frames are flushed; and frames that cannot be delivered in order through the old path
within the switch latency drop period are dropped.
Load-balancing attributes indicate the use of the source-destination ID (src-dst-id) or the
OXID (src-dst-ox-id, which is the default) for load-balancing path selection.
– FICON concern: An exchange (OXID) within an I/O might use a different route and arrive
before a previously sent exchange.
– FICON answer: The default load-balancing scheme is SID/DID. (SID/DID/OXID is used if
FIDR is supported on all nodes within that specific FICON VSAN.)
For a mainframe to access a device, it must have a definition for a channel that connects to a
CU, which accesses the device. In a FICON environment, there are channels that are
attached to a FICON Director that can be used for I/O. An internal (not physical) port is used
as the destination link address for the virtual CU of the director. This port is called the
director's CUP. ESCON port addresses were 1 byte long, so the range was hexadecimal 00 -
FF. In the first ESCON Director, only 60 ports were physically implemented by using the range
of hexadecimal C0 - FC. Port address FF was reserved for broadcast (but never
implemented). Port address FE was chosen for the CUP. There also must be a CU and device
definition for the ESCON Director; these CNTLUNIT and IODEVICE definitions are collectively called the
CUP. FICON Directors also use port address FE for the CUP. ESCON Directors are unit type
9032, and FICON Directors are unit type 2032, but otherwise the definitions are the same.
Note: Port addresses FE and FF are reserved. FF cannot be used as the destination port
address for any CU. FE can be used only as the destination port address for the CUP.
There were programs such as ESCON Manager (for ESCON Directors) or IBM Tivoli®
IOOPS (for FICON Directors) to use CUP for control and data access. These functions were
incorporated into the z/OS IOS. The System Management Facility (SMF) collects information
from any defined CUP. Also, any defined CUP can send I/O alerts and error messages to the
system console to warn about port failures, for example.
Caution: FICON introduced cascading, but the relatively unsophisticated device driver for
the director was created before cascading, when there could be only one director that was
accessible by a channel. Cascading logically allows a single channel to access multiple
CUPs, but CUP support was never extended to support that feature.
Figure 5-25 on page 131 shows the IOCP statements that allow for two-CUP access (at the
top) and IOCP statements that are allowed as valid statements, but only one CUP can be
accessed by the channel program (at the bottom). For multiple CUPs, you must use a
different channel for each one.
In summary, the CUP feature on FICON Directors is a legacy of CUP on ESCON. Simply put,
it is an in-band management capability. The CUP function allows z/OS to manage a FICON
Director with a high level of control and security. Host communication includes control
functions like blocking and unblocking ports, and performance monitoring and error reporting
functions. FICON Directors have an internal N_Port that is reserved for CUP. The address of
this port is defined (the FE address), and by combining it with the DID of a switch, CUP can
work well in single switch or cascaded topologies.
For more information about the various reports beyond RMF that are available after FICON
CUP is enabled and defined as a device, see 8.7, “FICON Director Activity Report” on
page 426.
When using multiple FICON Directors in a data center, especially if they are cascaded, they
should be on the same qualified level of firmware. When this setup is not possible, make sure
to refer to the release notes for details and verify that the firmware levels are never more than two
levels apart. Generally, most levels of firmware are compatible, but they might support only the
functions and features of the lower level.
Note: The replication methods affect the FICON and FCP SAN configuration.
In the diagrams that follow, for brevity and readability, we use abbreviations like F=FICON,
fc=FCP, M=Metro Mirror (MM), and so on.
Figure 5-26 Storage Replication: Local I/O direct attach or through a FICON Director
If there is a failure of the primary device (the A copy), HyperSwap or EMC AutoSwap will
redirect I/O (dashed line) to the secondary device (the B copy) so that in most cases the
application does not see any I/O issues. Because the I/O is not complete until both copies are
complete, the A and B copies are relatively close to limit any increase in response time due to
distance-induced latency. They might be in another data center in the same campus, but in
different buildings with separate power and cooling sources.
Global Mirror (GM) is asynchronous replication that also is configured and managed by the
CUs. Asynchronous replication can have a delay before new source data updates are
reflected in the remote device. FCIP or DWDM systems extend the distance beyond the limit
of an ISL, as shown in Figure 5-28.
As shown, FCIP can be connected to local area network (LAN) switches or routers, then
consolidated to wide area network (WAN) routers. WAN routers can either be attached to
remote sites over a network (dotted network line) or over a DWDM (dashed network line) that
is carrying other inter-site traffic. FCIP switches can also be attached directly to a DWDM
system (dotted FCIP line).
XRC is another replication technique that uses an application that is called Sysplex Data
Mover (SDM) in a remote site. Using FICON I/O, Figure 5-30 shows that SDM (1) monitors
the devices in the main site (2) for updates and copies them back to the remote site. The
updates are collected on Indexed Journal devices (3), and when the updates on devices that
are associated in a consistency group are all copied, they are copied by using FlashCopy to
the device that holds the remote copy (4).
In a director-level device, the FICON VSAN and the FCP VSAN can all be placed on one
physical chassis, and both can use the same FCIP connections to the remote site. FICON
connectivity also extends between sites to allow the local channel on CEC 1 to access the remote
GM copy, or to allow the SDM when using XRC. Network connections also exist between sites and can be on the
same LAN and WAN as FCIP, but should be on separate circuits to ensure that FCIP has
dedicated network capacity. Furthermore, this approach shows only one item of each
component (for example, channel, device, FICON Director, or WAN router), where at least two
of each component is necessary for fault tolerance.
The following items should be considered during the installation planning phase for the IBM
c-type family in a FICON environment:
Which topology must be implemented?
Best practice: Create a reasonably detailed picture of both the current and proposed
topology during the planning phase. It is useful to describe the deployment to other
individuals and facilitate disambiguation efforts (that is, verify that different terms or
concepts that are used by different technical areas are describing the same or different
things).
Note: Open systems and FICON can coexist in separate logical fabrics by using VSAN
technology. A pure FICON environment can be built by using a single VSAN, with
multiple paths in the fabric managed by z/OS. IBM c-type switches have native support
for VSAN technology. It is always enabled, much like LPARs on modern mainframes.
Note: A minimum of two FICON paths from an IBM Z host to the IBM Storage
system is required. Also, there is a minimum of two FCP paths that are needed for
any open systems traffic, which cannot share the FICON paths.
– Add more FCP connections for concurrent open systems access to the IBM Storage
system.
– Other storage subsystems also have similar minimum pathing requirements.
– For more than 8 TB of IBM Z capacity, use six to eight FICON paths to the
IBM Storage system from the IBM Z host.
– With multiple IBM Z hosts, route their paths through one or more FICON Directors, and
use the same rules as above based on the IBM Storage capacity for the total number of
paths from the directors to the IBM Storage system.
– Mainframe channels are defined as either FICON or FCP; they cannot be
both. CU adapters are likewise defined as either FICON or FCP, not both.
However, an ISL on a switch can be trunked to carry both FICON and FCP VSANs.
How many FICON channels and CU ports will be connected to each FICON Director?
– The number of channels and CU ports to be connected to each director depends on
the number of FICON channels on the server or servers and CU ports on the devices,
and the individual performance, availability, and growth requirements. It is possible to
install and define more than eight paths to any physical CU (from the same IBM Z or
S/390 processor image) when the physical CU has two or more logical CUs. A
maximum of only eight channel paths may be defined to any one logical CU. This
approach can be used for physical CUs that support greater than eight concurrent I/O
transfers and that have a customer requirement for a high I/O rate, such as the
IBM DS8000® or IBM Enterprise Storage Server®. For example, each of the eight
FICON paths to an Enterprise Storage Server may be used to address eight different
logical CUs in that Enterprise Storage Server.
– Each FICON path can address up to 16,384 device addresses, and each FICON path
can support multiple (up to 16 or more) concurrent I/O operations.
Note: The speed of light latency of 5 microseconds per kilometer can, at sufficient
distance, exceed the maximum allowable response time for I/O or applications. The
response time is highly variable and depends on many factors within the application,
and it cannot be stated as a standard limit.
Should the switch IDs in the IOCP definitions and domain ID in the FICON Directors
match?
– There are two locations where the switch ID is specified in the IOCP:
• LINK= on the CNTLUNIT statement
• SWITCH= on the CHPID statement
– For the LINK= parameter when identifying a CU adapter, the domain ID of the
destination FICON Director must match that of the switch ID value that is defined at the
FICON switch:
• The domain ID is required in a cascaded environment.
• The domain ID is optional in a noncascaded environment.
• If the domain ID in LINK= is incorrect, the I/O will fail, or worse, go to the wrong CU.
Important: VSAN isolation, not zoning, is the recommended way to isolate FICON
traffic.
Should the out-of-band management port of the FICON Director or Directors and Data
Center Network Manager (DCNM) server be connected to a separate LAN or to the
corporate network?
The directors and the DCNM server should be reachable on a separate LAN to isolate
director management traffic from other IP traffic. When remote access is required to
operate and maintain the FICON Director or Directors, connect the DCNM server to your
corporate network through an IP router.
What Small Form-factor Pluggable (SFP) transceivers should be used for FICON?
The recommended pluggable transceivers to be used in a FICON environment are
16 Gbps or 32 Gbps FC longwave (LWL) transceivers for single-mode fiber with LC connectors.
As part of system planning activities, you must decide where to locate the equipment, how it
will be operated and managed, and what the business continuity requirements are for DR,
tape vaulting, and so on. The types of software (OSs and application programs) that are
intended to be used must support the features and devices on the system.
As a historical note, in the early days of FICON, you had more CU ports than CHPIDs. The
ratio was 1:10 typically, which was the opposite of FC, where initiators were 10 times more
than targets. Today, things are different after the introduction of logical images. A CHPID is
only one adapter or port of a processor. A single processor has many CHPID ports. A CU has
many HBAs or CU ports.
A FICON switch is used as a generic term for indicating an FC switch or director that supports
the transfer of frames containing FC-SB-6 payloads, supports the FC-FS ELS that FICON
requires, and has an internal logical N_Port that supports CUP.
Some models of the IBM c-type switches support FICON, FCP, and FCIP capabilities within a
single, high availability (HA) platform. This combination simplifies the migration to shared
mainframe and open systems storage networks, and it went through an extensive period of
integration testing to meet the most stringent IBM FICON qualification requirements.
FICON is supported on the following IBM c-type Director-class and multiservice switches:
IBM Storage Networking SAN192C-6 mission-critical director
– IBM 48-Port 32-Gbps FC Switching Module (01FT644)
– IBM SAN Director Supervisor Module 1 (01FT600)
– IBM SAN Director Supervisor Module 4 (02JD753)
– IBM 24/10 Port SAN Extension Module (01FT645)
IBM Storage Networking SAN384C-6 mission-critical director
– IBM 48-Port 32-Gbps FC Switching Module (01FT644)
– IBM SAN Director Supervisor Module 1 (01FT600)
– IBM SAN Director Supervisor Module 4 (02JD753)
– IBM 24/10 Port SAN Extension Module (01FT645)
IBM Storage Networking SAN50C-R multiservice switch
Note: The grace period is the amount of time that an application can continue functioning
without a license. In this case, the grace period is set to 120 days from the first occurrence
of configuring any licensed feature without a license package. The grace period starts with
the first check-out, and will be counted only for the days when that feature is enabled and
configured (even if not used). If you remove configuration for this feature, the counter for
the grace period stops incrementing.
FICON traffic may also be carried over 1- and 10-gigabit Ethernet (GbE) links between
switches that have IP storage ports. There is no requirement for a specific SAN_EXT license
to enable FCIP on IBM c-type switches.
When the FICON feature on a VSAN is enabled, the switch IPL file is created automatically
with a default configuration. The IPL file contains specific settings for FICON-enabled VSANs
that are applied at restarts. The IPL file contains port configuration information about each
FICON port regarding what other FICON ports are allowed to communicate with this port
(prohibit function), whether this port is isolated from other FICON ports (block function), and
the descriptive identifier of this FICON port (port name).
The IPL file also includes the port number mapping for port channels and FCIP interfaces,
port number to port address mapping, port and trunk allowed VSAN configuration, in-order
delivery guarantee, the static domain ID configuration, and the fabric binding configuration.
This information is not stored in the startup-config or running-config of the switch as other
configuration information is.
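For reference, enabling FICON on a VSAN typically follows a pattern like the sketch below; the VSAN number, name, and domain ID are placeholders, and depending on the NX-OS release some prerequisites, such as in-order delivery, SID/DID load balancing, and a static domain ID, may be enforced or set automatically when the ficon vsan command is entered.
feature ficon
vsan database
  vsan 40 name FICON40
  vsan 40 loadbalancing src-dst-id
in-order-guarantee vsan 40
fcdomain domain 32 static vsan 40
ficon vsan 40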
In general, changes to the active configuration (HCD settings) are saved to the IPL file too. You
can save up to 16 FICON configuration files on each FICON VSAN. The files are in Extended
Binary-Coded Decimal Interchange Code (EBCDIC) format, and they are saved in persistent
storage so they persist after a reload of the switch. This IPL file works specifically with the
CUP feature, but it can also be edited by using the NX-OS command-line interface (CLI) or
DCNM tool, as shown in Figure 5-35.
Conversely, FICON addressing is static, with a limit of 256 link addresses per FICON Director
or FICON VSAN. In z/OS environments, the IOCP is used to configure logical channel paths
through the fabric, assigning link addresses, logical switch addresses, and Channel Path IDs
(CHPIDs) by using the appropriate macros and parameters.
Port identification on networking devices can have physical or logical relevance. In IBM c-type
switches, physical ports are identified based on the front panel location of the port and the
specific slot in which the switching module resides. Considering a 48-port line card that is
inserted in the upper slot of a modular chassis (slot 1 of an IBM Storage Networking
SAN384C-6, for example), we refer to physical interface fc1/1 for the upper left port, and we
refer to physical interface fc1/48 for the lower right port in the line card, as shown in
Figure 5-36. The last port of an IBM Storage Networking SAN384C-6 Director, fully populated
with 48-port line cards, is interface fc10/48 (the supervisors take slots 5 and 6).
Figure 5-36 Interface numbering schema for a 48-port switching module in slot 1
Similarly, physical Ethernet ports that carry FCIP traffic are identified, for example, physical
interface IPStorage 3/1. This type of port identification is not configurable, and it is unique per
chassis. When referring to logical interfaces that have no clear match to a physical location,
we refer to them with the logical interface type and a sequential integer number, like logical
interface fcip 3 or logical interface port-channel 9. The port identification method is used for
FCP and cabling activities but not for FICON.
Referring to the FICON feature, ports in IBM c-type switches are identified by a statically
defined 8-bit value (256 combinations) known as the FICON Port Number. It is not the same
as the numbering of FC port positions that start at the number one for each module. FICON
Port Numbers are assigned in two ways:
Automatically by NX-OS software based on switch type, the actual slot position of a
module in the chassis, and the relative port position on the module.
Manually forced by the administrator. Before assigning, changing, or releasing a port
number, the port should be in the admin shut state.
For example, if a 24/10 SAN Extension module is inserted into slot 2 in an IBM Storage
Networking SAN192C-6 Director that can potentially take a 48-port module, the port numbers
56 - 79 that are associated with positions 25 - 48 on that system board are considered
uninstalled and are not used by a module that is installed into slot 3 in the same chassis.
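Assuming the slot-based reservation syntax (the slot and range below are placeholders, and the affected interfaces should be administratively shut first), an administrator can override the default numbering and reserve a contiguous block of FICON port numbers for a module, for example:
ficon slot 2 assign port-numbers 48-71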
Because the IBM Storage Networking SAN384C-6 Director can host more than 256 ports,
more than one FICON VSAN is required on those large chassis to use all ports.
On IBM Storage Networking SAN50C-R, there are up to 40 FC ports, and the FICON Port
Numbers that are assigned by default are as follows:
0x00 - 0x27 (0 - 39) for ports FC 1/1-1/40.
0xF0 - 0xFD (240 - 253) are reserved for logical interfaces (FCIP and port channels).
0xFE (254) is reserved for CUP and cannot be assigned to any other interface.
0xFF (255) is a reserved port and cannot be assigned to any interface.
On IBM Storage Networking SAN192C-6, there are up to 192 FC ports, and the FICON Port
Numbers that are assigned by default are as follows:
0x00 - 0x2F (0 - 47) for ports FC 1/1-1/48.
0x30 - 0x5F (48 - 95) for ports FC 2/1-2/48.
0x60 - 0x8F (96 - 143) for ports FC 5/1-5/48.
0x90 - 0xBF (144 - 191) for ports FC 6/1-6/48.
0xF0 - 0xFD (240 - 253) are reserved for logical interfaces (FCIP and port channels).
0xFE (254) is reserved for CUP and cannot be assigned to any other interface.
0xFF (255) is a reserved port and cannot be assigned to any interface.
On IBM Storage Networking SAN384C-6, there are up to 384 FC ports, and the FICON Port
Numbers that are assigned by default are as follows:
0x00 - 0x2F (0 - 47) for ports FC 1/1-1/48.
0x30 - 0x5F (48 - 95) for ports FC 2/1-2/48.
0x60 - 0x8F (96 - 143) for ports FC 3/1-3/48.
0x90 - 0xBF (144 - 191) for ports FC 4/1-4/48.
0xC0 - 0xEF (192 - 239) for ports FC 7/1-7/48.
0xF0 - 0xFD (240-253) are reserved for logical interfaces (FCIP and port channels).
0xFE (254) is reserved for CUP and cannot be assigned to any other interface.
0xFF (255) is a reserved port and cannot be assigned to any interface.
Supervisor modules do not have FICON Port Number assignments, but FCIP interfaces and
port channels, despite being logical and not physical interfaces, need a FICON Port Number
that is different from any of the front panel ports that were described previously. Even CUP is
a logical port that needs a FICON Port Number. FICON Port Numbers for the FCIP interfaces
and port channels are allocated from the address space, which is reserved for logical ports
(beyond the range of the maximum number of physical FICON ports). As an example, here is
how to assign the FICON Port Number 234 to the interface port-channel 1:
ficon logical-port assign port-numbers 234
interface port-channel 1
ficon portnumber 234
To facilitate daily operations, Data Center Network Manager (DCNM) and its embedded Device
Manager (DM) can toggle the display between interface numbers and FICON Port Numbers.
The default visualization with interface labels is shown in Figure 5-37.
Figure 5-37 Default visualization from Device Manager with interface labels
After clicking the toggle icon, the corresponding FICON Port Number visualization opens, as
shown in Figure 5-38.
Figure 5-38 Device Manager visualization of FICON Port Numbers and Device Type
FICON Port Numbers are automatically assigned and cannot be changed. They represent the
first and physical level of addressing in a FICON setup. A second and virtual level of
addressing is introduced for IBM c-type FICON switches. For every FICON Port Number,
there is one associated FICON Port Address.
By default, FICON Port Numbers are the same as FICON Port Addresses. More precisely,
FICON Port Numbers in the CLI of IBM c-type switches are expressed in decimal format.
When expressed in hexadecimal format, they are the same as FICON Port Addresses.
So, what is the reason for having both a FICON Port Number and a FICON Port Address for
every FICON interface? There is a good reason: FICON Port Numbers cannot be changed,
but you can swap the FICON Port Addresses by using the FICON Port Swap feature.
All traffic routing in FICON is accomplished based on the FICON Port Address (it is the value
that forms the link address). So, what happens when a physical interface on a switch goes
bad with the above architecture? To replace a single bad interface, most of the time you must
replace the module it is on, which means that you could be affecting 24 or 48 ports,
depending on the number of ports on the module. Thus, you would need a potentially large
maintenance window to repair a single port issue.
This situation is what the FICON Port Swap function was created for. The FICON Port Swap
function allows you to swap the FICON Port Addresses of two FICON Port Numbers. As an
example, say that you start with two ports: 0x01, which is used for an active FICON
connection, and 0x2F, which is a spare, unused port. Before the swap, each of these ports
has the same FICON Port Number and FICON Port Address, that is, FICON Port Address
0x01 is on physical FICON Port Number 0x01, and similarly for 0x2F. If we swap these two
FICON ports, we exchange the FICON Port Addresses of the FICON Port Numbers. This
process means that FICON Port Number 0x01 will have FICON Port Address 0x2F, and
FICON Port Number 0x2F will have FICON Port Address 0x01. To restore the I/O for FICON
Port Address 0x01, move the fiber connection from FICON Port Number 0x01 to 0x2F and
bring the port back online. The customer can resume I/O with their unchanged IOCP
configuration on the mainframe, without any major maintenance window.
Note: Port swapping is not supported for logical ports (port channels and FCIP links).
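As an illustration of the swap described above (both interfaces typically must be administratively shut down first, and the exact syntax can vary by NX-OS release, so treat this as a sketch), the swap is performed with the ficon swap portnumber command. Port numbers are entered in decimal, so 0x01 and 0x2F become 1 and 47:
Switchname# ficon swap portnumber 1 47
After the swap, move the fiber connection and bring the port back online as described above.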
The above explanation is appropriate for FICON Directors with fewer than 255 usable ports
and reflects how things worked many years ago. However, the situation is more complex for larger directors, so a
slightly different addressing technique is required. These many FICON ports must be split
into at least two FICON VSANs, but you are still constrained by the FICON Port Number limit of
256 values (8 bits) and assignment based on port location. Thus, FICON Port Numbers on
IBM c-type FICON switches are now virtual, with the software allowing customers to assign
the valid FICON Port Numbers to whatever physical interfaces fit their needs. For
convenience, these virtual FICON Port Numbers are still associated with the same value for
the FICON Port Addresses by default.
Of course, you cannot put two interfaces with the same FICON Port Number into the same
FICON VSAN, but you now have an incredible level of flexibility, as shown in Figure 5-39 on
page 147.
The FICON Director is displayed by DM when the toggle icon is clicked. The FICON Port
Addresses are shown above each physical interface. On the first 48-port module (module
1), the FICON Port Numbers and FICON Port Addresses are configured as 0x00 - 0x2F. The
FICON Port Numbers (and the associated FICON Port Addresses) continue sequentially
through module 7, where the final port has a FICON Port Number of 0xEF. Module 8 and
Module 9 reuse FICON Port Numbers 0x00 - 0x5F. The final module shows the ultimate
flexibility of the virtual FICON Port Numbers because they can be placed in
nonsequential order. The two ports with FICON Port Number 0x00 (the first port in module 1
and the first port in module 8) must be in two different FICON VSANs.
Figure 5-40 shows the static FCID allocation for switched and cascaded FICON topologies.
Figure 5-40 Static FCID allocation for switched and cascaded FICON topologies
Thus, a FICON Port Number is assigned, and it must remain the same even if the FICON
Director restarts for some reason. The Port Number cannot be dynamic or randomly assigned
by the switch.
Sometimes, there is confusion regarding the FCID and the FICON Port Number. To
understand the differences, read the following points:
FICON uses source-based routing because it explicitly identifies the switch ID and FICON
Port Number on the destination switch based on the information in the I/O configuration
that is specified in HCD/IOCP. Thus, the FICON Port Number is static for FICON I/O.
FC uses fabric-based routing because the fabric determines the path and provides the
requester (that is, the N_Port) with the destination FCID. As a result, the FCID can change
if the N_Ports (that is, server HBAs or device I/O adapters) move. This change will not
affect any N_Port configurations.
Despite IBM c-type switches having a dynamic FCID allocation scheme, when FICON is
enabled on a VSAN, all the ports are changed to static FCIDs.
The fabric binding feature helps prevent unauthorized switches from joining the fabric or
disrupting current fabric operations. Fabric binding is configured by using a set of switch
worldwide names (sWWNs) and a persistent (static) domain ID and binds the fabric at the
switch level. Enforcement of fabric binding policies is done on every activation and when a
port tries to come up. However, enforcement of fabric binding at the time of activation happens
only if the VSAN is a FICON VSAN. A user-specified fabric binding list contains a list of
sWWNs within a fabric. If a sWWN attempts to join the fabric and that sWWN is not in the
user-specified list, or the sWWN is using a domain ID that differs from the one that is specified
in the allowed list, the ISL between the switch and the fabric is automatically isolated for that
VSAN, and the switch is denied entry into the fabric.
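A minimal fabric binding sketch for a FICON VSAN follows; the sWWN, domain ID, and VSAN number are illustrative assumptions, not values from this book's lab setup:
Switchname(config)# feature fabric-binding
Switchname(config)# fabric-binding database vsan 20
Switchname(config-fabric-binding)# swwn 20:00:00:0d:ec:aa:bb:c0 domain 10
Switchname(config-fabric-binding)# exit
Switchname(config)# fabric-binding activate vsan 20
Switchname# show fabric-binding status
The last command verifies that the policy is active for the VSAN.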
An interface can be assigned to a FICON VSAN. By using a different CLI command or DCNM
action, an interface is assigned a FICON Port Number. There is no single command or action
that assigns an interface to a FICON VSAN and also assigns the FICON Port Number. It is up
to the administrator to know the relationship among the following items:
Interfaces (physical or logical)
Assignment to each FICON VSAN
Assignment of FICON Port Numbers
To offer a practical implementation guide when more than one FICON VSAN is configured,
we show three examples about managing FICON VSANs and assigning FICON Port
Numbers. For brevity, we show only the required configuration steps by using the CLI, but
everything can be done with DCNM DM too.
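As a condensed sketch of the kind of steps involved (the VSAN number, domain ID, slot, and port ranges are illustrative assumptions, and the Mainframe Package license is assumed to be installed), a FICON VSAN might be created, populated with interfaces, and given its port number range as follows:
Switchname# config t
Switchname(config)# vsan database
Switchname(config-vsan-db)# vsan 20 name FICON20
Switchname(config-vsan-db)# vsan 20 interface fc1/1-8
Switchname(config-vsan-db)# exit
Switchname(config)# fcdomain domain 10 static vsan 20
Switchname(config)# ficon slot 1 assign port-numbers 0-23
Switchname(config)# ficon vsan 20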
Like all ports in a FICON VSAN, a port channel must have a FICON Port Number. Recall that
a port channel is logical, so its FICON Port Number must come from the range of reserved
Port Numbers. To determine the FICON Port Number for the ISL port channel, use the
following commands:
Switchname# show ficon port-numbers assign
ficon slot 1 assign port-numbers 0-23, 0-23
ficon logical-port assign port-numbers 240-249
Switchname# show ficon first-available port-number
Port number 240(0xf0) is available
Now, you can define a FICON port channel on the first two physical interfaces of the module
and assign the first available FICON Port Number to it by using the following commands:
Switchname# show port-channel usage
no port-channel number used
Switchname# config t
Switchname(config)# interface port-channel 9
Switchname(config-if)# channel mode active
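The member interfaces and the reserved FICON Port Number would then be added, for example (the interface numbers are illustrative and this is a sketch, not the book's exact lab steps):
Switchname(config-if)# exit
Switchname(config)# interface fc1/1-2
Switchname(config-if)# channel-group 9 force
Switchname(config-if)# no shutdown
Switchname(config-if)# exit
Switchname(config)# interface port-channel 9
Switchname(config-if)# ficon portnumber 240
The value 240 (0xF0) comes from the reserved logical port range that was reported by the show ficon first-available port-number command.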
When you enable the FICON feature in a VSAN, the switches always use the startup FICON
configuration file, which is named IPL. This file is created with a default configuration immediately
after FICON is enabled in a VSAN. Multiple FICON configuration files with the same name
can exist in the same switch if they belong to different FICON VSANs. When FICON is
disabled on a VSAN, all its FICON configuration files are irretrievably lost.
FICON configuration files contain the following configuration options for each implemented
FICON Port Address:
Block / Unblock flag
Prohibit / Allow mask
Port Address name
You cannot prohibit or allow an ISL, a port channel, or an FCIP interface. If an interface is
configured in E or TE mode and you try to prohibit that port, the prohibit configuration is
rejected. Similarly, if a port is not up and you prohibit that port, the port is not allowed to come
up in E mode or TE mode. You cannot block or prohibit the CUP port (0xFE).
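As a hedged sketch of how these options are set from the CLI (the VSAN number, port addresses, and the name are illustrative), the FICON configuration submode is used:
Switchname(config)# ficon vsan 20
Switchname(config-ficon)# portaddress 1
Switchname(config-ficon-portaddr)# block
Switchname(config-ficon-portaddr)# prohibit portaddress 3
Switchname(config-ficon-portaddr)# name SpareSlot
The block, prohibit, and name settings are recorded in the FICON configuration file for that VSAN.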
You cannot directly assign a FICON Port Number to a physical interface. You assign a range
of FICON Port Numbers to the module slot by using the following command:
ficon slot 1 assign port-numbers 0-47
However, it is possible to assign FICON Port Numbers to logical interfaces such as FCIP
interfaces and port channels:
interface port-channel 9
ficon portnumber 0xf0
Best practice: In general, it is recommended to have the same NX-OS release on all
devices that are part of the same fabric.
The recommended NX-OS releases for both FC and FICON protocols are documented and
regularly updated at Recommended Releases for Cisco MDS 9000 Series Switches.
Recommended releases are based on field-proven evidence of stability and lack of significant
issues. Because it takes some time before a statistically significant installation base is proven
to operate with no issues, the recommended NX-OS release is almost always not the latest
version. At times, customers that are highly concerned by security-related aspects might
decide to deploy the most recent NX-OS version, even if it is not the recommended one. In
general, FICON releases go through the longest and most extensive testing and qualification
efforts and meet the highest-quality standards.
FICON capabilities enhance certain models of IBM c-type switches by supporting both open
systems and mainframe storage network environments. When FICON is configured on
devices, only those NX-OS releases that are FICON qualified should be considered for code
upgrades. In simple terms, FICON deployments tend to be more prescriptive in terms of what
NX-OS versions are allowed.
Here is an example: NX-OS Release 8.1(1b) and Release 8.4(1a) are IBM-qualified FICON
releases for IBM c-type devices. From the hardware point of view, NX-OS Release 8.4(1a)
introduces FICON support on the IBM Storage Networking SAN192C-6 Crossbar Fabric-3
Switching Module (DS-X9706-FAB3), IBM Storage Networking SAN384C-6 Crossbar
Fabric-3 Switching Module (DS-X9710-FAB3), and Supervisor-4 Module (DS-X97-SF4-K9).
Several new capabilities also have come to fruition with the new software version. Customers
that use NX-OS 8.1(1b) release might decide that they need to upgrade their switching
infrastructure to NX-OS 8.4(1a) release. Is any intermediate NX-OS release required to
perform the upgrade? Is there any impact on traffic during the upgrade?
Figure 5-42 shows the correct upgrade path for FICON on qualified IBM c-type devices.
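When the upgrade path allows a direct, nondisruptive in-service software upgrade, it is typically started with the install all command, for example (the image file names follow the pattern shown later in this book and are illustrative here):
Switchname# install all kickstart bootflash:m9700-sf4ek9-kickstart-mz.8.4.1a.bin system bootflash:m9700-sf4ek9-mz.8.4.1a.bin
The installer reports its compatibility and impact checks before any module is upgraded; review that output and the path in Figure 5-42 before proceeding.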
When operating in a FC/FICON intermix environment, you can mix FCP and FICON traffic on
a single VSAN and use zoning to separate the two types of traffic, but it is not a best practice.
In fact, one of the advantages of using IBM c-type switches is the intrinsic ability to separate
FICON and FCP traffic into separate VSANs, which is considered a best practice. Using
separate VSANs provides several benefits:
Isolation is improved.
VSAN-based roles for administrative access can be created.
In-order delivery can be set per VSAN.
Load-balancing behavior can be set per VSAN.
Default zoning behavior can be set per VSAN.
Persistent FCIDs can be set per VSAN.
Domain ID allocation (static or dynamic) behavior can be set per VSAN.
FC timers can be set per VSAN.
5.5 Topologies
This section covers the following topics:
Protocols and network topologies
Resiliency and redundancy in Fibre Channel networks
FICON topologies
Figure 5-43 Ethernet networks in the campus, data center, and WAN
In general, Ethernet network topologies are based on the combination of some basic models.
Within data centers, the most common approach is the spine-leaf topology, which gradually
replaced the N-tiered topology of the past decade.
Figure 5-44 represents various possible Ethernet topologies, including the spine-leaf
approach.
FC networks are built and optimized for storage. Being single-purpose, FC networks are
suitable for specific optimizations that make them ideal for connecting hosts to their data.
Traffic patterns are defined and traffic always flows from initiators (servers) to targets
(storage) and vice-versa. FC networks also are within the perimeter of data centers and
occasionally must cross a WAN to reach another data center. The scale of FC networks is
also limited compared to their Ethernet counterparts, and having 10,000 ports in a single
fabric is rare.
The most typical topology is the core-edge design, where initiators connect to the edge and
targets to the core. An appropriate quantity of ISLs are used between the core and edge
switches to ensure that there is enough bandwidth to serve the workloads. If possible, with all
flash arrays, the aggregate bandwidth reaching the target devices should be less than or equal to
the aggregate bandwidth on ISLs.
The core-edge topology is like the leaf-spine topology for Ethernet, but the traffic patterns are
different. The core-edge topology is the recommended SAN topology to optimize
performance, management, and scalability. With the servers and storage on different
switches, this topology provides ease of operations and flexible expansion. Good
performance is possible because initiator to target traffic always traverses a single hop from
the edge to the core, making the solution deterministic.
In the past, the edge-core-edge topology was used to satisfy even larger port counts for a
single fabric. Targets were connected to a set of edge switches, initiators were connected to
another set of edge switches, and core switches established the communication paths. This
topology, despite being scalable, comes with some drawbacks:
The number of ports that are used on ISLs becomes significant, and the ratio of useful end
ports to total ports goes down.
End-to-end latency is higher because of the extra hop, and congestion on ISLs might
occur.
Troubleshooting becomes more complex when something negative happens, and a
slow-drain phenomenon might occur.
The need for edge-core-edge topologies was reduced when high port count switches and directors
became available. For smaller networks, the collapsed core-edge topology is often used.
In this case, the network is a single switch in the form of a modular device where both
initiators and targets are connected. There is no ISL.
Other network topologies exist. For example, a fully connected mesh topology is sometimes
adopted, but it can lead to location-dependent, asymmetric performance levels. A tree topology
also is sometimes used, and it is usually the result of poor initial planning and unstructured growth over
multiple years. In the end, the choice of the best topology depends on many factors, such as
the workloads to be served, fixed versus modular platforms, cost, scalability, and
oversubscription level.
For organizations that want to achieve business continuance under both foreseeable and
unforeseeable circumstances, the elimination of any SPOFs should be a top priority, which is
why a share-nothing approach should be used at the highest level of fabric design: The whole
network should be redundant, with two separate fabrics and no network equipment in
common. The use of logical segmentation (VSAN technology) on top of a single physical
fabric can protect from human errors and other software-related issues, but it cannot provide
the same degree of availability as two physically separated infrastructures.
Figure 5-46 contrasts and compares a physically redundant network with a logically
redundant network.
Servers and storage devices should be connected to both physical fabrics. Data traffic should
flow across both networks transparently in either active-active or active-passive mode,
depending on the settings that are applied to the multipath I/O (MPIO) solution.
MPIO is responsible for helping ensure that if one path on a host fails, an alternative path is
readily available. Ideally, the two fabrics should be identical, but during migrations, differences
in the way the fabric networks are designed and in the products that are used to build them
are common. Generally, these two fabrics, which are identified as SAN A and SAN B, are in
the same location. However, to provide greater robustness at the facility level, they are
sometimes kept in separate data center rooms.
Enterprises also might rely on secondary data centers to achieve business continuance or
DR, depending on the distance between data centers and the recovery point objective (RPO).
Using two fabrics locally and two separate locations within the territory provides an excellent
approach for achieving complete redundancy.
This level of redundancy on ISLs also prevents fabric segmentation even if a link shuts down
under a failure condition. Active mode should be preferred as a setting for port channels, and
is the default on IBM c-type switches with NX-OS 8.4.1 and later so that recovery occurs
automatically without explicitly enabling and disabling the port channel member ports at
either end of the link.
SAN extension line cards should be redundant within a single mission-critical director, and
traffic should be shared among them. An IBM c-type mission-critical director can be filled with
SAN extension line cards with no limitation on the number that can be accommodated.
Members of the same FC port channel should be placed on different line cards, on different
ASICs, or in different port groups whenever possible. Creating a logical bundle across ports
that are served by the same ASIC has no positive effect on network availability and should not
be considered a best practice.
To improve operational ease and achieve greater availability, configuration and cabling should
be consistent across the fabric. For example, do not configure ISLs at the upper left ports in
one chassis and on the lower right ports in another chassis: mirrored configurations are
recommended.
Here is a list of best practices for proper and reliable SAN design to help ensure application
availability on FC networks:
Avoid a SPOF by using share-nothing redundant fabrics.
Use MPIO-based failover for server-to-storage connectivity by using redundant fabrics.
Use redundancy features that are built in to individual fabrics.
Use mirrored cabling and configurations for ease of operation and troubleshooting.
IBM c-type fabrics have built-in resiliency features that are derived from NX-OS,
the operating system that runs on all IBM c-type switches. The self-healing capabilities of
NX-OS can quickly overcome most failures and repair the network. For example, when a link
between switches fails, the FC port channel technology immediately moves all traffic flowing
through that member link to the surviving member links. If an entire FC port channel fails
(very unlikely), the FSPF process immediately recalculates the distribution of all traffic flows.
All these functions require a second route to be available by using redundancy that is built in
to the fabric design.
NX-OS also includes other capabilities that help make networks that are built by using
IBM c-type devices resilient and highly available. For example, processes can be gracefully shut down
and restarted. VSANs isolate traffic flows at the hardware level to the point that a
misconfiguration in the zoning database in a VSAN does not affect the others. When an
individual port is administratively shut down, the process occurs gracefully, with buffers
cleared and no packets lost.
A FICON channel in FICON native (FC) mode uses the FC communication infrastructure that
is supported by IBM Z to transfer channel programs (CCWs) and data through its
FICON/FICON Express adapter to another FICON adapter node, such as a storage device,
printer, or another server (CTC).
When in a cascaded configuration, up to 16 ISLs between the two adjacent FICON Directors
may be grouped and become a FICON port channel.
A channel path that consists of a single link interconnecting a FICON channel in FICON
native (FC) mode to one or more FICON CU images (logical CUs) forms a point-to-point
configuration. A point-to-point configuration is permitted between a channel and CU only
when a single CU is defined on the channel path or when multiple CU images (logical CUs)
share an N_Port in the CU. The channel N_Port and the CU N_Port are responsible for
managing the access to the link among the logical images. A maximum of one link can be
attached to the channel in a point-to-point configuration. The maximum number of CU images
that is supported by the FICON architecture over the FC link to CU is 256, so the maximum
number of devices that can be addressed over a channel path that is configured point-to-point
is equal to 256 times 256, or 65,536.
Multiple channel images and multiple CU images can share the resources of the FC link and
the FC switch so that multiplexed I/O operations can be performed. Channels and CU links
can be attached to the FC switch in any combination, depending on the configuration
requirements and available resources in the FC switch. Sharing a CU through an FC switch
means that communication from several channels to the CU can take place either over one
switch to CU link (when a CU has only one link to the FC switch) or over multiple link
interfaces (when a CU has more than one link to the FC switch). Only one FC link is attached
to the FICON channel in a FICON switched point-to-point configuration, but from the switch
the FICON channel can communicate with (address) several FICON CUs on different switch
ports.
In a cascaded FICON topology, at least three FC links are involved: One is between the
FICON channel on the mainframe and the local FICON Director; the second is between the
FICON Directors; and the third is between the remote FICON Director and the CU.
The FICON channel in FICON native (FC) mode supports multiple concurrent I/O
connections. Each of the concurrent I/O operations can be to the same FICON CU (but to
different devices) or to a different FICON CU.
For cascaded connections, the HCD defines the relationship between channels and a director
(switch ID) and specific switch ports. However, HCD does not define the ISL connections, and
the management of the traffic over ISLs is controlled exclusively by the directors. In fact,
during initialization, the directors identify their peers and create a routing table so that frames
are forwarded to the correct director, which means extra ISL bandwidth can be added to a
topology without any modification to the HCD definitions.
In the basic implementation, one FICON Director is connected to another one through an ISL
in each fabric. A variation of this dual-device deployment uses FCIP for the ISL. Another
possibility keeps the FCIP function on a dedicated pair of switches, and the director connects
to them through an ISL. Even though this topology now includes four switching devices, it is
still a single-hop FICON cascade deployment.
These topologies can be supported when port channels are used instead of individual ISLs.
Note: FICON Multihop is supported only by using traditional static routing methods. FIDR
is not supported.
Figure 5-52 on page 165 shows some alternative FICON Multihop topologies.
There are some design implications that you should follow when implementing a FICON
Multihop environment:
Regarding bandwidth planning and allocation for failure scenarios, extra bandwidth should
be provisioned on all routes to compensate for a possible loss of connectivity between
two switches and the subsequent rerouting of traffic over the nonfailed ISL paths.
Regarding performance and the latency impacts of traffic that is rerouted to longer paths,
there is little that can be done to mitigate these items, so there is a tradeoff for better
availability.
FICON Multihop imposes some hardware requirements on all elements of the topology:
The mainframe must be a z13 or later.
The storage system must be an IBM DS8870 or later, or equivalent third-party storage
system.
The director must be an IBM c-type SAN192C-6, SAN384C-6, or SAN50C-R.
Any DWDM infrastructure in the path must be explicitly approved for this use.
Software releases must meet a minimum version.
FICON Multihop deployments may use either native ISLs or FCIP extension networks. When
native ISLs are used, the usual distance limitation of 10 km is imposed, unless colored SFPs
or transponder-based DWDM equipment is adopted. When deploying long-distance FC ISLs,
you might need to increase the number of buffer credits beyond the default values, which is
possible by using the extended B2B credits feature with the enterprise package license. The
number of buffer credits depends on distance, speed, and average frame size. For FICON
traffic, consider using an average frame size of about 1 KB instead of 2 KB.
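As a sketch, extended B2B credits are applied per interface when the Enterprise package is installed; the credit value below is illustrative, and the command form and maximum depend on the module, port group, and NX-OS release:
Switchname(config)# interface fc1/1
Switchname(config-if)# switchport fcrxbbcredit extended 1500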
With FCIP, longer distances are possible. Only one FCIP hop is allowed per FICON Multihop
configuration. Concurrently, you also must ensure that you do not exceed the FICON timeout
limitations, even in the worst-case scenarios of a path loss. For both native ISL and FCIP
configurations, the longest distance a FICON packet can traverse is 300 km.
All these topologies are valid for the different mainframe generations, from z13 to the z15.
Direct connections between IBM Z FICON adapters and IBM c-type 32 Gbps optics can
operate only at speeds of 16 Gbps and 8 Gbps because there are no 32 Gbps IBM Z FICON
adapters at the time of writing. Links running at 32 Gbps are tested over ISLs between
IBM c-type Directors. Targets can run at 32 Gbps too.
For more information about using FICON Multihop, its requirements, and supported
configurations, see FICON Multihop Requirements and Configurations.
The IBM Storage Networking SAN50C-R switch is qualified for FICON and offers up to 40 FC
ports. It is typically used for SAN extension, but it also can be considered for local switching
inside a data center.
Mainframe environments can usually be sized at deployment time, but this is not always
true for FC environments, so it is good to have some extra port expansion flexibility. In our
mixed FC/FICON deployment with IBM Storage Networking SAN50C-R, we could initially
allocate 28 ports to FICON and eight ports to FC, leaving four ports unused. One VSAN is
configured for FICON and one for FC. If we need more ports for the FC VSAN, we can create
an ISL port channel between this device and another switch, like the IBM Storage Networking
SAN48C-6. The new switch has no port in the FICON VSAN, and it is not FICON qualified.
However, the proposed topology is valid and supported and also can be used for NVMe/FC
traffic on the FC VSAN.
Figure 5-54 shows an example of a single-hop, cascaded FICON topology. It has three ISLs
between the FICON Directors.
Because we have multiple ISLs, how is the traffic distributed among them? To provide an
answer, we must explain how frames are routed in an FC network.
The assignment of traffic between directors over the ISLs is controlled by the director. The
HCD defines the relationship between channels and directors and the specific switch port.
But, the HCD does not define the ISL connections, and the distribution of traffic over ISLs is
controlled exclusively by the director.
The FSPF protocol is a link-state-path selection protocol that directs traffic along the shortest
path between the source and destination. The metric that is used to determine what path is
the shortest is not the distance, but its administrative cost. FSPF has a well-known counterpart
on IP networks, called OSPF, that follows the same design philosophy. The FSPF
protocol goes beyond determining the shortest route for traffic: It also detects link failures;
updates the routing table; provides fixed routing paths within a fabric; and maintains the
correct ordering of the frames.
After FSPF is established, it programs the hardware routing tables for all active ports on the
switch. After a path is assigned to an ISL, that assignment is persistent. However, every time
that a new ISL is added to the fabric, the ISL traffic assignments change. For this reason, this
technique is not attractive to mainframe administrators because they cannot prescribe how
the paths to a subsystem are mapped to the ISLs. In the worst case, all the paths to a
subsystem might be mapped to the same ISL and overload it.
FSPF tracks the state of the links on all switches in the fabric and associates a cost with each
link. The protocol computes paths from a switch to all the other switches in the fabric by
adding the cost of all links that are traversed by the path, and chooses the path that minimizes
the costs. This collection of the link states, including costs, of all the switches in the fabric
constitutes the topology database or link state database.
The topology database is replicated and present in every switching device in the FICON SAN
fabric. Each switching device uses information in this database to compute paths to its peers
by using a process that is known as path selection. The FSPF protocol provides the
mechanisms to create and maintain this replicated topology database. When the FICON SAN
fabric is first initialized, the topology database is created in all operational switches. If a new
switching device is added to the fabric or the state of an ISL changes, the topology database
is updated in all the fabric’s switching devices to reflect the new configuration.
A Link State Record (LSR) describes the connectivity of a switch within the topology
database. The topology database contains one LSR for each switch in the FICON SAN fabric.
Each LSR consists of an LSR header and one or more link descriptors. Each link descriptor
describes an ISL that is associated with that switch. A link descriptor identifies an ISL by the
Domain_ID and output port index of the “owning” switch and the Domain_ID and input port
index of the “neighbor” switch. This combination uniquely identifies an ISL within the fabric.
LSRs are transmitted during fabric configuration to synchronize the topology databases in the
attached switches. They are also transmitted when the state of a link changes and on a
periodic basis to refresh the topology database.
Associated with each ISL is a value that is known as the link cost, which reflects the cost of
routing frames through that ISL. The link cost is inversely proportional to the speed of the link:
Higher-speed links are more desirable transit paths, so they have a lower cost. The topology
database has entries for all ISLs in the fabric, which enables a switch to compute its least cost
path to every other switching device in the FICON SAN fabrics from the information that is
contained in its copy of the database.
As typical with other routing protocols, hello messages are used to establish bidirectional
communication over an ISL. Hello messages are transmitted on a periodic basis on each ISL
even after two-way communication is established to detect a switch or an ISL failure.
After two-way communication is established through the hello protocol, the switches
synchronize their topology databases. During the initial topology database synchronization,
each switch sends its entire topology database to its neighbor switches. When it receives an
acknowledgment, topology database synchronization is complete, and the switches are said
to be “adjacent” on that ISL. The ISL is now in the “full state” and may be used for frame
delivery.
Although the entire topology database is exchanged during this initial synchronization
process, only updates are exchanged during the database maintenance phase to reflect any
topology changes in the FICON SAN. This process makes the protocol more efficient and
faster. The topology database must be updated whenever the state of any ISL changes (ISL
failure or addition).
After the topology database is created and a switch has information about available paths, it
can compute its routing table and select the paths that will be used to forward frames. FC
uses a least-cost approach to determine the paths that are used for routing frames. Each ISL
is assigned a link-cost metric that reflects the cost of using that link. The default cost metric is
based on the speed of the ISL, but it can be administratively changed for traffic engineering
purposes. When multiple paths are available to a destination but one has the lowest cost, the
routing decision is easy, and that path is selected by FSPF. The only way to influence this path
selection is to administratively force a different cost on some of the paths.
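For example, the cost of an ISL can be raised per VSAN so that FSPF prefers other paths; the interface, cost, and VSAN values below are illustrative:
Switchname(config)# interface fc1/1
Switchname(config-if)# fspf cost 500 vsan 20
Switchname# show fspf database vsan 20
The show command displays the resulting link-state database and link costs for the VSAN.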
But what happens when there are multiple paths with the same cost? Which one is selected?
When there are multiple paths with the same cost to the same destination (which is typical for
many cascaded FICON SAN architectures), a switching device must decide how to use these
paths. The switch might select one path only (not ideal) or it might attempt to balance the
traffic among all the available equal-cost paths and avoid congestion of ISLs, which is known
as equal-cost multipathing. If a path fails, the switch may select an alternative ISL for frame
delivery, which is where the different types of ISL routing options come into play. Over the
past several years, multiple techniques beyond simple FSPF were introduced for routing
traffic on FICON ISLs. These techniques fall under two categories: FICON static routing and
FIDR.
IBM c-type SID/DID routing optimizes routing path selection and utilization based on a hash
of the SID and DID of the path source and destination ports. Therefore, the ingress ports that
are passing traffic locally only and not using the ISLs are not assigned to an ISL. The effect of
this enhanced approach is a better workload balancing across ISLs with the guarantee that
exchanges between a pair of devices would always stay in order. However, the ISL utilization
level could still be unequal, depending on traffic patterns. Moreover, the routing table could
change each time that the switch is initialized, leading to unpredictable and nonrepeatable
results.
Despite not being perfect, this static routing approach has been used successfully for many
years. It also works well with a slow-drain device. Static routing has the advantage of limiting
the impact of a slow-drain device or a congested ISL to a small set of ports that are mapped
to that specific ISL. If congestion occurs in an ISL, the IBM Z channel path selection algorithm
detects the congestion because of the increasing initial CMR time in the in-band FICON
measurement data. The CSS steers the I/O traffic away from congested paths and toward
better performing paths by using the CMR time as a guide. More host recovery actions are
also available for slow-drain devices.
High workload spikes resulting from peak period usage or link failures can also be dealt with
more easily with FIDR. FIDR improves utilization of all available paths, thus reducing possible
congestion on the paths. Every time that there is a change in the network that changes the
available paths, the traffic can be redistributed across the available paths.
One example of such a dynamic routing policy is IBM c-type OXID routing. With FIDR, the
routing assignments are based on the SID/DID and the FC OXID. Essentially, FIDR enables
ISL routes to be dynamically changed based on the FC OXID parameter, which is unique for
each I/O operation. With FIDR, an ISL is assigned at I/O request time, so different I/Os from
the same source port going to the same destination port may be assigned different ISLs.
The adoption of FIDR is reflected in both the NX-OS CLI and DCNM.
Figure 5-57 shows FICON VSANs with FIDR enabled as seen by DM. In fact, the
LoadBalancing column indicates the SrcID/DestId/OxId schema.
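On the CLI, the per-VSAN load-balancing scheme is what distinguishes static routing from FIDR. Switching a FICON VSAN to OXID-based (dynamic) routing might look like the following sketch, where the VSAN number is illustrative and all endpoints must support FIDR, as discussed later in this section:
Switchname(config)# vsan database
Switchname(config-vsan-db)# vsan 20 loadbalancing src-dst-ox-id
Reverting to src-dst-id restores the static SID/DID behavior.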
For many years IBM has recommended segmenting the traffic types by keeping FCP traffic
(such as PPRC/MM) and FICON traffic on their own dedicated ISLs or group of ISLs. With
FIDR, it is now possible to share ISLs among previously segmented traffic, which leads to
cost savings for hardware: fewer ISLs means fewer FICON Director ports or DWDM links.
Cost savings on the bandwidth between data centers might be even greater. On the
downside, there are two behaviors that might occur in a FICON SAN with FIDR enabled:
dilution of error threshold counts, and the impact of slow-drain devices.
Dilution of error threshold counts
With FIDR, the error threshold counters can become diluted, that is, they are spread across
the different OS counters, which makes it entirely possible that either the thresholds are not
reached in the period that is needed to recognize the faulty link, or that the host thresholds
are reached by all CUs that cross the ISL in question, which results in all the channel paths
being fenced by the host OS. To prevent this behavior, the user should use the capabilities of
the FICON SAN switching devices to set tighter error thresholds internal to the switch and
fence or decommission faulty ISLs before the OS's recovery processes are invoked.
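One way to implement such tighter thresholds on IBM c-type switches is a port-monitor policy with a portguard action. The counter, thresholds, and action below are purely illustrative, and the available options vary by NX-OS release:
Switchname(config)# port-monitor name FICON-ISL
Switchname(config-port-monitor)# port-type trunks
Switchname(config-port-monitor)# counter invalid-crc poll-interval 60 delta rising-threshold 10 event 4 falling-threshold 3 event 4 portguard errordisable
Switchname(config-port-monitor)# exit
Switchname(config)# port-monitor activate FICON-ISL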
Slow-drain devices
When slow-drain devices are present, FICON SANs are likely to lack buffer credits at some
point in the architecture. This lack of buffers can result in FICON switching device port buffer
credit starvation, which in extreme cases can result in congested or even choked ISLs.
Frames that are “stuck” for long periods (typically for periods >500 milliseconds) might be
dropped by the FICON SAN fabric, which results in errors being detected. The most common
type of error in this situation is a Class 3 (C3) frame discard. When a slow-drain device event occurs
and corrective action is not taken in short order, ISL traffic can become congested as the
effect of the slow-drain device propagates back into the FICON SAN. With FIDR policies
being implemented, a slow-drain device might cause the B2B credit problem to manifest itself
on all ISLs that can access the slow-drain device. This congestion spreads and can
potentially impact all traffic that must cross the shared pool of ISLs. With static routing
policies, the congestion and its impact are limited to the one ISL that accesses the slow drain
device.
IBM Z and z/OS have capabilities that mitigate the effect of slow-drain devices, such as
channel path selection. The algorithms steer the I/O workload away from the paths that are
congested by the slow-drain device toward the FICON channels in a separate, redundant
FICON SAN. Best practices for IBM Z I/O configurations require at least two separate and
redundant FICON SANs. Many users use four, and the largest configurations often use eight.
For MM traffic, best practices call for the IBM Z user to use FIDR in the fabric for predictable
and repeatable performance, resilience against workload spikes and ISL failures, and optimal
performance. If a slow-drain device situation occurs in a FICON SAN fabric with MM traffic, it
impacts the synchronous write performance of the FICON traffic because the write operations
do not complete until the data is synchronously copied to the secondary CU. Because FICON
traffic is subject to the slow-drain device scenarios today, using FIDR does not introduce a
new challenge to the user and their FICON workloads.
Note: FIDR is not supported by FICON Multihop topologies. Use static routing instead.
Using FIDR maintains in-order delivery (IOD) on a per-exchange basis. Different exchanges can be delivered out of order, which is
described as loose IOD as opposed to strict IOD.
More considerations
In any cascaded FICON architecture, it is a best practice to perform a bandwidth sizing study.
Such a study can help you determine the bandwidth that is required for the cascaded links
and the number of ISLs. Anticipated storage needs, type of supported traffic, replication
method, need for GDPS or HyperSwap and other considerations make this sizing study
complicated. As a best practice, use a design tool for bandwidth sizing.
For proper operation, dynamic routing must be supported at all endpoints, the channel and
connected devices, and FICON switches. IBM Health Checker for z/OS can identify any
inconsistencies in the dynamic routing support within the SAN. When dynamic routing is
enabled in the SAN, IBM Health Checker for z/OS verifies that the processor and attached
DASD, tape, and non-IBM devices that are defined as type CTC support dynamic routing and
identifies those endpoints that do not. z/OS uses the CUP device to gather information from
the switch (such as topology and performance statistics for RMF). As part of the information
that is returned from the CUP, there is an indication about whether dynamic routing is enabled
for the SAN fabric or not.
Figure 5-58 IBM Health Checker for z/OS with no inconsistencies detected
Figure 5-59 is an example of the output where not all devices support FIDR. Because FIDR
inconsistencies were detected, FIDR should not be enabled for this fabric.
Figure 5-59 IBM Health Checker for z/OS with inconsistencies detected
In summary, FIDR provides significant technical improvements over the older FICON static
routing policies, but it also introduces new concerns. Customers may decide what they
consider best. Given that the main behavior of concern, that is, the impact of slow-drain
devices, can be managed by the user by using good discipline, the IBM Health Checker for
z/OS, and the IBM c-type FICON SAN management tools, the benefits of implementing FIDR
should far outweigh the potential issues.
For more information about host and CU requirements for using FIDR, see FICON Dynamic
Routing (FIDR): Technology and Performance Implications.
Lossless in-order delivery (LIOD) works only on port channels and not in ECMP scenarios. The port channel is drained of
all packets in flight when a chaser frame is sent to drain the queue on each ISL that is still up.
While this task is happening, frames are queued on the interface but nothing is dropped.
When the last chaser reply is received from the alternate side, the port channel is rehashed
with the member interfaces that are present and traffic is released for normal flow. Traffic is
halted for about 2x the round-trip time (RTT) of the port channel, which for most
implementations is measured in μsec and does not affect response time in a serious way.
Even a 10 km link has a hit of about 100 μsec for a single flow, and then the response time
would return to normal.
With LIOD, if a port within a port channel is administratively taken down or brought up, there
will be no drops at all, and the behavior is truly lossless. If there is a surprise cut in the fiber, there will be a
few frame drops, but 100 times less than during the 500 ms freeze time of IOD. LIOD takes
effect by default when IOD is enabled on a VSAN.
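IOD is enabled per VSAN, and FICON VSANs have it enabled automatically when FICON is turned on. As a sketch for a non-FICON VSAN (the VSAN number is illustrative):
Switchname(config)# in-order-guarantee vsan 20
With IOD active on the VSAN, FC port channels in that VSAN get the LIOD behavior described here.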
LIOD works much like IOD, but it is different in the way it operates, which results in it being
much faster. LIOD sends a command (chaser frame) to a peer switch to flush the queues
instead of waiting for the 500 msec timer. LIOD works only for FC port channels. It does not
work for FCIP port channels. FICON implementations with IBM c-type benefit from LIOD on
FC port channels and achieve IOD with virtually no frame drops.
For more information about IOD and LIOD, see In-Order Delivery.
5.8 FCIP
The IBM c-type switches transparently integrate FCP, FICON, and FCIP in one system. The
FICON implementation on IBM c-type Directors and IBM Storage Networking SAN50C-R
multiprotocol switches supports IP tunneling to efficiently consolidate SANs over WAN
distances. IP tunnels enable a globally accessible storage infrastructure. Using the FICON
over FCIP capability enables cost-effective access to remotely located mainframe resources.
With the IBM c-type platform, IBM storage replication services can be extended over
metropolitan to global distances by using the existing IP infrastructure and further simplify
business continuance strategies.
To facilitate FICON traffic over an IP network, the participating IBM c-type Directors must
each have a 24/10 SAN Extension switching module with two or more FCIP ports in use to
provide a redundant path capability. No license is required to activate the FCIP ports.
Alternatively, you can use the IBM Storage Networking SAN50C-R switch for FCIP
connectivity. The IBM Storage Networking SAN50C-R switch is interoperable with the 24/10
SAN Extension switching module.
The implementation of FCIP on IBM c-type switches is advanced. The TCP/IP protocol stack
was enhanced to offer better performance and higher resiliency over unstable long-distance
WANs. Moreover, data compression can be enabled to minimize WAN bandwidth usage. Data
security is also possible with IPsec technology.
When configuring IBM z15 Fibre Channel Endpoint Security (FCES) connections across an
FCIP tunnel, it is a best practice to turn off compression because the FCES encrypted data is
not compressible.
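A minimal FCIP tunnel sketch follows; the profile number, IP addresses, and interface numbers are illustrative assumptions, and compression is explicitly left off in line with the FCES guidance above:
Switchname(config)# feature fcip
Switchname(config)# fcip profile 10
Switchname(config-profile)# ip address 192.0.2.10
Switchname(config-profile)# exit
Switchname(config)# interface fcip 10
Switchname(config-if)# use-profile 10
Switchname(config-if)# peer-info ipaddr 192.0.2.20
Switchname(config-if)# no ip-compression
Switchname(config-if)# no shutdown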
Topics such as FCIP capacity planning and tuning are influenced by factors that are unique to
each customer's configuration and are not served well with general rules. IBM, Cisco, and
others offer professional services to assist with these complex topics. IBM Z and IBM
LinuxONE Lab Services help clients build and deploy solutions on IBM Z and LinuxONE
infrastructures. For more information, see IBM IT Infrastructure.
IBM c-type Director-class switches always come with front-to-back air flow, which cannot be
changed. With the port side of the chassis acting as the air intake, the installed SFPs operate
at the minimum possible temperature, which boosts their reliability. The IBM Storage
Networking SAN50C-R switch follows the same approach and offers port-side intake for air
flow. Other IBM c-type switches, which are not qualified for FICON, come with both airflow
direction options, which you can select when ordering.
Figure 5-60 illustrates the air flow direction and slot numbering for an IBM Storage
Networking SAN384C-6 mission-critical director.
Figure 5-60 IBM Storage Networking SAN384C-6 air flow direction and slot numbering
IBM c-type networking devices use third-party certified power supplies. The 80Plus Platinum
certification ensures the best energy efficiency in the industry (>94% top efficiency) and
meets the stringent requirements of organizations undergoing IT green initiatives.
Figure 5-61 80Plus testing report for IBM c-type Directors 3 KW AC power supplies
Note: IBM Storage Networking SAN50C-R does not offer 80Plus Platinum certified power
supplies.
The maximum rating of the supported power supplies should not be considered as the
maximum value for switch power consumption. In many cases, power supplies are oversized
to accommodate specific engineering and manufacturing requirements. Moreover, the
number of installed power supplies is determined by the power redundancy schema that is
assumed. As a result, you should not be surprised if an IBM Storage Networking SAN384C-6
switch comes with six 3-KW AC power supplies but reaches a maximum power consumption
of 5 KW and a typical power consumption of only 2.5 KW when fully populated.
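The actual draw, the installed capacity, and the configured power redundancy mode can be checked at any time with the following command:
Switchname# show environment power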
All IBM c-type switches can be hosted in standard 2-post or 4-post 19” racks. Some clearance
is required in the front and back of the chassis to facilitate serviceability. Director-class
switches may accommodate many ports, and appropriate brackets ensure that fiber paths are
optimized.
Figure 5-62 A professionally deployed IBM Storage Networking SAN384C-6 Director with plastic
brackets in the front
For more information about cabling and other best practices, see Cabling Considerations in Storage
Area Networks.
FICON migrations can lead to performance improvements and money savings in a mainframe
environment. There are several considerations, which are described in the following sections.
Both ends of any connection must have the same wavelength. The same fiber can carry
different bit rates, so there is no requirement or investment for the cabling when migrating
from 8 Gb to 16 Gb, for example. Reusing an existing infrastructure and migrating only one
end of an existing connection are strong reasons to continue using the long wave (or short
wave) technology.
The SFP optical components are individually replaceable. SFPs for FCP and FICON can
auto-negotiate their speed from their maximum to two speed levels below it. For example, a
4 Gbps SFP could connect to a 4 Gbps, a 2 Gbps, or a 1 Gbps SFP. A 32 Gbps SFP can connect
to a 32 Gbps, a 16 Gbps, or an 8 Gbps SFP. This capability allows for migration of only one end
of a connection, rather than requiring that both ends are replaced concurrently, which would
make migrations more difficult. So, a switch can be upgraded to allow a higher speed, but all
the processors and storage can keep their current speed and be upgraded independently
later.
Also, the SFPs in a processor, switch, or CU are independent and can be different speeds or
wavelengths. A FICON switch could have a mix of 8 Gbps, 16 Gbps, and 32 Gbps SFPs of both
850 nm shortwave and 1310 nm longwave. A 16 Gbps-capable FICON switch could be
upgraded to a 32 Gbps-capable switch, but continue to use mostly 16 Gbps SFPs, and upgrade
to 32 Gbps only when the channel or device can accept that speed.
Although it is not a best practice, it is possible to move the SFPs from an older device to the
new replacement to cut costs. The cost benefit must be balanced by considerations on the
maximum achievable speed and degradation of components. In fact, SFPs are constantly
transmitting “idle” characters to keep in sync when they are not involved in active I/O operations, so they
might fail sooner than the modules into which they are plugged.
In general, to get the best use from a new feature (such as a speed increase from 8 Gbps to
16 Gbps), all components must be upgraded. However, rolling upgrades are also a possibility,
particularly when there is one infrastructure element with a growth trend higher than others,
or when the lease life is different among elements. FICON Directors usually support features
such as higher speeds before channels and CU adapters do, so that the directors are not a bottleneck
that slows the migration of processors and storage. Directors can be purchased to support
processors, storage, or both when those elements are upgraded, either at the same time as
the switch or later.
Tape
As an example (and ignoring security and privacy exposures), imagine you are a bank, and
every evening you read in the day's charges from different credit cards, searching for charges
from people who have accounts in your bank so that you can deduct the charges from their
accounts. The charge history is coming as a flow from a source that you cannot control, so
you cannot pause for the deduct process and then resume the search. You have a “bucket of
accounts” for the program that is processing Visa, another bucket for the program that is
processing Mastercard, and another for AMEX. After all the programs end, you can look
through the buckets and process the charges for your customers. If these buckets are on disk
storage, how much space should you allocate? Using disks would require constant
monitoring of the size of the allocated data set and managing reallocation and movement to a
larger data set if it filled up. Tape provides a large amount of storage, which is useful when the
amount of required storage cannot be predetermined.
There are many programs that used the reconciliation processing model. IBM mainframe OSs
enable old programs to still run, so many companies still run those processes with a “why fix
what isn't broken” attitude. However, with advances in disk capacity and a massive decrease
in cost, virtual tape servers (VTSs) or virtual tape libraries (VTLs) were created that appeared
to the program as a tape drive because they respond to tape I/O commands, but the device is
a large amount of managed disk storage in front of physical tape drives. Disk storage is
smaller than tape and much faster in responding to commands (for example, the rewind
command does not have to physically move a long tape). Other uses of tapes such as
backups or offloading archived files are still common but much faster with VTLs. Cybernetics
and StorageTek made many VTLs. IBM had several virtual tape drive systems starting in
1996 with the 3495-B16 MagStar, and the latest one is the IBM TS7700, which no longer has any
physical tape attached, and in which flash storage replaced rotating disk storage to make it even
faster and more capacious.
Thus, tape devices are still vital in mainframe environments and are attached by using FICON
adapters, so their migration must be considered along with DASD, processors, and switches.
By the end of this chapter you will be making a remote connection and continuing the
remainder of the switch configuration by using the Data Center Network Manager (DCNM)
GUI or NX-OS command-line interface (CLI).
For the example in this book, we use a Windows 10 workstation running PuTTY. PuTTY is
open source software that is available to download and use for no charge within the terms of
its license.
IBM c-type switches do not come with a default IP address. Making an initial connection from
a workstation to a switch requires a serial (RS-232) port. Modern workstations no longer have
built-in serial ports, so you must obtain a USB-to-serial port adapter with a USB male
interface on one end, and a DB-9 male interface on the other end. Figure 6-1 shows an
example of such an adapter.
To install the adapter, follow the instructions that are provided with it. Then, you must connect
a rollover cable between the adapter on your workstation and the switch.
The accessory kit that is provided with the switch contains essential cables, including the
rollover cable. One end has a DB-9 female interface, and the other end has an RJ-45 male
interface. A rollover cable is not a straight-through or crossover cable. Neither of these cables
will work, so you must use a rollover cable with the correct pin-outs.
A rollover cable gets its name from the fact that it has opposite pin assignments on each
end of the cable. Typically, rollover cables come with a turquoise jacket, although other colors do exist.
Figure 6-2 on page 185 shows an example of a rollover cable.
Connect one end of the rollover cable to the serial port on your workstation and the other end
to the storage area network (SAN) switch. Figure 6-3 shows the serial port on an IBM Storage
Networking SAN50C-R switch. This port is also referred to as the console port (RS-232 port).
Note: Ensure that you connect to the correct port on the SAN switch because both the
serial (console) and Ethernet (management) ports are the same shape.
The serial port on your workstation receives a COMn number, where n has a numeric value,
for example, 1, 2, 3, and so on. Use the required tools within your OS to determine what
COMn port number is assigned. In Windows, this task can be done by using the Windows
Device Manager.
Use your terminal session client to set the serial port with the following parameters:
Speed = 9600
Data bits = 8
Stop bits = 1
Parity = None
Flow control = None
Note: The above settings work if the switch is set to its default parameters, which will be
the case if it is the first connection. If the settings do not work on an existing switch, then
check with your SAN administrator to discover the correct settings.
Figure 6-5 on page 187 shows the configuration in PuTTY. To get to these options, click
Serial, and then click Open.
If the switch is not turned on, then turn it on by connecting it to the power line. When the
switch initially starts, it goes through numerous checks, which scroll by on the screen until the
first prompt appears. The following examples show the initial setup on an unconfigured
IBM Storage Networking SAN384C-6 switch, starting at the first prompt after the boot
procedure completes.
Other switches have a similar start procedure. Most of the prompts show the default answer in
square brackets [y] or [n]. If the Enter key is pressed without you explicitly typing yes or no,
then the default answer is selected. To explicitly select yes or no, type yes, y, no, or n, and then
press Enter.
Register Cisco Multilayer Director Switch (MDS) 9000 Family devices promptly with
your supplier. Failure to register might affect response times for initial
service calls. MDS devices must be registered to receive entitled
support services.
Press Enter at any time to skip a dialog. Use ctrl-c at any time
to skip the remaining dialogs.
Would you like to enter the basic configuration dialog (yes/no): yes
After you enter the basic configuration dialog, you get a series of prompts, as shown in
Example 6-2. Depending on the version of code and options that you select, the prompts
might differ slightly from the ones in this example.
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Enter the type of drop to configure congestion/no_credit drop? (con/no) [c]: con
A summary of the basic configuration to be applied is shown in Example 6-3. If this summary
is not correct, select yes to go through the options again. Otherwise, select no to accept the
configuration and continue.
Example 6-4 shows the configuration being saved and applied, and then the login prompt
appears.
[########################################] 100%
Copy complete.
The initial setup is now complete. Log in to the switch and run the show hardware command to
verify that everything is as expected. Example 6-5 shows a truncated version of this
command and output.
Software
BIOS: version 2.6.0
kickstart: version 8.4(1a)
system: version 8.4(1a)
BIOS compile time: 05/17/2019
kickstart image file is: bootflash:///m9700-sf4ek9-kickstart-mz.8.4.1a.bin
kickstart compile time: 10/31/2019 12:00:00 [11/30/2019 19:14:41]
system image file is: bootflash:///m9700-sf4ek9-mz.8.4.1a.bin
system compile time: 10/31/2019 12:00:00 [11/30/2019 20:38:44]
Hardware
IBM SAN384C-6 8978-E08 (8 Module) Chassis ("Supervisor Module-4")
Last reset
Reason: Unknown
System version: 8.4(1a)
Service:
plugin
Core plug-in, Ethernet plug-in
--------------------------------
Switch hardware ID information
--------------------------------
Switch is booted up
Switch type is SAN384C-6 8978-E08 (8 Module) Chassis
Model number is 8978-E08
H/W version is 1.2
Part Number is 01FT565 E08
Part Revision is A0
Manufacture Date is Year 0 Week 13
Serial number is 000013C506E
CLEI code is CMM3N00ARA
--------------------------------
Chassis has 10 Module slots and 6 Fabric slots
--------------------------------
Module1 ok
Module type is 4/8/16/32 Gbps Advanced FC Module
0 submodules are present
Model number is 01FT644 48x32 FC
H/W version is 1.1
Part Number is 01FT644 48x32 FC
Part Revision is B0
Manufacture Date is Year 23 Week 12
Serial number is JAE23120F1A
SAN384C-6#
The remainder of the switch configuration can be continued remotely from your desk.
Configuration can be performed by using the GUI or CLI tools.
To use the CLI, you need an SSH client, which you use to connect to the switch by using the
IP address and user credentials that were configured in the initial setup.
For a GUI configuration, use the licensed version of DCNM; all the GUI configuration examples that are shown in this book use the licensed version of DCNM. The no-charge version of DCNM, starting at release 11.3, includes a 60-day trial of all licensed features. Starting at DCNM release 11.5, the trial period is up to 120 days.
There are several prerequisites for POAP to work, and the main ones are listed below. For a
full list, see the Cisco documentation.
A USB drive that is formatted with FAT32, or a combination of a DHCP server and a TFTP/SCP server. Either source must contain the configuration files and software images.
A switch that supports NX-OS Release 8.1(1b) or later.
No existing configuration on the switch (otherwise, it boots from this configuration).
The POAP feature is on by default. When a switch starts, if there is no onboard configuration
file to boot from, it checks for a USB drive in USB1. If the switch finds one and all the
conditions are met, it configures itself and starts from the configuration therein. If there is no
USB drive or the correct conditions are not met, then the switch looks for a DHCP server and
TFTP/SCP server. If these servers are found and the conditions are met, the switch
configures itself and starts from the configuration therein. If a POAP environment was
configured but the switch cannot configure itself from either method, then it is necessary to
troubleshoot until any problems are resolved.
Note: Allow access takes priority over deny access. For example, if user2 is assigned role4, which denies access to debug commands, but is also assigned role5, which allows access to debug commands, then user2 has access to debug commands.
Example 6-6 on page 195 shows the default roles that are configured along with the default
rules that are configured for each role.
Role: network-operator
Description: Predefined Network Operator group. This role cannot be modified.
VSAN policy: permit (default)
-------------------------------------------------
Rule Type Command-type Feature
-------------------------------------------------
1 permit show *
2 permit exec copy licenses
3 permit exec dir
4 permit exec ssh
5 permit exec terminal
6 permit config username
There are additional system-defined privileged roles that are not shown here. When a custom user role is created, it does not in itself permit access to any functions. Rules must be configured for the user role before it becomes functional.
Example 6-8 shows how to create a user, add a password, and set an expiry date.
By default, a new user is assigned the role of network-operator, but it is always best to
explicitly enforce it. For more information about creating users and user advanced security
such as password policies, see the online IBM or Cisco documentation.
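A minimal sketch of the NX-OS commands for these tasks follows; the password placeholder and the expiry date are illustrative values only:
SAN384C-6# configure terminal
SAN384C-6(config)# username itsouser password <password> role network-operator
SAN384C-6(config)# username itsouser expire 2022-12-31
SAN384C-6(config)# end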
Now, the developer role is created, but there are no rules that are assigned. Example 6-10
shows how to add rules to the developer role.
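A minimal sketch of the NX-OS commands for creating the role, adding rules, and assigning the role to the itsouser user follows. The rule numbers and features are illustrative only:
SAN384C-6# configure terminal
SAN384C-6(config)# role name developer
SAN384C-6(config-role)# rule 1 permit show
SAN384C-6(config-role)# rule 2 permit exec feature dir
SAN384C-6(config-role)# exit
SAN384C-6(config)# username itsouser role developer
SAN384C-6(config)# end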
Example 6-11 Committing changes to the database and distributing them to the entire fabric
SAN384C-6# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
The itsouser user now has both network-operator and developer privileges.
san-admin: Administers SAN features. This role was introduced with the Cisco Nexus 5000 series switches.
After you are logged in to the DCNM web client user interface, you see a window like
Figure 6-6. Your window might look slightly different depending on your version of DCNM and
access privileges. Click Administration.
Figure 6-6 User administration from within the DCNM web client user interface
The window that is shown in Figure 6-7 on page 199 opens. Click Local.
The window that is shown in Figure 6-8 opens. Click the + icon.
The window that is shown in Figure 6-10 on page 201 opens. Confirm that all the options are
correct, and then click Add.
The user is added and shows up in the list of users within DCNM, as shown in Figure 6-11.
Local users that are configured on DCNM can access only DCNM. The same local users can
also access the c-type switches only if they are explicitly configured on the switches
themselves.
The date and time can be set manually on individual devices, or it can be obtained from a reliable time source by using NTP. Your environment can use either method or a mix of both. Setting and maintaining time manually on individual devices is time-consuming, more prone to error, and likely to lead to inaccurate time at some point because the devices are not synchronized with each other. It is better to have a reliable time source to which all devices synchronize. To improve reliability, there should be multiple reliable time sources.
When using NTP, it is important to work within certain guidelines. The list below (not
exhaustive) shows some of these guidelines. The word server refers to both servers and
peers.
Always get permission to use upstream time servers.
Use time servers that are as close as possible to the time clients.
Spread the workload between different time servers so that no single one is overloaded.
Try to ensure that different upstream time servers get their time from different sources in
case that source (further upstream) goes down.
Configuring NTP
Start the DCNM web client and select Device Manager → Admin → NTP, as shown in
Figure 6-12.
In Figure 6-14, enter the IP address, mode (Server or Peer), and whether it is preferred. Click
Create.
Figure 6-15 on page 205 shows the newly created NTP Server or Peer. The dialog box
remains open to allow more NTP Servers or Peers to be created. Click Create or Close as
required.
Verifying NTP
There are several commands that can be run to determine the status of NTP. Example 6-23
shows one of them.
version 8.4(1a)
logging level ntp 6
ntp server 10.122.107.100 prefer
ntp server 10.122.107.101
ntp peer 10.122.107.96 prefer
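Other useful verification commands include the following sketch, which lists the configured peers and their synchronization status:
SAN384C-6# show ntp peers
SAN384C-6# show ntp peer-status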
At the time of writing, DCNM Server is distributed with Java Runtime Environment (JRE) 11.0.2. It installs into the following folder:
<dcnm_root>/java/jdk11
Note: DCNM version 11.3(1) does not support the Oracle 12c pluggable database version.
Note: Before installing DCNM, it is a best practice to create a DCNM Admin user locally on
all the switches. This user will be used by DCNM to log in and manage the switch
infrastructure.
The DCNM 11.3(1) installer comes packaged with a PostgreSQL database. The installer file
for the version that is being installed in this example is
dcnm-installer-x64-windows.11.3.1.exe.zip. Use PostgreSQL for production enterprise environments unless you have a supported Oracle database installation available and consider it to be a better option.
Figure 6-19 shows the OEM vendor options. Select IBM, and then click Next.
Note: The DCNM DB User password is a standard Windows user, so the password must
be compliant with the password policy.
A third remote authentication method is the Lightweight Directory Access Protocol (LDAP). This method can be configured only after DCNM is running, not during deployment. After remote authentication is configured, the local DCNM credentials no longer work. In our example, we selected the local database for user authentication. Click Next.
Start your web browser, and in the address field, type the DCNM IP address that was used during installation. If you changed the default port (443), you also must include the port in the address field. If DCNM does not start, read any error messages and check that all the required services are running.
The login window opens, as shown in Figure 6-34. Enter the username and password that were defined at installation time, and then click Login. Our example shows local authentication.
From this point onward, DCNM is ready to use. For a comprehensive guide to using DCNM,
see Cisco Nexus Dashboard Fabric Controller (Formerly DCNM).
The installation of the licenses for an IBM c-type switch is a two-step process. The first step is
to use the Product Activation Key (PAK) that is provided with the system to create the license
key files. After this task is done, these license key files must be moved to the switches and
installed by using either DCNM or the CLI.
Figure 6-38 shows the window where we select the Enter PAK or Token ID radio button and
enter the PAK number that is printed on the PAK letter. Click OK to continue.
Now, you see the newly added PAK in the window, as shown in Figure 6-40. Select this PAK
by checking the box in the first column and click Get Licenses.
Important: Enter the full serial number, including any leading zeros. It must be 11 digits.
Failure to do this task prevents the license from being installed.
As you see in Figure 6-46 on page 235, the PAK now shows as fulfilled because the license
file was delivered. The license file is packaged in compressed format, so before the file is
moved to the switch, it must be decompressed.
You must move the license file to the bootflash on the IBM c-type switch. There are several
applications that can be used to transfer files, but in this example we use WinSCP. Whichever file transfer tool you use, you should be familiar with it and know how to transfer files.
Launch WinSCP, as shown in Figure 6-47, and log in to your switch. There are several
protocols that you can use, and the one that you choose might depend on the policies in
place. In many cases, unsecure protocols such as TFTP are banned. In this example, we use
Secure File Transfer Protocol (SFTP).
The WinSCP navigation window opens. Go to the correct locations on both the local and
remote devices. Highlight the file to be transferred and drag it from the local to the remote
location. In Figure 6-50, this action will be from the pane on the left to the pane on the right.
As a best practice, copy the license file to the bootflash. After it is copied, the licenses are automatically duplicated to both supervisors if the device is a director (a switch has only one supervisor).
As a best practice, back up the license key file to a remote server in case it is needed again.
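A copy command similar to the following sketch can be used for this backup. The SFTP server address, user, and target path are hypothetical:
switch# copy bootflash:MDS20210127200818220.lic sftp://admin@10.1.1.50/licenses/MDS20210127200818220.lic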
The Inventory / Discovery / SAN Switches window opens, as shown in Figure 6-54. Click + to
add the first switch, which is also known as the seed switch.
After a short delay for discovery, the new switch appears in the inventory, as shown in
Figure 6-56. Click Topology.
On the SAN topology window that is shown in Figure 6-57 on page 243, you see the first
switch for the lab environment. Double-click the icon for the switch.
Now, you see the summary information for this switch in Figure 6-58. Click Show More
Details.
You see the Device view in DM for the switch. Select Admin → Licenses, as shown in
Figure 6-60.
In Figure 6-61 on page 245, you see the Licenses window. You can see all the licenses that are applicable to this model of IBM c-type switch, but no licenses are installed yet. Click the Install tab.
Figure 6-62 shows the Install tab for the Licenses window. Select the drop-down menu for
URI to see the licenses that are on the switch and available for installation. Select one of them
as required.
After a few seconds, the license installation is successful, as shown in Figure 6-64. Repeat
the same steps to install all the licenses for this switch.
Now, click the Features tab. In Figure 6-65 on page 247, you now see the three installed
licenses for the IBM Storage Networking SAN384C-6 switch for the lab environment.
To build the lab environment, add all the licenses for the other switches.
In the Switch Licenses window, click Upload License files. The window that is shown in
Figure 6-68 on page 249 opens.
In the Bulk Switch License Install window, ensure that the correct file transfer protocol is
selected. Select either TFTP, SCP, or SFTP to upload the license file. Not all protocols are
supported for all platforms. TFTP is supported for Windows or RHEL DCNM SAN installation,
but only SFTP and SCP are supported for all installation types. Click Select License File.
The window that is shown in Figure 6-69 opens.
Select one or more license files. In our example, we select only one file because the other
one was previously installed. After the file or files are selected, click Open.
Click Upload to upload the selected file or files. The license file is uploaded, and the switch IP
address to which the license is assigned is extracted along with the file name and feature list.
The window that is shown in Figure 6-71 shows this information.
Select the licenses to be installed. In our example, we have only one file, but we could have
selected multiple files. After the license is selected, the Install button becomes active, as
shown in Figure 6-72 on page 251.
Click Install Licenses. The window that is shown in Figure 6-73 opens. The initial status is
INSTALLING.
Log in to the switch by using an SSH client. In our examples, we used PuTTY. Example 6-24
lists the contents of the bootflash folder. We ensure that the correct file, which in our example
is MDS20210127200818220.lic, is there.
Example 6-25 shows how to run the license installation and view the license information.
MDS20190823150219038.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT SAN_ANALYTICS_PKG cisco 1.0 22-aug-2022 uncounted \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>L-D-M91S-AXK9</SKU> \
HOSTID=VDH=JPG224600GU \
NOTICE="<LicFileID>20190823150219038</LicFileID><LicLineID>1</LicLineID> \
<PAK></PAK>" SIGN=38DF102A41F0
INCREMENT FM_SERVER_PKG cisco 1.0 permanent uncounted \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M91ENTDCNMX-K9</SKU>
\
HOSTID=VDH=JPG224600GU \
NOTICE="<LicFileID>20190823150219038</LicFileID><LicLineID>2</LicLineID> \
<PAK></PAK>" SIGN=9BC81CC4EDE0
INCREMENT ENTERPRISE_PKG cisco 1.0 permanent uncounted \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M91ENTDCNMX-K9</SKU>
\
HOSTID=VDH=JPG224600GU \
NOTICE="<LicFileID>20190823150219038</LicFileID><LicLineID>3</LicLineID> \
<PAK></PAK>" SIGN=0067A8188CE6
MDS20210127200818220.lic:
SERVER this_host ANY
VENDOR cisco
INCREMENT PORT_ACTIV_9148T_PKG cisco 1.0 permanent 24 \
VENDOR_STRING=<LIC_SOURCE>MDS_SWIFT</LIC_SOURCE><SKU>M9148T-PL12</SKU> \
HOSTID=VDH=JPG224600GU \
NOTICE="<LicFileID>20210127200818220</LicFileID><LicLineID>1</LicLineID> \
<PAK></PAK>" SIGN=1EB58EDC2C26
MDS20210127200818220.lic
Feature              Ins   Lic Count   Status   Expiry Date   Comments
--------------------------------------------------------------------------------
FM_SERVER_PKG        Yes   -           Unused   never         -
ENTERPRISE_PKG       Yes   -           Unused   never         -
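For reference, the CLI steps behind these examples follow the general pattern in this sketch, which uses the license file name from our example on the target switch:
switch# dir bootflash: | include lic
switch# install license bootflash:MDS20210127200818220.lic
switch# show license usage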
Note: DCNM and DM installation is described in Chapter 6, “Initial connectivity and setup”
on page 183.
The IBM c-type family supports the FICON mainframe interface, which allows these switches to be used in FICON storage area network (SAN) environments.
For all the examples in the following sections and the HCD definitions, we use the topology
that is shown in Figure 4-1 on page 92.
Figure 7-1 shows the Hardware Configuration screen. We select option 1, Design, Modify, or
View Configuration Data.
Starting at the Design, Modify, or View Configuration Data screen, select option 2 - Switches,
as shown in Figure 7-2.
We specify the switch type as 2032, which means that we are defining a FICON Director.
For the installed port range, it is a best practice to enter the complete architectural port range
that is possible (0x00 - 0xFD). This range may be smaller, but there is no particular advantage
to making it smaller, and because FICON Port Addresses on IBM c-type switches are all
virtual, you have flexibility for the future.
We enter the switch control unit (CU) number and the switch device number. These items are
the definitions of the IBM Control Unit Port (CUP) device. In our example network, we set the
FICON CUP CU number and the CUP device number as the same, but this approach is not
required.
After completing all the fields as shown in Figure 7-4, we press Enter.
Figure 7-5 on page 259 shows the results of defining the first switch. It is expected that after
this switch is defined, HCD sends a message that the CUP CU and device definitions are
created, but we still have work to do on them. We describe this work after we define the
FICON channels and CUs.
Repeat this process for each of the FICON VSANs on the two switches in the test
environment. Add the following IDs:
Switch ID 0x21 for FICON VSAN 40 on switch IBM Storage Networking SAN192C-6
Switch ID 0x10 for FICON VSAN 50 on switch IBM Storage Networking SAN384C-6
Switch ID 0x11 for FICON VSAN 50 on switch IBM Storage Networking SAN192C-6
We do not need to create a switch definition within the mainframe hardware configuration for
VSAN 100 on either the IBM Storage Networking SAN384C-6 or IBM Storage Networking
SAN50C-R switches that are being used for disk replication because the data connection that
is used for this replication is Fibre Channel Protocol (FCP), not FICON.
7.2.2 Defining Channel Path IDs that are connected to the FICON switches
Now that all the FICON switches are defined to the hardware configuration, we must either
add or modify the existing Channel Path IDs (CHPIDs) to represent how they are connected
in our example network. Starting from the Design, Modify, or View Configuration Data screen,
we select option 3 Processors, and the resulting screen is shown in Figure 7-7.
We select the appropriate CSS number by putting an s next to it (in our environment, it is 0),
and press Enter. The resulting screen is shown in Figure 7-9.
Looking at the example network, the first channels that we must define are CHPIDs 18 and
20, which are used to access the IBM DS8870 system. In our example, these CHPIDs exist
and are being reused for this network. If new channels must be added, use the same dialog.
We type a / next to CHPID 18 and press Enter. Now, we see the action selection dialog for
this CHPID, as shown in Figure 7-10.
We type 2 for the Change option to get to the screen that is shown in Figure 7-11.
For FICON CHPIDs, we must confirm that the Channel path type is defined as FC. Because
in our environment we are sharing the channels between multiple logical partitions (LPARs),
we need the Operation mode to be SHR. We can optionally add a description.
Finally, we specify the Entry port as the FICON Port Address that is assigned to the switch interface to which the CHPID will be connected. The Entry port is also specified in
hexadecimal format. In our example network, the switch ID that is used for CHPID 18 is 0x20,
and the Entry port value is 0x00. We show later how to verify that the FICON Port Address on
the switch definition matches.
Figure 7-12 shows the CHPID change dialog after completion. Press Enter.
The next screen that we see is for the selection of the candidate LPARs, which is another
selection mechanism that is related to which LPARs are allowed to use a particular CHPID.
For our environment, we press Enter. Figure 7-14 shows this screen.
Now, we repeat this process for each FICON CHPID that we will be using in the example
network:
CHPID 20 is attached to FICON Port Address 0x30 on FICON VSAN 40, which is on an
IBM Storage Networking SAN384C-6 switch that uses switch ID 0x20.
CHPID 70 is attached to FICON Port Address 0x02 on FICON VSAN 50, which is on an
IBM Storage Networking SAN384C-6 switch that uses switch ID 0x10.
CHPID 78 is attached to FICON Port Address 0x07 on FICON VSAN 50, which is on an
IBM Storage Networking SAN384C-6 switch that uses switch ID 0x10.
CHPID 28 is attached to FICON Port Address 0x22 on FICON VSAN 40, which is on an
IBM Storage Networking SAN384C-6 switch that uses switch ID 0x20.
CHPID 30 is attached to FICON Port Address 0x42 on FICON VSAN 40, which is on an IBM Storage Networking SAN384C-6 switch that uses switch ID 0x20.
CHPIDs 70 and 78 are shown in Figure 7-17. Although these CHPIDs are connected to the same physical chassis (IBM Storage Networking SAN384C-6), the IBM Z hardware treats them as being connected to two different switches because they are in different VSANs.
In this section, we define a disk array that is attached to the same switch as the host channels
that will be accessing it. This configuration is known as a locally switched device. In the
example network, there is an IBM DS8870 disk array that is attached to an IBM Storage Networking SAN384C-6 switch with two ports in VSAN 40 (one at FICON Port Address 0x10
and the other at port address 0x40). The host channels that access this disk array are
CHPIDs 18 and 20, and they are connected to the same switch and VSAN, and FICON Port
Addresses 0x00 and 0x30.
We start at the Design, Modify, or View Configuration Data screen and select option 4 for
CUs. The resulting screen is shown in Figure 7-18.
For our network, we are adding a disk array (as opposed to modifying an existing one), so we
press F11 to add it.
For this new CU, we must define the type and characteristics of the device and the
connectivity. In our example, we enter the CU number as 9A00 and specify the device type as
2107. We may enter a description and serial number if we want.
Now, we define the connectivity from the host to this disk CU. We have two ports from the
host going to this CU, and they are both connected to FICON VSAN 40 on an IBM Storage
Networking SAN384C-6 switch, which is defined as domain 0x20. The FICON VSAN number
is not specified anywhere in this screen because HCD and the hardware configuration do not
have visibility to VSANs. For each place where we enter the switch number, we must also enter the FICON Port Address to which the disk array port is connected.
The fields in the Add Control Unit dialog are shown in Figure 7-19. Now, we press Enter to
move to the next screen.
Now, we see the Select Processor / CU screen. Here, we tell the host configuration which
host CHPIDs will be performing I/O to the disk ports that we defined on the previous screen.
For the IBM Z I/O subsystem (IOS), there may be up to eight parallel paths to each CU, and you
can specify them in this screen. These CHPID through switch-to-disk connections are
configured as <CC.SSPP>, where CC is the 1-byte CHPID number, SS is the CU switch ID, and
PP is the FICON Port Address for the switch interface where the disk port is attached.
For our example network, we input 18.2010 and 20.2040, which means that this mainframe
can initiate I/O from CHPID 18 and send it to the switch that it is connected to (in this case,
switch ID 0x20) and that the frames that are associated with this I/O are directed to the device
that is connected to FICON Port Address 0x10 on switch ID 0x20.
Figure 7-20 on page 269 shows the completed dialog. Press Enter.
Now, we must define the CU address (CUADDR), base device address, and address range
for this CU. These values are provided by your disk administrator. The values for our example
network are shown in Figure 7-21. We press Enter twice.
Now that the disk CU is defined, the associated disk devices also must be defined. That topic is beyond the scope of this book, but it is a well-known process, and there are no associations that must be made between the device definition and the definitions for the FICON SAN.
We use HCD to define the connection between CHPID 28, which is attached to FICON Port
Address 0x22 on the IBM Storage Networking SAN384C-6 switch, and the disk port, which is
attached to FICON Port Address 0x0A on the IBM Storage Networking SAN192C-6 switch.
The fact that this connection crosses one or more Inter-Switch Links (ISLs) will not be defined
as part of this process, but it is implied.
We start at the Design, Modify, or View Configuration Data screen and select option 4 for
CUs. The resulting screen looks like Figure 7-18 on page 267. We press F11 to add a CU.
Like the CU that was configured earlier, we input the CU number as 8A00 and the device type
as 2107. We input a description, and for now leave the serial number blank.
Now, we define the connectivity from the host to this cascaded disk CU. On this first screen,
we define the two disk ports that are both connected to FICON VSAN 40 on the IBM Storage
Networking SAN192C-6 switch, which is defined with domain 0x21. For each place where we enter the switch number, we must also input the FICON Port Address to which the disk array port is connected.
Now, we see the Select Processor/CU screen, which is shown in Figure 7-24.
We must tell the host configuration that host CHPIDs 28 and 30 will be connecting to these
two disk ports. The way that the host configuration knows that this CU is a cascaded CU is
that CHPIDs 28 and 30 are defined as being attached to switch ID 0x20, and the new disk
ports that we are defining are attached to switch ID 0x21.
This mainframe can initiate I/O from CHPID 28 and send it to the switch that it is connected to
(in this case, switch ID 0x20) and that the frames that are associated with this I/O will be
directed to the device that is connected to FICON Port Address 0x0A on switch ID 0x21. The
routing of the frames after they enter the entry switch (where the CHPID is attached) until they
exit from the destination switch is handled by the switches, and the mainframe has no visibility
of this route.
Again, we see that we must define the CUADDR, the base device address, and address
range for this CU. The values for our example network are shown in Figure 7-25. Press Enter
twice to continue.
The CU is defined. After scrolling through the resulting screen, we can see the CU
information. As before with the locally switched CU, the device configuration must be created
and linked to the CU.
The CUs for the IBM TS7760 must be created in a similar way. Because the device type for the IBM TS7760 is different, there are minor differences in the definition, but the concepts of how the CHPIDs, switches, and CU FICON Port Addresses appear in the screens are the same.
Note: There is no difference in defining cascaded CUs that are connected over FCIP links
versus the ones that are connected over more conventional FC ISLs because the
mainframe configuration does not have any visibility into the switch-to-switch connections.
Note: In most cases, the CU number would not be the same as the device numbers that are defined under it. Because there is a 1:1 relationship between the CUP CU number and the CUP device number, this configuration is acceptable, and it is the convention that is used in our lab environment.
We define which FICON channels will be used to communicate with the CUP device on each
of the switches. We also define which LPARs will have access to these devices. We start at
the Design, Modify, or View Configuration Data screen and select option 4 for CUs. Then, we
scroll down until CU EF20 is seen, as shown in Figure 7-26.
We press Enter and see the information for the CUP CU, as shown in Figure 7-28.
All the information that was populated as a result of the earlier definition is correct. If you
want, you can add the serial number for the FICON VSAN. In our example, we leave this
serial number field blank for now and press Enter again, as shown in Figure 7-29 on
page 275.
We enter the CHPID and link address paths for this CU. In our example, we define the CUP
devices to be accessible from both CHPID 18 and CHPID 20. The same pattern of <CC.SSPP>
is used, where CC is the 1-byte CHPID number, SS is the CU switch ID, and PP is the FICON
Port Address for the switch internal port. For the CUP CU, we always use the reserved value
of 0xFE for the port (PP), and SS is the switch ID that we are defining (0x20). Because both
CHPIDs have the same destination, we enter 18.20FE and 20.20FE, as shown in
Figure 7-30, and then press Enter.
In Figure 7-32, we confirm the device parameter from the earlier screens, so we press Enter.
In Figure 7-33 on page 277, we verify our settings and press Enter.
We are back on the Control Unit List screen, but now the CUP CU shows as being attached to a CSS. Next, we must work on the device definition. We return to the Design, Modify, or View
Configuration Data screen and select option 5 for I/O Devices. When we scroll down to device
EF20, we see it as shown in Figure 7-34.
On this screen, we verify that the device number is connected to the correct CU, which is
EF20 for both values in this case. Press Enter to see the screen that is shown in Figure 7-36.
On this screen, we select the processor and CSS combination with a / next to the line and
press Enter to move to the next screen (Figure 7-37 on page 279).
Because the information is correct for our environment, we press Enter to go to the screen
that is shown in Figure 7-38.
Mark the needed OS configuration with a /, press Enter, and then choose option 1 Select to
connect, as shown in Figure 7-40.
We choose the device parameters and features to configure, which are modified to match our
lab standards that are shown in Figure 7-41 on page 281. Press Enter.
We see the dialog that is shown in Figure 7-42 for attaching system-defined esoterics to the
CUP device. CUP devices are not part of any normal esoteric group, so nothing is selected.
Press Enter.
We are back at the I/O Device list screen, as shown in Figure 7-44. We see that the CUP
device is complete because it has both CSS and OS connections. This system is ready to talk
to the CUP device for switch ID 0x20.
So that the CUP devices on each of the FICON VSANs within each switch can talk to this
system, we also complete the CU and I/O device definitions in the same way.
The Input/Output Configuration Program (IOCP) data set that is generated by HCD for the
input is shown in Figure 7-45.
Select Topology, double-click SAN384C-6, and then select Show more details, as shown in
Figure 7-46.
Figure 7-51 shows that fabric-binding (highlighted in blue) was changed to enable. Click
Apply to activate the feature.
Now that we have enabled the Fabric Binding feature, we repeat the same steps for the
FICON feature, as shown in Figure 7-53.
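The CLI equivalent of enabling these two features is shown in the following sketch:
SAN384C-6# configure terminal
SAN384C-6(config)# feature fabric-binding
SAN384C-6(config)# feature ficon
SAN384C-6(config)# end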
The IBM c-type switch allows up to eight FICON VSANs. In addition, all the ports have virtual
port addresses.
Note: There is no affinity between the physical location of a port within the switch and the FICON Port Address. Any port address can be at any physical location in the switch, provided that there is no duplicate port address within a single VSAN.
For this configuration step, we start with DM, as shown in Figure 7-55.
Note: With the FICON sunglasses button in DM, you can toggle between the Standard
view and the FICON view. The Standard view numbers the ports as they are physically on
the hardware line card. The FICON view numbers the ports with the FICON Port
Addresses that are used for FICON routing.
To start the configuration of the FICON Port Address layout, select FICON → Port Numbers,
as shown in Figure 7-57. By default, each c-type switch allocates 48 physical FICON Port
Addresses per slot up to 0xEF. FICON Port Addresses can be arranged in any order to meet
the requirements of the environment.
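If a different layout is required, the per-slot FICON Port Address assignment can also be changed from the CLI. The following is a minimal sketch with an illustrative range; check the FICON configuration guide for your NX-OS release before changing the default layout:
SAN384C-6# configure terminal
SAN384C-6(config)# ficon slot 1 assign port-numbers 0-47
SAN384C-6(config)# end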
Figure 7-58 on page 293 shows the FICON Port Numbers Logical tab. By default, the switch
allocates the last 14 FICON Port Addresses for the logical pool. In a case where the switch is
not using ISLs, the logical port addresses can be removed and reallocated as physical port
addresses. Because our lab environment has several FCIP links and port channels, we use
the default for logical port addresses.
Important: VSAN 1 should never be used with FICON or open systems devices. New
custom VSANs should be created when configuring environments.
Figure 7-60 on page 295 shows the VSAN window. Click Create to enter the VSAN
information.
Figure 7-61 shows the Create VSAN window. To define a FICON VSAN, select FICON, which
changes the defaults to match the FICON VSAN characteristics.
Provide the VSAN ID, name, and the domain ID, and click Create.
In Figure 7-63 on page 297, we validate that all required FICON parameters will be applied
upon VSAN creation. This action is disruptive only when a VSAN is under modification. Click
Yes to continue.
Note: The VSAN disruption message refers to the VSAN under modification. Other VSANs
are not impacted.
Figure 7-64 shows that the VSAN was created successfully. We can use the same window to
create more FICON VSANs.
Figure 7-70 on page 303 shows the VSAN creation for our lab environment on the other switch. The VSAN IDs are the same, but the domain IDs are different because the VSANs on the two switches form a single fabric, and each switch must have a unique domain ID within that fabric.
Important: Never use VSAN 1 for FICON or open systems devices. As a best practice,
create custom VSANs when configuring these types of environments.
vsan 40 information
name:FICON_Disk state:active
interoperability mode:default
loadbalancing:src-id/dst-id
operational state:up
vsan 4079:evfp_isolated_vsan
vsan 4094:isolated_vsan
SAN384C-6(config-ficon)# end
Performing fast copy of configurationdone.
SAN384C-6#
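The approximate CLI equivalent of creating a FICON VSAN is shown in the following sketch for VSAN 40 with domain ID 32 (0x20) on the SAN384C-6 switch. Verify the complete list of FICON VSAN prerequisites, such as fabric binding, against the NX-OS FICON configuration guide for your release:
SAN384C-6# configure terminal
SAN384C-6(config)# vsan database
SAN384C-6(config-vsan-db)# vsan 40 name FICON_Disk
SAN384C-6(config-vsan-db)# exit
SAN384C-6(config)# in-order-guarantee vsan 40
SAN384C-6(config)# fcdomain domain 32 static vsan 40
SAN384C-6(config)# ficon vsan 40
SAN384C-6(config-ficon)# end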
Starting at Device Manager, select Security → Fabric Binding for the IBM Storage
Networking SAN192C-6 switch, as shown in Figure 7-71.
Figure 7-73 shows the Create Fabric Binding Config Database window. To populate this
window, we open the Fabric Binding window on the IBM Storage Networking SAN384C-6
peer switch so that we can copy its local sWWN.
Figure 7-74 on page 307 shows the Fabric Binding Database window for the IBM Storage
Networking SAN384C-6 switch. Capture the local sWWN information, which will be used to
create the Database entry on the IBM Storage Networking SAN192C-6 switch.
Select VSAN 40 from the drop-down menu, enter the domain ID 0x20, and paste the local
sWWN for the IBM Storage Networking SAN384C-6 peer switch that we captured. Click
Create, as shown in Figure 7-75.
Select VSAN 40 from the drop-down menu, enter the domain ID 0x21, and paste the local
sWWN for the IBM Storage Networking SAN192C-6 peer switch. Click Create, as shown in
Figure 7-77.
Figure 7-78 on page 309 shows the entries for local and remote switches in VSAN 40.
Important: It is critical to validate that the Fabric Binding database is the same on each
switch. If there is a mismatch, the FICON VSAN will not be allowed to become active
between the switches.
On the Action tab, select ForceActivate from the drop-down menu for VSAN 40, as shown in Figure 7-79. This action activates the Fabric Binding database. Click Apply. This step must be repeated on the IBM Storage Networking SAN384C-6 peer switch.
SAN192C-6#
SAN192C-6# show wwn switch
Switch WWN is 20:00:00:2a:6a:a4:1a:80
SAN192C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# fabric-binding database vsan 40
SAN192C-6(config-fabric-binding)# swwn 20:00:00:2a:6a:a4:1a:80 domain 0x21
SAN192C-6(config-fabric-binding)# swwn 20:00:00:3a:9c:31:62:80 domain 0x20
SAN192C-6(config-fabric-binding)#
SAN192C-6(config-fabric-binding)# fabric-binding activate vsan 40 force
SAN192C-6(config)# end
Performing fast copy of configurationdone.
SAN192C-6# show fabric-binding database vsan 40
--------------------------------------------------
Vsan Logging-in Switch WWN Domain-id
--------------------------------------------------
40 20:00:00:2a:6a:a4:1a:80 0x21(33) [Local]
40 20:00:00:3a:9c:31:62:80 0x20(32)
[Total 2 entries]
SAN192C-6# show fabric-binding database active vsan 40
--------------------------------------------------
Vsan Logging-in Switch WWN Domain-id
--------------------------------------------------
40 20:00:00:2a:6a:a4:1a:80 0x21(33) [Local]
40 20:00:00:3a:9c:31:62:80 0x20(32)
[Total 2 entries]
SAN192C-6#
Figure 7-81 shows the Topology view of the two lab switches with no FICON ISLs.
Using Device Manager, as shown in Figure 7-82, double-click fc1/48 on the IBM Storage
Networking SAN192C-6 switch to open the Interface window.
When configuring an ISL interface, it is a best practice to provide a description that references
both sides of the link. Change the Mode to E, Speed to 32 Gbps, and Admin Status to Up, and click Apply, as shown in Figure 7-83 on page 313.
We select the Trunk Config tab and populate the allowed VSANs for this ISL, in this case
VSANs 1 and 40, as shown in Figure 7-84. Click Apply.
Note: As a best practice, trunk VSAN 1 in addition to the applicable FICON VSANs on all
FICON ISLs.
Now, we create a second ISL by using ports fc2/48 on each switch in the same manner.
Figure 7-86 shows the ISLs coming online. When the port is double-clicked, you can see that
VSAN 40 is UP.
Figure 7-87 on page 315 shows the Topology view of the two ISLs between our lab switches.
version 8.4(1a)
interface fc1/48
switchport speed 32000
switchport mode E
switchport trunk allowed vsan 1
switchport trunk allowed vsan add 40
no shutdown
SAN384C-6#
SAN192C-6#
SAN192C-6#
SAN192C-6# conf t
SAN192C-6#
SAN192C-6# show run interface fc1/48
version 8.4(1a)
interface fc1/48
switchport speed 32000
switchport mode E
switchport trunk allowed vsan 1
switchport trunk allowed vsan add 40
no shutdown
SAN192C-6#
SAN384C-6#
SAN384C-6# config t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# interface fc2/48
SAN384C-6(config-if)# switchport mode E
SAN384C-6(config-if)# switchport speed 32000
SAN384C-6(config-if)# switchport trunk allowed vsan 1
Warning: This command will remove all VSANs currently being trunked and trunk only
the specified VSANs.
Do you want to continue? (y/n) [n] y
SAN384C-6(config-if)# switchport trunk allowed vsan add 40
SAN384C-6(config-if)# no shutdown
SAN384C-6(config-if)# end
Performing fast copy of configurationdone.
SAN384C-6#
SAN384C-6# show topology vsan 40
SAN192C-6#
SAN192C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# interface fc2/48
SAN192C-6(config-if)# switchport mode E
To configure a Port Channel by using the DCNM, click Configure, and under SAN, click Port
Channel, as shown in Figure 7-88. Click Create New Port Channel.
In our lab environment, there is a single fabric with one pair of connected switches. We have
selected two switches, IBM Storage Networking SAN192C-6 and IBM Storage Networking
SAN384C-6, as shown in Figure 7-90. Click Next.
In Figure 7-91 on page 321, we leave the ISLs that are listed under Selected for the port
channel being created. Click Next.
Figure 7-93 shows that converting ISLs to port channels can be disruptive. Click Yes to create
the port channel.
Figure 7-94 on page 323 shows that port channel configuration was applied successfully.
Figure 7-95 shows the Topology view and that the port channel between the IBM Storage
Networking SAN384C-6 and IBM Storage Networking SAN192C-6 switches was created
successfully.
Example 7-5 Configuring a Fibre Channel port channel by using the CLI
SAN384C-6#
SAN384C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)#
SAN384C-6(config)# int port-channel 5
SAN384C-6(config-if)# switchport mode E
SAN384C-6(config-if)# switchport speed 32000
SAN384C-6(config-if)# switchport description To SAN192C-6
SAN384C-6(config-if)#
SAN384C-6(config-if)# switchport trunk allowed vsan 1
Warning: This command will remove all VSANs currently being trunked and trunk only
the specified VSANs.
Do you want to continue? (y/n) [n] y
SAN384C-6(config-if)# switchport trunk allowed vsan add 40
SAN384C-6(config-if)#
SAN384C-6(config-if)# ficon portnumber 0xf0
SAN384C-6(config-if)#
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)# end
Performing fast copy of configurationdone.
SAN384C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# interface fc1/48
SAN384C-6(config-if)# channel-group 5 force
fc1/48 added to port-channel 5 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
SAN384C-6(config-if)#
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)#
SAN384C-6(config-if)# interface fc2/48
SAN384C-6(config-if)# channel-group 5 force
fc2/48 added to port-channel 5 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
SAN384C-6(config-if)#
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)#
SAN384C-6(config-if)# end
Performing fast copy of configurationdone.
SAN384C-6#
SAN384C-6# show interface port-channel 5
port-channel5 is trunking
Port description is To SAN192C-6
Hardware is Fibre Channel
Port WWN is 24:05:00:3a:9c:31:62:80
Admin port mode is E, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
SAN192C-6#
SAN192C-6# config t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# int port-channel 5
SAN192C-6(config-if)# switchport mode E
SAN192C-6(config-if)# switchport speed 32000
SAN192C-6(config-if)# switchport description To SAN384C-6
SAN192C-6(config-if)# switchport trunk allowed vsan 1
Warning: This command will remove all VSANs currently being trunked and trunk only
the specified VSANs.
Do you want to continue? (y/n) [n] y
SAN192C-6(config-if)# switchport trunk allowed vsan add 40
SAN192C-6(config-if)#
SAN192C-6(config-if)# ficon portnumber 0xf0
SAN192C-6(config-if)#
SAN192C-6(config-if)# no shut
Note: Some configuration steps might vary depending on your environment requirements.
In DCNM, select Topology → Switch → Device Manager. The General tab of the interface characteristics window opens. For this port, we will be connecting a 16 Gbps mainframe channel. Start by entering the description of the CHPID, Physical Channel ID (PCHID), and Port VSAN manually or by using the drop-down menu for the available VSANs. It is a best practice to use Forward Error Correction (FEC) for a 16 Gbps mainframe channel connection, so we set the interface speed to 16 Gbps under the Speed category, as shown in Figure 7-96 on page 327. Click Apply.
Click the Other tab and select Up for both Admin FEC and Admin FEC TTS, as shown in Figure 7-97. This setting allows the switch to perform FEC negotiation with the CHPID when it comes online. Click Apply.
From the mainframe console, we configure the CHPID as online, as shown in Figure 7-99.
Figure 7-100 on page 329 shows that the mainframe CHPID is logged in at 16 Gbps (based on the operational speed and status).
To view the FICON Request Node Identification (RNID) information, click the FICON tab, as
shown in Figure 7-101. As we can see, this node is CHPID 18 on the IBM2965 mainframe
with serial number E8F77, according to the various fields in the RNID. We also can validate
that FEC is operational between the mainframe and the switch. Click Close.
Figure 7-103 shows a summary of all online devices. The first interface column shows both
physical interface fc1/1 on the switch and the FICON Port Address (00) as it would be viewed
from the mainframe host. The RNID information for the CHPID ports is displayed in the
Connected To column.
We configure the IBM storage array ports at FICON Port Addresses 0x10 and 0x40 by using
the same process and bring them online. Figure 7-104 on page 331 shows the IBM storage
ports as being online and identified as CU ports.
Figure 7-105 shows a summary of all online devices. On the IBM storage array CU ports, the
FICON Port Addresses are the same as referenced in the mainframe hardware configuration.
Figure 7-106 Mainframe view of the device paths that are online for CHPIDs 18 and 20
We define the cascaded CHPIDs for the lab environment, as shown in Figure 7-107.
On the IBM Storage Networking SAN192C-6 switch, we configure the cascaded storage array
ports at FICON Port Addresses 0x0A and 0x4A and bring them online. Figure 7-108 on
page 333 shows the cascaded storage ports as being online and identified as CU ports.
Figure 7-109 shows a summary of all the online devices on the IBM Storage Networking
SAN192C-6 switch.
Important: For security reasons, mainframe channels come online only in FICON VSANs.
Configuring FICON mainframe disk and tape interfaces by using the CLI
Example 7-6 shows how to configure the FICON mainframe disk and tape interfaces by using
the CLI.
SAN384C-6#
SAN384C-6# config t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# vsan database
SAN384C-6(config-vsan-db)# vsan 40 interface fc1/1
SAN384C-6(config-vsan-db)# exit
SAN384C-6(config)#
SAN384C-6(config)# interface fc1/1
SAN384C-6(config-if)# switchport description CHPID 18 (PCHID 118)
SAN384C-6(config-if)# switchport mode F
SAN384C-6(config-if)# switchport speed 16000
SAN384C-6(config-if)#
SAN384C-6(config-if)# switchport fec
SAN384C-6(config-if)# switchport fec tts
SAN384C-6(config-if)#
SAN384C-6(config-if)# no shutdown
SAN384C-6(config-if)# exit
SAN384C-6(config)#
SAN384C-6(config)# vsan database
SAN384C-6(config-vsan-db)# vsan 40 interface fc2/1
SAN384C-6(config-vsan-db)# exit
SAN384C-6(config)#
SAN384C-6(config)# interface fc2/1
SAN384C-6(config-if)# switchport description CHPID 20 (PCHID 11C)
SAN384C-6(config-if)# switchport mode F
SAN384C-6(config-if)# switchport speed 16000
SAN384C-6(config-if)# switchport fec
SAN384C-6# config t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# vsan database
SAN384C-6(config-vsan-db)# vsan 40 interface fc1/17
SAN384C-6(config-vsan-db)# vsan 40 interface fc2/17
SAN384C-6(config-vsan-db)# exit
SAN384C-6(config)#
SAN384C-6(config)# interface fc1/17
SAN384C-6(config-if)# switchport description IBM 8870 Port 1
SAN384C-6(config-if)# switchport mode F
SAN384C-6(config-if)# switchport speed 16000
SAN384C-6(config-if)# switchport fec
SAN384C-6(config-if)# switchport fec tts
SAN384C-6(config-if)# no shutdown
SAN384C-6(config-if)#
SAN384C-6(config-if)# interface fc2/17
SAN384C-6(config-if)# switchport description IBM 8870 Port 2
SAN384C-6(config-if)# switchport mode F
SAN384C-6(config-if)# switchport speed 16000
SAN384C-6(config-if)# switchport fec
SAN384C-6(config-if)# switchport fec tts
SAN384C-6(config-if)# no shutdown
SAN384C-6(config-if)#
SAN384C-6(config-if)# end
Performing fast copy of configurationdone.
SAN384C-6# show int fc1/17
fc1/17 is up
Port description is IBM 8870 Port 1
Hardware is Fibre Channel, SFP is long wave laser cost reduced
Port WWN is 20:11:00:3a:9c:31:62:80
Admin port mode is F, trunk mode is on
snmp link state traps are enabled
Port mode is F, FCID is 0x201000
Port vsan is 40
Admin Speed is 16 Gbps
Operating Speed is 16 Gbps
Rate mode is dedicated
Port flow-control is R_RDY
SAN384C-6#
SAN384C-6# show ficon vsan 40 portaddress 0x10
Port Address 16(0x10) is up in vsan 40
Port number is 16(0x10), Interface is fc1/17
Port name is
Port is not admin blocked
Prohibited port addresses are 255(0xff)
Admin port mode is F
Port mode is F, FCID is 0x201000
Peer is type 002107 model 961 manufactured by IBM
Serial num is 0000000CPZ11, FICON tag is 0x0030
SAN384C-6#
When configuring IPsec and IKE, two security associations (SAs) are required, one for outbound and one for inbound communication, to establish bidirectional communication between the two participating switches so that they can encrypt and decrypt IP packets. The security association database (SAD) stores the sets of SA records.
IPsec
IPsec provides data confidentiality, data integrity, and data authentication between two
participating switches. IPsec provides IP layer security services that protect one or more data
flows between a pair of switches that are connected over an FCIP tunnel. IPsec in
combination with IKE generates encryption and authentication keys. IPsec provides security
for transmission at the network layer to protect authenticated IP packets between switches.
IPsec protects data that is transmitted across public networks from observation, modification, and spoofing, which enables virtual private networks (VPNs), intranets, extranets, and remote-user access.
The Encapsulating Security Payload (ESP) protocol, which is a member of the IPsec suite,
encapsulates the data to be protected and provides data privacy services, data
authentication, and optional anti-replay services.
IKE negotiates IPsec SAs and generates keys for switches that use the IPsec feature. It also allows you to refresh IPsec SAs, which provides dynamic authentication of peers and anti-replay services while supporting a manageable and scalable IPsec configuration.
Note: When implementing IPsec and IKE, each GbE interface on the
IBM Storage Networking SAN50C-R switch and the 24/10-Port SAN Extension Module
must be configured in its own IP subnet to ensure that the IPsec tunnel works.
Note: A best practice for configuring an IP route is to configure a static route to each GbE
interface by designating a subnet mask of 255.255.255.255 when you add the route.
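Enabling the IPsec and IKE features from the CLI follows the general pattern in this running-configuration style sketch. The preshared key and peer address are placeholders:
feature crypto ike
feature crypto ipsec
crypto ike domain ipsec
  key <preshared-key> address 10.1.2.1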
To create and manage FCIP links with DCNM 11, use the FCIP wizard. Make sure that you
can ping the GbE interfaces from local and remote switches to verify connectivity.
Important: If you encounter a problem with your configuration, do not automatically restore
the FCIP and IPsec configuration backup because it restores the entire switch
configuration, which might impact the existing FCIP links that function properly. For help,
contact IBM Support.
To create FCIP links by using the FCIP wizard, complete the following steps:
1. Go to the Welcome window.
2. Select the switch pairs.
3. Specify the IP Address/Route.
4. Specify the Tunnel Properties.
5. Create the FCIP ISL.
7.2.16 Configuring FCIP links per IPS port by using the DCNM
This section shows how to configure a single FCIP link on an IPS port by using the Cisco
DCNM wizard.
We select the two switch end points that we will use to create our FCIP tunnel, as shown in
Figure 7-112.
Figure 7-113 on page 341 shows the selection of which 10 GbE IPS port will be used for the new FCIP tunnel interface that provides physical connectivity between our two end-point switches, IBM Storage Networking SAN384C-6 and IBM Storage Networking SAN192C-6, by using the 24/10-Port SAN Extension Module. We also select Jumbo Frames. Click Next.
As shown in Figure 7-114, we provide the IP addresses for each of the IP storage ports on
each end point. Routes are not needed in this example because the IP addresses are in the
same subnet. Click Next.
In the tunnel properties window, we select Measure RTT (round-trip time), as shown in
Figure 7-116. Click Close to continue.
Important: Measure RTT tests the network connection and reports the time that it takes for a packet to cross the network and return an acknowledgment.
Note: As a best practice, keep the Profile and Tunnel IDs the same on both switches when
creating an FCIP configuration. In addition, when FCIP links carry FICON traffic, the
FICON Port Addresses should be the same.
Figure 7-119 on page 345 shows a summary of the configuration that will be applied to create
the single FCIP link. To proceed, click Finish.
Figure 7-120 shows that the configuration successfully completed. To proceed, click OK and
then Close.
Configuring the FCIP link on the second IPS port by using the CLI
Example 7-8 shows how to configure an FCIP link on the second IPS port by using the CLI.
Example 7-8 Configuring a single FCIP link on the second IPS port by using the CLI.
SAN384C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# feature fcip
SAN384C-6(config)# end
Performing fast copy of configurationdone.
SAN384C-6#
SAN384C-6# show feature | incl fcip
fcip 1 enabled
SAN384C-6#
SAN384C-6# config t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# interface IPStorage7/2
SAN384C-6(config-if)# switchport mtu 2300
SAN384C-6(config-if)# ip address 10.1.2.2 255.255.255.0
SAN384C-6(config-if)# no shutdown
SAN384C-6(config-if)#
SAN384C-6(config-if)# fcip profile 110
SAN384C-6(config-profile)# ip address 10.1.2.2
SAN384C-6(config-profile)#
SAN384C-6(config-profile)# tcp max-bandwidth-mbps 10000
min-available-bandwidth-mbps 9500 round-trip-time-ms 1
SAN384C-6(config-profile)#
SAN384C-6(config-profile)# interface fcip110
SAN384C-6(config-if)# use-profile 110
SAN384C-6(config-if)#
SAN384C-6(config-if)# peer-info ipaddr 10.1.2.1
-------------------------------------------------------------------------------
Tun prof IPS-if peer-ip Status T W T Enc Comp Bandwidth rtt
E A A max/min (us)
-------------------------------------------------------------------------------
100 100 IPS7/1 10.1.1.1 TRNK Y N N N N 10000M/9500M 1000
110 110 IPS7/2 10.1.2.1 TRNK Y N N N N 10000M/9500M 1000
SAN192C-6#
SAN192C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# interface IPStorage6/2
SAN192C-6(config-if)# switchport mtu 2300
SAN192C-6(config-if)# ip address 10.1.2.1 255.255.255.0
SAN192C-6(config-if)# no shutdown
SAN192C-6(config-if)#
SAN192C-6(config-if)# fcip profile 110
SAN192C-6(config-profile)# ip address 10.1.2.1
SAN192C-6(config-profile)#
SAN192C-6(config-profile)# tcp max-bandwidth-mbps 10000
min-available-bandwidth-mbps 9500 round-trip-time-ms 1
SAN192C-6(config-profile)# interface fcip110
SAN192C-6(config-if)# use-profile 110
SAN192C-6(config-if)# peer-info ipaddr 10.1.2.2
SAN192C-6(config-if)# tcp-connections 5
Select the switch pairs IBM Storage Networking SAN192C-6 and IBM Storage Networking
SAN384C-6 to participate in the Port Channel creation by using previously created ISLs, as
shown in Figure 7-123 on page 351.
Select fcip 100 and fcip 110 to provide ISL redundancy within the FCIP Port Channel, as
shown in Figure 7-124.
Figure 7-126 shows that Port Channel creation completed successfully. Click Close.
SAN384C-6#
SAN384C-6# show topology vsan 50
Example 7-10 Configuring a Port Channel on the IBM Storage Networking SAN192C-6 partner switch
by using the CLI
SAN192C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# interface port-channel 100
SAN192C-6(config-if)# switchport tr
trunk trunk-max-npiv-limit
SAN192C-6(config-if)# ficon portnumber 0xF5
SAN192C-6(config-if)# switchport trunk allowed vsan 1
Warning: This command will remove all VSANs currently being trunked and trunk on
ly the specified VSANs.
Do you want to continue? (y/n) [n] y
SAN192C-6(config-if)# switchport trunk
trunk trunk-max-npiv-limit
SAN192C-6(config-if)# switchport trunk allowed vsan add 50
SAN192C-6(config-if)# no shut
SAN192C-6(config-if)# int fcip 100
SAN192C-6(config-if)# channel-group 100 force
fcip100 added to port-channel 100 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
SAN192C-6(config-if)# interface fcip 110
SAN192C-6(config-if)# channel-group 100 force
fcip110 added to port-channel 100 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
SAN192C-6(config-if)# Performing fast copy of configurationdone.
SAN192C-6# show in
in-order-guarantee incompatibility-all inventory
inactive-if-config install
incompatibility interface
SAN192C-6# show interface port-channel 100
port-channel100 is trunking
Hardware is IPStorage
Port WWN is 24:64:00:2a:6a:a4:1a:80
Admin port mode is auto, trunk mode is on
snmp link state traps are enabled
Port mode is TE
Port vsan is 1
Speed is 20 Gbps
Logical type is core
Trunk vsans (admin allowed and active) (1,50)
Trunk vsans (up) (1,50)
Trunk vsans (isolated) ()
Trunk vsans (initializing) ()
5 minutes input rate 368 bits/sec, 46 bytes/sec, 0 frames/sec
5 minutes output rate 424 bits/sec, 53 bytes/sec, 0 frames/sec
1455 frames input, 152316 bytes
SAN192C-6#
SAN192C-6# show topology vsan 50
To accomplish this task, we must first configure multiple VLAN sub-interfaces on the physical IPS interface. Then, we create the FCIP interfaces, which are tied to these VLAN sub-interfaces. When using VLAN sub-interfaces, you must match the VLANs on the IPS sub-interfaces with the VLANs that are configured on the Ethernet switch that is physically attached to the IBM c-type switch.
On the IBM Storage Networking SAN384C-6 device tab, we double-click the IPStorage 7/3
port, which shows the default MTU of 1500, as shown in Figure 7-127.
In Figure 7-129, we select the VLAN tab to add VLANs 1000 and 1010, which match our
Ethernet switch configuration to create a VLAN trunk between the IBM Storage Networking
SAN384C-6 switch and the Ethernet switch. Click Apply.
Note: The naming convention for the “IPStorage 7/3.1000” sub-interface starts with the IPS interface name, followed by a period and then the VLAN number.
We select the two switch end points that we will use to create our FCIP tunnel, as shown in
Figure 7-132.
Select View configured → Profiles, and then validate which profiles and TCP ports are in
use, as shown in Figure 7-138 on page 371. These values should be unique per FCIP tunnel
creation.
Select View configured → Tunnels, and then validate which profiles, tunnels, and IP
addresses were created and are in use, as shown in Figure 7-139. These values should be
unique per FCIP tunnel creation.
Figure 7-141 shows that the configuration successfully completed. To proceed, click OK and
then Close.
We select the Enforce IPSEC Security checkbox to enable encryption and enter the IKE authentication key on the second VLAN sub-interface, as shown in Figure 7-144 on page 375. This setting provides encrypted connectivity between our two endpoint switches, IBM Storage Networking SAN192C-6 and IBM Storage Networking SAN384C-6. We also select Jumbo Frames. Click Next.
We provide the IP addresses for each of the VLAN sub-interface ports on each switch end
point, as shown in Figure 7-145. Click Next.
We specify the final parameters for the second FCIP link configuration, as shown in
Figure 7-147 on page 377:
Profile ID: 130, which provides detailed information about the local IP address and TCP
parameters.
Tunnel ID: 130, which is used to create the name of the new FCIP interface fcip130.
FICON Port Address: 0xF4, which is only applicable when FICON is enabled on the
switch. The FICON Port Address must be configured when there is a FICON VSAN
communicating on this FCIP tunnel. As a best practice, use the same value on both
switches for the FCIP tunnel when possible. The value of this attribute must be taken from
the pool of logical FICON Port Addresses.
Trunk Mode: As a best practice, use Trunk.
VSAN List: Should have 1 and the value of any VSANs that require access to the FCIP
tunnel. In this example, it is VSAN 50.
Figure 7-148 shows a summary of the configuration that will be applied to create the second
FCIP link. To proceed, click Finish.
Figure 7-150 shows the Device Manager view of the newly created FCIP 120 and FCIP 130
interfaces.
Example 7-11 Configuring the VLAN sub-interfaces with FCIP by using the CLI
SAN384C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# interface IPStorage7/3
SAN384C-6(config-if)# switchport mtu 2500
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)#
SAN384C-6(config-if)# interface IPStorage 7/3.1000
SAN384C-6(config-if)# ip address 10.1.3.2 255.255.255.0
SAN384C-6(config-if)# switchport mtu 2500
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)#
SAN384C-6(config-if)# fcip profile 120
SAN384C-6(config-profile)# ip address 10.1.3.2
SAN384C-6(config-profile)# tcp max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 1
SAN384C-6(config-profile)#
SAN384C-6(config-profile)# interface fcip120
SAN384C-6(config-if)# use-profile 120
SAN384C-6(config-if)# peer-info ipaddr 10.1.3.1
SAN384C-6(config-if)# tcp-connections 5
SAN384C-6(config-if)# switchport trunk allowed vsan 1
Warning: This command will remove all VSANs currently being trunked and trunk only
the specified VSANs.
Do you want to continue? (y/n) [n] y
SAN384C-6(config-if)# switchport trunk allowed vsan add 50
SAN384C-6(config-if)#
SAN384C-6(config-if)# ficon portnumber 0xf3
SAN384C-6(config-if)#
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)#
SAN384C-6(config-if)# end
Performing fast copy of configurationdone.
SAN384C-6# show fcip summary
-------------------------------------------------------------------------------
Tun prof IPS-if peer-ip Status T W T Enc Comp Bandwidth rtt
E A A max/min (us)
-------------------------------------------------------------------------------
100 100 IPS7/1 10.1.1.1 TRNK Y N N N N 10000M/9500M 1000
110 110 IPS7/2 10.1.2.1 TRNK Y N N N N 10000M/9500M 1000
120 120 IPS7/3.1000 10.1.3.1 TRNK Y N N N N 5000M/4500M 1000
SAN192C-6
SAN192C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# interface IPStorage6/3
Example 7-12 shows how we configured FCIP on VLAN sub-interfaces on the second
sub-interface for the IBM Storage Networking SAN384C-6 and IBM Storage Networking
SAN192C-6 switches with encryption and compression enabled by using the CLI.
Example 7-12 Configuring FCIP on VLAN sub-interfaces with encryption and compression
SAN384C-6#
SAN384C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN384C-6(config)# feature crypto ike
SAN384C-6(config)# feature crypto ipsec
SAN384C-6(config)#
SAN384C-6(config)# interface IPStorage7/3
SAN384C-6(config-if)# switchport mtu 2500
SAN384C-6(config-if)# no shut
SAN384C-6(config-if)#
SAN384C-6(config-if)# interface IPStorage7/3.1010
SAN384C-6(config-if)# ip address 10.1.4.2 255.255.255.0
-------------------------------------------------------------------------------
Tun prof IPS-if peer-ip Status T W T Enc Comp Bandwidth rtt
E A A max/min (us)
-------------------------------------------------------------------------------
100 100 IPS7/1 10.1.1.1 TRNK Y N N N N 10000M/9500M 1000
110 110 IPS7/2 10.1.2.1 TRNK Y N N N N 10000M/9500M 1000
120 120 IPS7/3.1000 10.1.3.1 TRNK Y N N N N 5000M/4500M 1000
130 130 IPS7/3.1010 10.1.4.1 TRNK Y N N Y A 5000M/4500M 1000
SAN384C-6#
SAN384C-6# show int fcip130
fcip130 is trunking
SAN192C-6
SAN192C-6#
SAN192C-6# conf t
Enter configuration commands, one per line. End with CNTL/Z.
SAN192C-6(config)# feature crypto ike
SAN192C-6(config)# feature crypto ipsec
SAN192C-6(config)#
SAN192C-6(config)# interface IPStorage6/3
SAN192C-6(config-if)# switchport mtu 2500
SAN192C-6(config-if)# no shutdown
SAN192C-6(config-if)#
SAN192C-6(config-if)# interface IPStorage6/3.1010
SAN192C-6(config-if)# ip address 10.1.4.1 255.255.255.0
SAN192C-6(config-if)# switchport mtu 2500
SAN192C-6(config-if)#
SAN192C-6(config-if)# fcip profile 130
SAN192C-6(config-profile)# ip address 10.1.4.1
SAN192C-6(config-profile)# tcp max-bandwidth-mbps 5000 min-available-bandwidth-mbps 4500 round-trip-time-ms 1
SAN192C-6(config-profile)#
SAN192C-6(config-profile)# interface fcip130
SAN192C-6(config-if)# use-profile 130
SAN192C-6(config-if)# peer-info ipaddr 10.1.4.2
SAN192C-6(config-if)# tcp-connections 5
SAN192C-6(config-if)# ip-compression auto
SAN192C-6(config-if)# switchport trunk allowed vsan 1
Warning: This command will remove all VSANs currently being trunked and trunk only
the specified VSANs.
Do you want to continue? (y/n) [n] y
SAN192C-6(config-if)# switchport trunk allowed vsan add 50
SAN192C-6(config-if)# ficon portnumber 0xf4
SAN192C-6(config-if)# no shutdown
SAN192C-6(config-if)#
SAN192C-6(config-if)# crypto ike domain ipsec
SAN192C-6(config-ike-ipsec)# policy 1
SAN192C-6(config-ike-ipsec-policy)# key 7 swwxoomi address 10.1.4.2
SAN192C-6(config-ike-ipsec)#
SAN192C-6(config-ike-ipsec)# crypto map domain ipsec crset-fcip130-redbook 1
SAN192C-6(config-crypto-map-ip)# set peer 10.1.4.2
SAN192C-6(config-crypto-map-ip)# match address access_list_fcip130_redbook
SAN192C-6(config-crypto-map-ip)# set transform-set ipsec_default_transform_set
SAN192C-6(config-crypto-map-ip)#
SAN192C-6(config-crypto-map-ip)# ip access-list access_list_fcip130_redbook permit ip 10.1.4.1 0.0.0.0 10.1.4.2 0.0.0.0
SAN192C-6(config)#
SAN192C-6(config)# interface IPStorage6/3.1010
SAN192C-6(config-if)# crypto map domain ipsec crset-fcip130-redbook
SAN192C-6(config-if)# no shutdown
SAN192C-6(config-if)# end
Performing fast copy of configurationdone.
SAN192C-6# show crypto ike domain ipsec sa
-------------------------------------------------------------------------------
Tun prof IPS-if peer-ip Status T W T Enc Comp Bandwidth rtt
E A A max/min (us)
-------------------------------------------------------------------------------
100 100 IPS6/1 10.1.1.2 TRNK Y N N N N 10000M/9500M 1000
110 110 IPS6/2 10.1.2.2 TRNK Y N N N N 10000M/9500M 1000
120 120 IPS6/3.1000 10.1.3.2 TRNK Y N N N N 5000M/4500M 1000
130 130 IPS6/3.1010 10.1.4.2 TRNK Y N N Y A 5000M/4500M 1000
Figure 7-153 Ethernet ports that are used for the FCIP tunnel
Routes are needed in this example because the IP addresses are not in the same subnet.
Specify the IP Address/Route, as shown in Figure 7-154, and click Next.
Figure 7-161 on page 395 shows the FCIP configuration summary continued.
Click OK and then Close to apply the configuration settings, as shown in Figure 7-162.
Note: Log in to both local and remote switches when validating that the FCIP links are up
and IPsec security is configured.
-------------------------------------------------------------------------------
Tun prof IPS-if peer-ip Status T W T Enc Comp Bandwidth rtt
E A A max/min (us)
-------------------------------------------------------------------------------
200 200 IPS1/2 10.122.118.10 TRNK Y N N Y A 500M/100M 72000
SAN50C-R#
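To validate both ends, the following commands, which are drawn from the examples in this chapter, can be run on each switch to confirm the tunnel state and the IPsec security associations. The fcip200 interface number is specific to this example, and the output is omitted here:

! Run on both the local and remote switches (interface number is illustrative)
SAN50C-R# show fcip summary
SAN50C-R# show interface fcip200
SAN50C-R# show crypto ike domain ipsec sa
SAN50C-R# show crypto sad domain ipsec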
Example 7-14 shows the IPsec and IKE security configuration for FCIP tunnels.
SAN50C-R#
This chapter provides an overview of useful tasks that you might want to consider regarding
your day-to-day operations and any troubleshooting that might be required in your
environment.
To view performance information in your SAN environment, IBM c-type switches use Data Center Network Manager (DCNM) as the standard base tool for performance monitoring. In tandem with Device Manager (DM), both tools provide several mechanisms that you can use to monitor and view real-time, lightweight, and high-level historical data for IBM c-type switch performance and troubleshooting. Data can be graphed over time to provide real-time insight into performance, such as the following items:
Real-time SAN Inter-Switch Link (ISL) statistics
SAN modules, ports, and a host of additional SAN elements
The entire SAN fabric health
Ingress and egress Fibre Channel (FC) traffic errors
Class 2 traffic that shows buffer-to-buffer (B2B) and end-to-end credit flow control statistics
Checking for oversubscription
Threshold monitoring
RX and TX utilization percentages
Link failures, InvalidCrcs, InvalidTxWaitCounts, and Sync Losses
IBM Fibre Connection (FICON) data fabrics
DM is used for monitoring and configuring ports on the IBM c-type Family switches. When
gathering DM statistics, you can configure selective polling intervals to monitor the
performance of your SAN environment and troubleshoot any potential problems that exceed
specified thresholds.
A polling interval can be set as high as 1 hour and 30 minutes or as low as 10 seconds. The results that you can view are as follows:
Absolute value or Value per second
Minimum or maximum value per second
To configure these settings, you must first log in to DM, as shown in Figure 8-1 on page 399.
The per port monitoring option provides many statistics. We select the Device tab view,
right-click fc1/1, and select MONITOR to view the real-time monitor dialog box, as shown in
Figure 8-2.
When using DM to set the error thresholds, select Threshold Manager, as shown in
Figure 8-6 on page 403.
The Threshold Monitor can trigger an SNMP alert and log messages when a selected statistic
reaches its configured threshold value.
Best practice: Configure the DM thresholds on your IBM c-type Family switches so that
you can monitor the performance of your SAN environment and troubleshoot any potential
problems that exceed the specified thresholds.
DCNM is a management tool that is used for provisioning, monitoring, and troubleshooting IBM c-type Family SAN environments. It provides a structured point of command and control that gives you complete visibility into your entire IBM c-type Family fabric infrastructure. DCNM provides a centralized, high-level, web-based view with a complete feature set that meets administrative requirements in data centers by streamlining the management, provisioning, monitoring, and troubleshooting of IBM c-type SAN devices.
Best practice: IBM c-type DCNM Advanced includes SAN Insights, which is the
recommended web UI when using the SAN Analytics feature.
Figure 8-8 on page 405 shows the DCNM Advanced login window.
After you log in to DCNM, the dashboard summary opens, which provides storage
administrators with a 24-hour snapshot of their SAN fabric and the ability to focus on key
health and performance metrics on your IBM c-type SAN fabric.
There are many default dashlets that can be customized to provide a visual look into your
SAN environment. These dashlets range from an inventory of switches and modules to ones
like Top CPU, Top ISLs, Link traffic, and Alerts, as shown in Figure 8-9.
Suggested reading:
DCNM SAN Management Configuration Guide 11.5(1)
Cisco MDS SAN Analytics and Telemetry Streaming Configuration Guide
To configure PMON by using DCNM, launch DCNM and select Configure → SAN → Port
Monitoring, as shown in Figure 8-11 on page 407.
The window that is shown in Figure 8-12 opens and shows one of the default policies, which
is named Normal_accessPort.
We create a policy by selecting the existing policy from the drop-down menu that is shown in Figure 8-12 on page 407 that is closest to the policy that we require, and then modifying it as needed. Figure 8-14 shows that we selected the Most-Aggressive_allPort policy and made several changes.
Now, save the policy under a new name, as shown in Figure 8-15 on page 409.
After the policy is saved, it is available under the CustomPolicy list of policies, as shown in
Figure 8-16.
A window of your environment opens. Our environment is shown in Figure 8-18, where we
selected both fabrics. When all switches are chosen, click Push.
For more information about the results, click Log. If the push to the switches was not
successful, the log provides more information to help with troubleshooting. Figure 8-21 shows
an example of log details. Use the scroll bar on the right to view the full log.
Events can be filtered by selecting the Quick Filter option, as shown in Figure 8-23, and then
selecting a filter. Here, we filter on Warning.
Depending on whether you select none, one, or multiple rows in the left column, different actions become available that can be applied to the selected rows, such as:
Delete
Clear Selection
Delete All
Acknowledge
Unacknowledge
Suppressor
You can view the active PMON policy on a switch by using DM or the CLI. The following
examples show how to do this task by using DM. Launch DM from DCNM, as shown in
Figure 8-25.
Select Admin → Events → Port Monitor → Show, as shown in Figure 8-26 on page 415.
A window opens and runs the show port-monitor command. Example 8-1 is truncated to
show only the active policy, but inactive policies also can be shown by using this command.
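The same check can be made directly from the switch CLI. The following minimal sketch lists the active policy and then all configured policies; command output is omitted here and varies with the configured policies:

! Display only the currently active Port Monitor policy
SAN192C-6# show port-monitor active
! Display all configured Port Monitor policies, active and inactive
SAN192C-6# show port-monitor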
These two configuration files can be different in instances where you want to change the
device configuration temporarily without saving the running configuration changes to
startup-configuration.
Before changing the startup configuration file, save the running configuration to the startup configuration by using the copy running-config startup-config command, or copy a configuration file from a backup copy on a file server to the startup configuration.
To change the running configuration, use the configure terminal command to enter
configuration mode. After you enter global configuration mode, commands generally run
immediately and then are saved to the running configuration file immediately after the
command runs or when you exit configuration mode.
Best practice: Back up your switch configuration and save a copy to an external location
before making changes.
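For example, the following minimal sketch saves the running configuration to the startup configuration and then copies a backup off the switch; the backup file name, server address, path, and SFTP user are illustrative assumptions and depend on your management network:

! Save the running configuration to the startup configuration
SAN192C-6# copy running-config startup-config
! Keep a local backup copy on bootflash (file name is illustrative)
SAN192C-6# copy running-config bootflash:SAN192C-6-backup.cfg
! Copy the backup to an external server (server, user, and path are illustrative)
SAN192C-6# copy bootflash:SAN192C-6-backup.cfg sftp://admin@192.0.2.50/backups/SAN192C-6-backup.cfg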
Alternatively, you can perform a configuration backup of your switch by using DCNM. With this
feature, you can back up device configurations from a running configuration. The backup files
can be stored on the DCNM server or an external location, which is recommended.
Important: For more information about how to back up a device configuration by using
DCNM, see Backup.
Call Home provides email-based notification of critical system events, which can go to your SAN administrators. The Call Home function is available directly through the IBM c-type Family switches. Call Home supports multiple message formats and separate destinations: you can define your own destination profiles, use predefined profiles, and configure up to 50 email addresses per destination profile.
Best practice: Configure Call Home on all c-type switches in your environment as a
preventive maintenance feature. For more information, see "Configuring Call Home" in the
Cisco MDS 9000 Family Configuration Guide and IBM c-type Family and Cisco MDS 9000
Series Remote Support Overview.
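The following minimal sketch shows the general shape of a Call Home configuration from the CLI. The contact, destination profile, email addresses, and SMTP server address are illustrative assumptions; the configuration guides that are referenced above describe the full set of options:

SAN192C-6# configure terminal
SAN192C-6(config)# callhome
! Contact, email addresses, and SMTP server below are illustrative values
SAN192C-6(config-callhome)# email-contact san-admin@example.com
SAN192C-6(config-callhome)# destination-profile full_txt email-addr san-oncall@example.com
SAN192C-6(config-callhome)# transport email from SAN192C-6@example.com
SAN192C-6(config-callhome)# transport email smtp-server 192.0.2.25
SAN192C-6(config-callhome)# enable
SAN192C-6(config-callhome)# end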
FC fabric connectivity requires multiple electrical and optical components to function correctly,
including cables, transceivers, port ASICs, switching ASICs, and communication buses
internal to the switches. If any of these components are faulty, they affect I/O operations over
the fabric. Today, FC is deployed in mission-critical networks where resiliency and throughput
are high-priority requirements. In such networks, early identification of any faults is critical to
gaining customer trust. For this reason, the IBM c-type Series provides a comprehensive set
of system-and link-level diagnostic capabilities.
This software-based suite of tools, hardware-enabled for some tests, can dynamically verify
whether everything is working as expected. The Generic Online Diagnostics (GOLD)
capability offers a complete suite of tests to verify that supervisor engines, switching modules,
ASICs, communication buses, optics, and interconnections are functioning properly. GOLD
tests can be run at initial system start, periodically at run time, and on demand when invoked
by the administrator.
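For example, the following minimal sketch lists the GOLD tests that are defined for a module, their most recent results, and the configured bootup diagnostic level; the module number is illustrative:

! List the diagnostic tests that are defined for module 1 (module number is illustrative)
SAN384C-6# show diagnostic content module 1
! Display the latest results of those tests
SAN384C-6# show diagnostic result module 1
! Display the diagnostic level that is used at system start
SAN384C-6# show diagnostic bootup level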
The startup diagnostics run during the boot procedure and detect faulty hardware when a new networking device is brought online. These tests represent an evolution of the power-on self-test (POST) capabilities that were once present on similar switches. They verify the checksum on
the boot and switch firmware images, perform internal data loopback testing on all FC ports,
and perform access and integrity checks on management ports and nonvolatile memory
components. During the diagnostics phase, the switch logs any errors that are encountered.
Runtime and on-demand tests are even more specific and implement health-monitoring
diagnostics. Enabled by default, they verify the health of a live system at user-configurable
periodic intervals. The health-monitoring diagnostic tests detect possible hardware errors and
data-path problems without disrupting the data or control traffic. ISL diagnostics are available
to help check the health and performance of ISLs (E and TE ports) before the links are
activated for production traffic, measuring frame round-trip latencies and cable lengths.
Host bus adapter (HBA) diagnostic capability is also available. It is like ISL diagnostics but
supported on F ports. These capabilities can be configured from a CLI or DCNM. Figure 8-29
shows how to configure HBA diagnostics from DCNM.
Host-to-switch connectivity (N and F ports) tests are also available to IBM c-type Family
devices as an extension to the diagnostics suite. For host connectivity probing, the
International Committee for Information Technology Standards (INCITS) T11 FC-LS-4
standard refers to a specific implementation for beaconing the peer port for ease of
identification and a capability to gather detailed information from end nodes. This solution is
based on Link Cable Beacon Extended Link Service (LCB-ELS) and the Read Diagnostic
Parameters (RDP) Link Service command, which is used to query N_port-related link-and
port-level diagnostic parameters. In addition to these intelligent diagnostics features, the
IBM c-type Family offers hardware-based slow-drain port detection and remediation
capabilities that go beyond the capabilities that are offered by competing products.
There are two versions of the RDP feature. The RDP query can be host-originated or
switch-originated, and provides visibility into the operational port and link characteristics of
any other port in the SAN. In both cases, the feature must be supported by the HBA and the
switch and is included at no cost on IBM c-type switches.
The host-originated feature is intended for periodic housekeeping for health and performance
of switch ports locally connecting to HBAs or remote switch ports. Typically, hosts initiate the
RDP request to query the diagnostic parameters of the N_port of the target device.
Switch-originated RDP works in the opposite way. An RDP request can be sent from an
IBM c-type switch to any end device and request the diagnostic parameters of the N_port.
The queried device can be locally or remotely connected to the switch from where the RDP
request is sent.
The FC RDP feature can read port and link diagnostic parameters like link errors, congestion
counters, port names, port speeds, Small Form-factor Pluggable (SFP) diagnostics,
temperatures, Rx power, Tx power, electrical current, Forward Error Correction (FEC) status,
buffer credits, serial number, vendor details, model number, and manufacture date.
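For example, a switch-originated query can be issued from the CLI, as in the following minimal sketch. The FCID and VSAN values are illustrative assumptions; the FCID of the target N_port can be found in the name server database first:

! Find the FCID of the target N_port (VSAN number is illustrative)
SAN192C-6# show fcns database vsan 50
! Query the link and port diagnostic parameters of that N_port (FCID is illustrative)
SAN192C-6# show rdp fcid 0x680000 vsan 50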
The benefit of the RDP feature is that link issues can be diagnosed centrally without sending someone into data center rooms with optical power meters or other measurement tools,
which potentially might cause further disruption on the links. Congestion situations can be
identified, and appropriate values of buffer credits can be determined for the distances that
are involved. It is also possible to identify links where auto-negotiation did not operate
properly, and the operating speed is lower than expected. This information, when available for
each end of a link or each port in the path from server to disk, gives visibility into the health of
the transport infrastructure. The SAN can be monitored for current failing links and
transceivers, and investigated for proactive maintenance with predictive analyses.
Example 8-3 shows where the switch gets information from the connected host HBA.
FEC Status:
------------------------------
Corrected blocks : 0
Uncorrected blocks : 0
Port Congestion:
------------------------------
Tx Zero Credit Count : 3
Rx Zero Credit Count : 0
Tx Delay Count : 0
Delay Interval : 2500
Tx Discard Count : 0
Tx Discard Interval : 500
Active State Tx LR Count : 0
----------------------------------------------------------------------------
                                    Alarms                    Warnings
Measurement     Current         High        Low           High        Low
----------------------------------------------------------------------------
Temperature     26.89 C        75.00 C     -5.00 C       70.00 C      0.00 C
Voltage          3.28 V         3.63 V      2.97 V        3.46 V      3.13 V
Current          7.37 mA       10.50 mA     2.50 mA      10.50 mA     2.50 mA
Tx Power        -2.49 dBm       1.70 dBm  -13.01 dBm     -1.30 dBm   -9.00 dBm
Rx Power       -23.87 dBm       3.00 dBm  -15.92 dBm      0.00 dBm  -11.90 dBm
----------------------------------------------------------------------------
Note: ++ high-alarm; + high-warning; -- low-alarm; - low-warning
The RDP feature can be applied to FICON environments, and the same set of parameters
can be collected.
The capabilities in the diagnostics suite of IBM c-type switches can help reduce operational costs. Some of these capabilities are as follows:
Verify infrastructure readiness before going live into production (pre-production).
Provide key insights to troubleshoot production connections (production).
Find issues before they become critical situations.
Reduce operational costs by pinpointing and resolving issues fast.
Figure 8-31 summarizes the benefits of the IBM c-type diagnostics suite.
Optical transceivers can be easily replaced by extracting them from hosting modules with
their mylar tab latch or bale-clasp latch.
The locator ID LED helps you identify line cards, supervisors, power supplies, fans, or
crossbar fabric units. The IBM c-type Family is the only one that offers locator IDs for all
system modules. The administrator can turn on the beacon mode or locator ID LED from the
remote central management station so that the support engineer can quickly identify the
component that requires attention. Enabling the beacon mode or locator ID LED has no effect
on the operation of the interface or module.
Figure 8-33 shows a locator ID LED on the 3-kW AC Power Supply Module.
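For example, the beacon LED of an individual port can be turned on from the CLI, as in the following minimal sketch. The interface number is illustrative, and the equivalent locator ID LED controls for modules, fans, power supplies, and crossbar fabric units are described in the platform documentation:

SAN384C-6# configure terminal
! Interface number is illustrative; beacon mode does not affect traffic
SAN384C-6(config)# interface fc1/1
SAN384C-6(config-if)# switchport beacon
SAN384C-6(config-if)# end
! Verify that beacon mode is turned on for the port
SAN384C-6# show interface fc1/1 | include Beacon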
Specific critical events, error conditions, and important statistics are automatically recorded with their timestamps in non-volatile random access memory (NVRAM) onboard the IBM c-type Family switch and director line cards. This Onboard Failure Logging (OBFL) capability provides an event data recorder for networking devices and is useful for performing root-cause analyses of
slow-drain situations even after they are cleared. Post-mortem analysis of failed cards or
failed switches is possible by retrieving the stored information. OBFL is enabled by default on
all IBM c-type Family switches and director line cards.
The OBFL process on each line card runs separately at (typically) 20-second intervals and
records any counter that changed value in the last interval. When it detects a counter that
changed value, it records the following information:
Interface or interface range
Counter name
Current counter value
Date and time of when OBFL detected the counter's changed values
Each of these recorded events can be displayed starting at a specific date and time and
ending at a specific date and time. This capability allows problems that occurred even months
ago to be investigated. These events are often the first place to look after a problem occurs.
OBFL is a unique feature of IBM c-type storage networking devices and is considered
valuable by support specialists. It is one of those features that under normal conditions is
often ignored but becomes critical when you need it.
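For example, the following minimal sketch displays OBFL error statistics, first for the whole switch, then for one module, and then limited to a time window. The module number and the time range are illustrative, and option names and time formats can vary by NX-OS release:

! Show OBFL error statistics for all modules
SAN192C-6# show logging onboard error-stats
! Show OBFL error statistics for one module (module number is illustrative)
SAN192C-6# show logging onboard module 9 error-stats
! Limit the OBFL output to a time window (dates and times are illustrative)
SAN192C-6# show logging onboard starttime 04/13/21-16:00:00 endtime 04/13/21-17:00:00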
Example 8-5 shows how frame drops and the TxWait counter would be timestamped so that it
is easier to correlate frame drops within the switch with external notification of drops and
application issues. A simple counter of drops with no timestamp would not serve this purpose.
Example 8-5 shows that two counters of the F32 ASIC keep incrementing over time.
------------------------------------------------------------------------------------------------
ERROR STATISTICS INFORMATION FOR DEVICE DEVICE: FCMAC
------------------------------------------------------------------------------------------------
Interface | | | Time Stamp
Range | Error Stat Counter Name | Count |MM/DD/YY HH:MM:SS
| | |
------------------------------------------------------------------------------------------------
fc9/17 |F32_TMM_PORT_TIMEOUT_DROP |18032 |04/13/21 16:08:18
fc9/17 |F32_MAC_KLM_CNTR_TX_WT_AVG_B2B_ZERO |4357 |04/13/21 16:08:18
fc9/17 |F32_TMM_PORT_TIMEOUT_DROP |11817 |04/13/21 16:07:58
fc9/17 |F32_MAC_KLM_CNTR_TX_WT_AVG_B2B_ZERO |4206 |04/13/21 16:07:58
fc9/17 |F32_TMM_PORT_TIMEOUT_DROP |6161 |04/13/21 16:07:38
fc9/17 |F32_MAC_KLM_CNTR_TX_WT_AVG_B2B_ZERO |4055 |04/13/21 16:07:38
fc9/17 |F32_TMM_PORT_TIMEOUT_DROP |223 |04/13/21 16:07:18
fc9/17 |F32_MAC_KLM_CNTR_TX_WT_AVG_B2B_ZERO |3933 |04/13/21 16:07:18
fc9/17 |F32_TMM_PORT_TIMEOUT_DROP |195 |04/13/21 16:06:58
fc9/17 |F32_MAC_KLM_CNTR_TX_WT_AVG_B2B_ZERO |3808 |04/13/21 16:06:58
Resource Measurement Facility (RMF) is the IBM strategic product for reporting system activity, and it uses SMF records and z/OS monitoring services for its functions.
SMF record type 74 has several subtypes. Subtype 7 is used to collect FICON Director data,
which it gets by communicating with the FICON IBM Control Unit Port (CUP). Both RMF and
CMF can produce a FICON Director Activity Report from that data.
By default, SMF 74.7 records are not saved and FICON Director Activity reports are not
produced.
Figure 8-34 on page 427 shows the SMF Record Types for I/O.
To capture SMF 74.7 records and create FICON Director Activity reports, you must complete
the following tasks:
1. Capture SMF 74.7 records:
a. Parmlib entries:
Parmlib member SMFPRMxx:
Add (or change) a parameter to a record:
LSNAME(SYS1.SMF.PERF,TYPE(30,89,74.7)) RECORDING(LOGSTREAM)
b. Operator commands:
i. Use the SETSMF command to dynamically change (add) the TYPEs that are
collected.
ii. Use the D SMF command to verify what you specified.
2. Enable FICON Director Activity Report by using RMF:
a. Parmlib entries:
i. Parmlib member ERBRMFxx:
Add (or change) a parameter to FCD (FICON Director Analysis). It is not on by default (ERBRMF00) when the member contains NOFCD /* NO FICON DIRECTOR MEASURED */.
ii. Optionally, parmlib member IECIOSxx:
Add the parameter FICON STATS=NO on any system where you do not want these
records collected. You can put FCD in all systems, and identify which focal point
system collects them.
b. Operator commands:
Run the D IOS,FICON command to see what you have.
Figure 8-35 shows an example of a FICON Director Activity Report from RMF. The UNIT
column identifies the following items:
SWITCH for an ISL
CHP for a FICON Channel Path ID (CHPID)
CHP-H for an IBM High-Performance FICON for System z (IBM zHPF) FICON CHPID
CU for a CU interface.
Figure 8-36 on page 429 shows an example of a FICON Director Activity Report from RMF
where there are two switches with ISLs.
The report can be used to develop a diagram of connectivity and data flow, as shown in
Figure 8-37.
Figure 8-37 Diagram of I/O flow based on FICON Director Activity Report
The read/write information is from the FICON Director perspective. For example, Switch 18
port 00 is reading at 114.89 MBps, which means that Channel AD is writing. Switch 18 port
01 is an ISL (the unit is SWITCH) and is writing 114.89 MBps to switch 17 port 01, which is
reading 114.88 MBps. Switch 17 port 00 is writing 114.88 MBps to the device, which is
reading.
You can get a good idea about how your I/O flows are performing by examining several RMF
or CMF reports:
FICON Director Activity Report
Device Activity Report
I/O Queueing Report
Look for port utilization (port bandwidth divided by link speed) and I/O frame pacing, which
can indicate that the port is overutilized. Adding paths to the CU, if possible, can help. Errors
might indicate a physical problem. Long I/O queues indicate a heavily loaded device that
might need more paths. For more information, see the following resources:
z/OS Version 2 Release 3 MVS System Management Facility (SMF), SA38-0667-30
z/OS Version 2 Release 3 Resource Measurement Facility User's Guide, SC34-2664-30
z/OS Version 2 Release 3 Resource Measurement Facility Report Analysis, SC34-2665-30
z/OS Version 2 Release 3 MVS Initialization and Tuning Guide, SA23-1379-30
SG24-8468-00
ISBN 0738460214
Printed in U.S.A.
ibm.com/redbooks