18CS822 Storage Area Networks Module-3 IP SAN FCoE
ACADEMIC YEAR 2021-2022 (EVEN SEMESTER)
STORAGE AREA NETWORKS (18CS822)
Module 3
Module 3: IP SAN and FCoE: iSCSI: Components of iSCSI, iSCSI host connectivity,
iSCSI topologies, iSCSI protocol stack, iSCSI PDU, iSCSI Discovery, iSCSI names,
iSCSI session, iSCSI command sequencing, FCIP: FCIP protocol stack, Topology,
performance and security.
Network-Attached Storage: General-Purpose Servers versus NAS Devices,
Benefits of NAS, File Systems and Network File Sharing: Accessing a file system,
Network file sharing, Components of NAS, NAS I/O Operation.
NAS Implementations: Unified NAS, Gateway NAS, Connectivity, Scale-out, Scale-out
Connectivity, NAS File-Sharing Protocols, Factors Affecting NAS Performance.
Text Books:
1. EMC Education Services, “Information Storage and Management”, Wiley
India Publications, 2009. ISBN: 9781118094839.
Reference Books:
1. Paul Massiglia, Richard Barker, "Storage Area Network Essentials: A
Complete Guide to Understanding and Implementing SANs", 1st Edition,
Wiley India Publications, 2008.
Text Book 1: Ch 3.1 to 3.6, Ch 4.1, 4.3, Ch 5.1 to 5.3. RBT: L1, L2
CSE DEPT Vemana IT
IP SAN and FCoE
3.1 iSCSI
iSCSI is an IP-based protocol that establishes and manages
connections between host and storage over IP, as shown in Figure 3.1. iSCSI
encapsulates SCSI commands and data into an IP packet and transports them using
TCP/IP. iSCSI is widely adopted for connecting servers to storage because it is
relatively inexpensive and easy to implement, especially in environments in which an
FC SAN does not exist.
Fig 3.1: iSCSI Implementation
3.1.1 Components of iSCSI
An initiator (host), target (storage or iSCSI gateway), and an IP-based network are the
key iSCSI components. If an iSCSI-capable storage array is deployed, then a host with
the iSCSI initiator can directly communicate with the storage array over an IP network.
However, in an implementation that uses an existing FC array for iSCSI communication,
an iSCSI gateway is used. These devices perform the translation of IP packets to FC
frames and vice versa, thereby bridging the connectivity between the IP and FC
environments.
3.1.2 iSCSI Host Connectivity
A standard NIC with software iSCSI initiator, a TCP offload engine (TOE) NIC with software
iSCSI initiator, and an iSCSI HBA are the three iSCSI host connectivity options. The
function of the iSCSI initiator is to route the SCSI commands over an IP network. A
standard NIC with a software iSCSI initiator is the simplest and least expensive
connectivity option. It is easy to implement because most servers come with at least one,
and in many cases two, embedded NICs. It requires only a software initiator for iSCSI
functionality. Because NICs provide standard IP function, encapsulation of SCSI into IP
packets and decapsulation are carried out by the host CPU. This places additional
overhead on the host CPU. If a standard NIC is used in heavy I/O load situations, the
host CPU might become a bottleneck. TOE NIC helps alleviate this burden. A TOE NIC
offloads TCP management functions from the host and leaves only the iSCSI functionality
to the host processor. The host passes the iSCSI information to the TOE card, and the
TOE card sends the information to the destination using TCP/IP. Although this solution
improves performance, the iSCSI functionality is still handled by a software initiator that
requires host CPU cycles.
3.1.3 iSCSI Topologies
Two topologies of iSCSI implementations are native and bridged. Native topology does
not have FC components. The initiators may be either directly attached to targets or
connected through the IP network. Bridged topology enables the coexistence of FC with
IP by providing iSCSI-to-FC bridging functionality. For example, the initiators can exist in
an IP environment while the storage remains in an FC environment.
Native iSCSI Connectivity
FC components are not required for iSCSI connectivity if an iSCSI-enabled array is
deployed. In Figure 3-2 (a), the array has one or more iSCSI ports configured with an IP
address and is connected to a standard Ethernet switch.
Fig 3.2: iSCSI Topologies
Bridged iSCSI Connectivity
A bridged iSCSI implementation includes FC components in its configuration. Figure
3-2 (b) illustrates iSCSI host connectivity to an FC storage array.
Combining FC and Native iSCSI Connectivity
The most common topology is a combination of FC and native iSCSI. Typically, a storage
array comes with both FC and iSCSI ports that enable iSCSI and FC connectivity in the
same environment, as shown in Figure 3-2 (c).
3.1.4 iSCSI Protocol Stack
Figure 3-3 displays a model of the iSCSI protocol layers and depicts the encapsulation
order of the SCSI commands for their delivery through a physical carrier. SCSI is the
command protocol that works at the application layer of the Open Systems Interconnection
(OSI) model. The initiators and targets use SCSI commands and responses to talk to each
other. The SCSI command descriptor blocks, data, and status messages are encapsulated
into TCP/IP and transmitted across the network between the initiators and targets.
Fig 3.3: Protocol Stack
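To make the layering concrete, here is a deliberately simplified Python sketch of this encapsulation order; the header byte strings are placeholders for the real protocol headers, not wire formats.

```python
# Simplified sketch of the iSCSI encapsulation order: SCSI CDB -> iSCSI PDU
# -> TCP segment -> IP packet. Header byte strings are placeholders only.

def encapsulate(scsi_cdb: bytes) -> bytes:
    iscsi_pdu = b"ISCSI_HDR" + scsi_cdb   # iSCSI header describes the SCSI payload
    tcp_segment = b"TCP_HDR" + iscsi_pdu  # TCP provides ordered, reliable delivery
    ip_packet = b"IP_HDR" + tcp_segment   # IP carries the packet across the network
    return ip_packet

packet = encapsulate(b"\x28" + b"\x00" * 9)  # a simplified 10-byte SCSI READ(10) CDB
```

Unwrapping happens in the reverse order at the target, which is why the stack is drawn as nested layers.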
3.1.5 iSCSI PDU
A protocol data unit (PDU) is the basic “information unit” in the iSCSI environment. The
iSCSI initiators and targets communicate with each other using iSCSI PDUs. This
communication includes establishing iSCSI connections and iSCSI sessions, performing
iSCSI discovery, sending SCSI commands and data, and receiving SCSI status. All iSCSI
PDUs contain one or more header segments followed by zero or more data segments.
The PDU is then encapsulated into an IP packet to facilitate the transport. A PDU includes
the components shown in Figure 3-4. The IP header provides packet-routing information
to move the packet across a network. The TCP header contains the information required
to guarantee the packet delivery to the target. The iSCSI header (basic header segment)
describes how to extract SCSI commands and data for the target. iSCSI adds an optional
CRC, known as the digest, to ensure datagram integrity. This is in addition to TCP
checksum and Ethernet CRC. The header and the data digests are optionally used in the
PDU to validate integrity and data placement. As shown in Figure 3-5, each iSCSI PDU
does not correspond in a 1:1 relationship with an IP packet. Depending on its size, an
iSCSI PDU can span an IP packet or even coexist with another PDU in the same packet.
Fig 3.4 : iSCSI encapsulated in IP packet
Fig 3.5: Alignment of iSCSI PDU’s with IP Packets
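The PDU layout described above can be sketched as follows. This is a simplified illustration: the 48-byte basic header segment here carries only an opcode and a data-segment length, and an ordinary CRC-32 (via `zlib`) stands in for the CRC32C digest that real iSCSI uses.

```python
import zlib

BHS_LEN = 48  # every iSCSI PDU starts with a 48-byte basic header segment

def build_pdu(opcode: int, data: bytes, with_digests: bool = True) -> bytes:
    # Simplified BHS: opcode byte, 4 reserved bytes, 3-byte DataSegmentLength,
    # remainder zero-padded to 48 bytes (real BHS fields are omitted here).
    bhs = bytes([opcode]) + b"\x00" * 4 + len(data).to_bytes(3, "big")
    bhs += b"\x00" * (BHS_LEN - len(bhs))
    pdu = bhs
    if with_digests:                       # optional header digest (CRC)
        pdu += zlib.crc32(bhs).to_bytes(4, "big")
    pdu += data                            # zero or more data segments
    if with_digests and data:              # optional data digest
        pdu += zlib.crc32(data).to_bytes(4, "big")
    return pdu
```

The receiver recomputes each digest over what it received and compares; a mismatch means the segment was corrupted even though the TCP checksum passed.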
3.1.6 iSCSI Discovery
An initiator must discover the location of its targets on the network and the names of
the targets available to it before it can establish a session. This discovery can take place
in two ways: SendTargets discovery or Internet Storage Name Service (iSNS). iSNS
(see Figure 3-6) enables automatic discovery of iSCSI devices on an IP network. The
initiators and targets can be configured to automatically register themselves with the
iSNS server. Whenever an initiator wants to know the targets that it can access, it can
query the iSNS server for a list of available targets.
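A minimal sketch of the iSNS registration-and-query flow; the class name, target names, and access-control scheme below are invented for illustration.

```python
# Toy iSNS-style name server: targets register themselves, and an initiator
# queries for the targets it is permitted to access.

class SimpleiSNS:
    def __init__(self):
        self.targets = {}                     # target name -> portal address

    def register(self, name, portal):
        self.targets[name] = portal           # automatic self-registration

    def query(self, initiator_allowed):
        # return only the targets this initiator may see
        return {n: p for n, p in self.targets.items() if n in initiator_allowed}

isns = SimpleiSNS()
isns.register("iqn.2008-02.com.example:array1", "10.0.0.5:3260")
isns.register("iqn.2008-02.com.example:array2", "10.0.0.6:3260")
print(isns.query({"iqn.2008-02.com.example:array1"}))
```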
3.1.7 iSCSI Names
A unique worldwide iSCSI identifier, known as an iSCSI name, is used to identify the
initiators and targets within an iSCSI network to facilitate communication.
Following are two types of iSCSI names commonly used:
• iSCSI Qualified Name (IQN): An organization must own a registered domain name
to generate iSCSI Qualified Names. This domain name does not need to be active
or resolve to an address. It just needs to be reserved to prevent other organizations
from using the same domain name to generate iSCSI names. A date is included in
the name to avoid potential conflicts caused by the transfer of domain names. An
example of an IQN is iqn.2008-02.com.example:optional_string. The
optional_string provides a serial number, an asset number, or any other device
identifiers. An iSCSI Qualified Name enables storage administrators to assign
meaningful names to iSCSI devices, and therefore, manage those devices more
easily.
• Extended Unique Identifier (EUI): An EUI is a globally unique identifier based on
the IEEE EUI-64 naming standard. An EUI is composed of the eui prefix followed by
a 16-character hexadecimal name, such as eui.0300732A32598D26.
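The two name formats can be recognized with rough pattern checks like the following; these simplified regular expressions are illustrative, not a full validator of the iSCSI naming rules.

```python
import re

# Rough format checks for the two common iSCSI name types (simplified).
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")  # iqn.yyyy-mm.domain:optional
EUI_RE = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")                 # eui. + 16 hex characters

def iscsi_name_type(name: str) -> str:
    if IQN_RE.match(name):
        return "iqn"
    if EUI_RE.match(name):
        return "eui"
    return "invalid"

print(iscsi_name_type("iqn.2008-02.com.example:optional_string"))  # iqn
print(iscsi_name_type("eui.0300732A32598D26"))                     # eui
```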
Fig 3.6: Discovery using iSNS
3.1.8 iSCSI Session
An iSCSI session is established between an initiator and a target, as shown in Figure
3-7. A session is identified by a session ID (SSID), which includes part of an initiator ID and
a target ID. The session can be intended for one of the following:
• The discovery of the available targets by the initiators and the location of a specific
target on a network
• The normal operation of iSCSI (transferring data between initiators and targets)
Fig 3.7: iSCSI Session
3.1.9 iSCSI Command Sequencing
The iSCSI communication between the initiators and targets is based on request-
response command sequences. A command sequence may generate multiple PDUs. A
command sequence number (CmdSN) within an iSCSI session is used for numbering all
initiator-to-target command PDUs belonging to the session. This number ensures that
every command is delivered in the same order in which it is transmitted, regardless of
the TCP connection that carries the command in the session.
Fig 3.8: Command and status sequence number
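The role of CmdSN can be sketched as follows: even if command PDUs arrive out of order (for example, over different TCP connections in one session), the target delivers them in CmdSN order with no gaps. The PDU representation here is an invented simplification.

```python
# Sketch of CmdSN-based ordering at the target: commands are delivered in
# CmdSN order regardless of which connection carried each PDU.

def deliver_in_order(received_pdus):
    expected = 1
    delivered = []
    for pdu in sorted(received_pdus, key=lambda p: p["CmdSN"]):
        assert pdu["CmdSN"] == expected   # no gaps: every command accounted for
        delivered.append(pdu["cmd"])
        expected += 1
    return delivered

# PDUs arriving out of order within one session:
arrivals = [{"CmdSN": 2, "cmd": "WRITE"}, {"CmdSN": 1, "cmd": "READ"},
            {"CmdSN": 3, "cmd": "READ"}]
print(deliver_in_order(arrivals))   # ['READ', 'WRITE', 'READ']
```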
3.2 FCIP
FC SAN provides a high-performance infrastructure for localized data movement.
Organizations are now looking for ways to transport data over a long distance between
their disparate SANs at multiple geographic locations. One of the best ways to achieve
this goal is to interconnect geographically dispersed SANs through reliable, high-speed
links.
3.2.1 FCIP Protocol Stack
The FCIP protocol stack is shown in Figure: 3-9. Applications generate SCSI commands
and data, which are processed by various layers of the protocol stack. The FCIP layer
encapsulates the Fibre Channel frames onto the IP payload and passes them to the TCP
layer (see Figure 3-10). TCP and IP are used for transporting the encapsulated
information across Ethernet, wireless, or other media that support the TCP/IP traffic.
Fig 3.9: FCIP Protocol Stack
Fig 3.10 : FCIP Encapsulation
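The FCIP encapsulation and the gateway's decapsulation at the far end can be sketched as below; the header byte strings are placeholders, not real FCIP wire headers.

```python
# Sketch of FCIP: an entire FC frame (headers and all) becomes the payload
# of a TCP/IP packet crossing the WAN, and the far gateway recovers it.

WRAPPER = b"IP_HDR" + b"TCP_HDR" + b"FCIP_HDR"   # placeholder headers

def fcip_encapsulate(fc_frame: bytes) -> bytes:
    return WRAPPER + fc_frame                    # wrap the FC frame unchanged

def fcip_decapsulate(ip_packet: bytes) -> bytes:
    # the gateway at the far end strips the IP wrapper and forwards the
    # recovered FC frame into its local fabric
    assert ip_packet.startswith(WRAPPER)
    return ip_packet[len(WRAPPER):]

frame = b"FC_FRAME"
assert fcip_decapsulate(fcip_encapsulate(frame)) == frame
```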
3.2.2 FCIP Topology
In an FCIP environment, an FCIP gateway is connected to each fabric via a standard FC
connection (see Figure 3-11). The FCIP gateway at one end of the IP network
encapsulates the FC frames into IP packets. The gateway at the other end removes the
IP wrapper and sends the FC data to the layer 2 fabric.
Fig 3.11: FCIP Topology
3.2.3 FCIP Performance and Security
Performance, reliability, and security should always be taken into consideration when
implementing storage solutions. The implementation of FCIP is also subject to the
same considerations.
3.3 FCoE
Data centers typically have multiple networks to handle various types of I/O traffic: for
example, an Ethernet network for TCP/IP communication and an FC network for FC
communication. TCP/IP is typically used for client-server communication, data backup,
infrastructure management communication, and so on.
3.3.1 I/O Consolidation Using FCoE
The key benefit of FCoE is I/O consolidation. Figure 3-12 represents the infrastructure
before FCoE deployment. Here, the storage resources are accessed using HBAs, and the
IP network resources are accessed using NICs by the servers.
Fig 3.12: Infrastructure before using FCoE
Figure 3-13 shows the I/O consolidation with FCoE using FCoE switches and Converged
Network Adapters (CNAs). A CNA (discussed in the section “Converged Network
Adapter”) replaces both HBAs and NICs in the server and consolidates both the IP and
FC traffic. This reduces the requirement of multiple network adapters at the server to
connect to different networks. Overall, this reduces the requirement of adapters, cables,
and switches. This also considerably reduces the cost and management overhead.
Converged Network Adapter
A CNA provides the functionality of both a standard NIC and an FC HBA in a single adapter
and consolidates both types of traffic. As shown in Figure 3-14, a CNA contains separate
modules for 10 Gigabit Ethernet, Fibre Channel, and FCoE Application Specific Integrated
Circuits (ASICs).
Cables
Currently, two options are available for FCoE cabling: copper-based Twinax and standard
fiber optic cables. A Twinax cable is composed of two pairs of copper cables covered
with a shielded casing. A Twinax cable can transmit data at 10 Gbps over short
distances of up to 10 meters. Twinax cables require less power and are less
expensive than fiber optic cables. The Small Form Factor Pluggable Plus (SFP+) connector
is the primary connector used for FCoE links and can be used with both optical and copper
cables.
Fig 3.14: Converged Network Adapter
FCoE Switches
An FCoE switch has both Ethernet switch and Fibre Channel switch functionalities. The
FCoE switch has a Fibre Channel Forwarder (FCF), Ethernet Bridge, and set of Ethernet
ports and optional FC ports, as shown in Figure 3-15.
Fig 3.15: FCoE Switch Generic Architecture
3.3.3 FCoE Frame Structure
An FCoE frame is an Ethernet frame that contains an FCoE Protocol Data Unit. Figure 3.16
shows the FCoE frame structure. The first 48 bits in the frame specify the
destination MAC address, and the next 48 bits specify the source MAC address. The 32-bit
IEEE 802.1Q tag supports the creation of multiple virtual networks (VLANs) across a single
physical infrastructure. FCoE has its own Ethertype, as designated by the next 16 bits,
followed by the 4-bit version field. The next 100 bits are reserved and are followed by
the 8-bit Start of Frame and then the actual FC frame. The 8-bit End of Frame delimiter
is followed by 24 reserved bits. The frame ends with the final 32 bits dedicated to the
Frame Check Sequence (FCS) function that provides error detection for the Ethernet
frame.
Fig 3.16: FCoE Frame Structure
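The field widths above can be turned into a byte-level sketch. The FCoE Ethertype 0x8906 is the real assignment; the SOF/EOF codes, the sample addresses, and the use of ordinary CRC-32 for the FCS are simplifications for illustration.

```python
import struct
import zlib

# Byte-level sketch of the FCoE frame layout: 48-bit MACs, 32-bit 802.1Q tag,
# 16-bit Ethertype, 4-bit version + 100 reserved bits (13 bytes together),
# 8-bit SOF, FC frame, 8-bit EOF, 24 reserved bits, 32-bit FCS.

FCOE_ETHERTYPE = 0x8906

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, vlan_tag: int,
                     fc_frame: bytes) -> bytes:
    hdr = dst_mac + src_mac                      # 48 + 48 bits
    hdr += struct.pack(">I", vlan_tag)           # 32-bit IEEE 802.1Q tag
    hdr += struct.pack(">H", FCOE_ETHERTYPE)     # 16-bit FCoE Ethertype
    hdr += b"\x00" * 13                          # 4-bit version + 100 reserved bits
    body = b"\x28" + fc_frame + b"\x41"          # SOF, FC frame, EOF (sample codes)
    body += b"\x00" * 3                          # 24 reserved bits
    frame = hdr + body
    return frame + struct.pack(">I", zlib.crc32(frame))  # 32-bit FCS (CRC-32 here)

frame = build_fcoe_frame(b"\x01" * 6, b"\x02" * 6, 0x81000001, b"FCFRAME")
```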
FCoE Frame Mapping
The encapsulation of the Fibre Channel frame occurs through the mapping of the FC
frames onto Ethernet, as shown in Figure 3-17. Fibre Channel and traditional networks
have stacks of layers where each layer in the stack represents a set of functionalities.
Fig 3.17: FCoE Frame Mapping
3.3.4 FCoE Enabling Technologies
Conventional Ethernet is lossy in nature, which means that frames might be dropped or
lost during transmission. Converged Enhanced Ethernet (CEE), or lossless Ethernet,
provides a new specification to the existing Ethernet standard that eliminates the lossy
nature of Ethernet. This makes 10 Gb Ethernet a viable storage networking option, similar
to FC. Lossless Ethernet requires certain functionalities. These functionalities are defined
and maintained by the data center bridging (DCB) task group, which is a part of the IEEE
802.1 working group, and they are:
• Priority-based flow control
• Enhanced transmission selection
• Congestion Notification
• Data center bridging exchange protocol
Priority-Based Flow Control (PFC)
PFC provides a link level flow control mechanism. PFC creates eight separate virtual links
on a single physical link and allows any of these links to be paused and restarted
independently. PFC enables the pause mechanism based on user priorities or classes of
service. Enabling the pause based on priority allows creating lossless links for traffic, such
as FCoE traffic. This PAUSE mechanism is typically implemented for FCoE while regular
TCP/IP traffic continues to drop frames. Figure 3-18 illustrates how a physical Ethernet
link is divided into eight virtual links and allows a PAUSE for a single virtual link without
affecting the traffic for the others.
Fig 3.18 Priority Based Flow Control
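A toy model of PFC's per-priority pause behavior: one physical link, eight independently pausable virtual links.

```python
# Sketch of PFC: eight virtual links (priorities) share one physical link,
# and each can be paused and resumed without affecting the others.

class PFCLink:
    def __init__(self):
        self.paused = [False] * 8        # one flag per priority / virtual link

    def pause(self, priority):
        self.paused[priority] = True     # PAUSE only this traffic class

    def resume(self, priority):
        self.paused[priority] = False

    def can_send(self, priority):
        return not self.paused[priority]

link = PFCLink()
link.pause(3)                            # e.g. FCoE mapped to priority 3
print(link.can_send(3), link.can_send(0))   # False True
```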
Enhanced Transmission Selection (ETS)
Enhanced transmission selection provides a common management framework for the
assignment of bandwidth to different traffic classes, such as LAN, SAN, and Inter Process
Communication (IPC). When a particular class of traffic does not use its allocated
bandwidth, ETS enables other traffic classes to use the available bandwidth.
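A minimal sketch of the ETS idea: classes get guaranteed shares, and bandwidth a class does not use is lent to classes that want more. The class names and percentages are invented.

```python
# Sketch of ETS bandwidth assignment: grant each class the smaller of its
# guarantee and its demand, then lend the leftover to still-hungry classes.

def ets_allocate(guarantees, demand):
    # guarantees / demand: class -> share of link bandwidth (fractions of 1.0)
    grant = {c: min(guarantees[c], demand[c]) for c in guarantees}
    spare = 1.0 - sum(grant.values())
    hungry = [c for c in guarantees if demand[c] > grant[c]]
    for c in hungry:                     # redistribute unused bandwidth
        extra = min(spare / len(hungry), demand[c] - grant[c])
        grant[c] += extra
    return grant

print(ets_allocate({"LAN": 0.4, "SAN": 0.4, "IPC": 0.2},
                   {"LAN": 0.1, "SAN": 0.8, "IPC": 0.1}))
```

With these numbers, the idle LAN and IPC classes keep only what they use, and the SAN class borrows the spare capacity up to its full demand.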
Congestion Notification (CN)
Congestion notification provides end-to-end congestion management for protocols, such
as FCoE, that do not have built-in congestion control mechanisms. Link level congestion
notification provides a mechanism for detecting congestion and notifying the source to
move the traffic flow away from the congested links. Link level congestion notification
enables a switch to send a signal to other ports that need to stop or slow down their
transmissions. The process of congestion notification and its management is shown in
Figure 6-19, which represents the communication between the nodes A (sender) and B
(receiver).
3.4 Network Attached Storage
Network-attached storage (NAS) is a dedicated, high-performance file-sharing and
storage device. NAS enables its clients to share files over an IP network. NAS provides
the advantages of server consolidation by eliminating the need for multiple file servers.
It also consolidates the storage used by the clients onto a single system, making it
easier to manage the storage.
A NAS device uses its own operating system and integrated hardware and software
components to meet specific file-service needs. Its operating system is optimized for
file I/O and, therefore, performs file I/O better than a general-purpose server. As a
result, a NAS device can serve more clients than general-purpose servers and provide
the benefit of server consolidation.
3.4.1 General-Purpose Servers versus NAS Devices
A NAS device is optimized for file-serving functions such as storing, retrieving, and
accessing files for applications and clients. As shown in Figure 3.19, a general-purpose
server can be used to host any application because it runs a general-purpose operating
system. Unlike a general-purpose server, a NAS device is dedicated to file-serving. It
has a specialized operating system dedicated to file serving using industry-standard
protocols. Some NAS vendors support features, such as native clustering for high
availability.
Fig 3.19: General-Purpose Servers vs NAS Devices
3.4.2 Benefits of NAS
NAS offers the following benefits:
■ Comprehensive access to information: Enables efficient file
sharing and supports many-to-one and one-to-many configurations. The
many-to-one configuration enables a NAS device to serve many clients
simultaneously. The one-to-many configuration enables one client to
connect with many NAS devices simultaneously.
■ Improved efficiency: NAS delivers better performance compared
to a general-purpose file server because NAS uses an operating
system specialized for file serving.
■ Improved flexibility: Compatible with clients on both UNIX and
Windows platforms using industry-standard protocols. NAS is flexible and
can serve requests from different types of clients from the same
source.
■ Centralized storage: Centralizes data storage to minimize data
duplication on client workstations, and ensure greater data protection
■ Simplified management: Provides a centralized console that
makes it possible to manage file systems efficiently.
■ Scalability: Scales well with different utilization profiles and
types of business applications because of the high-performance
and low-latency design
■ High availability: Offers efficient replication and recovery options,
enabling high data availability. NAS uses redundant components
that provide maximum connectivity options. A NAS device
supports clustering technology for failover.
■ Security: Ensures security, user authentication, and file locking
with industry-standard security schemas
■ Low cost: NAS uses commonly available and inexpensive
Ethernet components.
■ Ease of deployment: Configuration at the client is minimal, because
the clients have the required NAS connection software built in.
3.4.3 File Systems and Network File Sharing
A file system is a structured way to store and organize data files. Many file systems
maintain a file access table to simplify the process of searching and accessing files.
3.4.3.1 Accessing a File System
A file system must be mounted before it can be used. In most cases, the operating system
mounts a local file system during the boot process. The mount process creates a link
between the file system on the NAS and the operating system on the client. When
mounting a file system, the operating system organizes files and directories in a tree-
like structure and grants the privilege to the user to access this structure. The tree is
rooted at a mount point. The mount point is named using operating system conventions.
Users and applications can traverse the entire tree from the root to the leaf nodes as file
system permissions allow. Files are located at leaf nodes, and directories and
subdirectories are located at intermediate nodes. Access to the file system terminates
when the file system is unmounted. Figure 7.2 shows an example of a UNIX directory
structure.
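The mounted tree described above can be modeled as nested dictionaries: directories are intermediate nodes, files are leaves, and traversal starts at the mount point. The paths are invented for the example.

```python
# Toy model of a mounted file-system view: the tree is rooted at the mount
# point, directories are intermediate nodes, and files sit at the leaves.

tree = {
    "/mnt/nas": {                         # mount point (root of the tree)
        "home": {"report.txt": None},     # None marks a file (leaf node)
        "shared": {"plan.doc": None},
    }
}

def list_files(node, path=""):
    files = []
    for name, child in node.items():
        full = f"{path}/{name}" if path else name
        if child is None:                 # leaf: a file
            files.append(full)
        else:                             # intermediate node: descend
            files.extend(list_files(child, full))
    return files

print(sorted(list_files(tree)))
```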
3.4.3.2 Network File Sharing
Network file sharing refers to storing and accessing files over a network. In a file-
sharing environment, the user who creates a file (the creator or owner of a file)
determines the type of access (such as read, write, execute, append, and delete) to
be given to other users and controls changes to the file. When multiple users try to
access a shared file at the same time, a locking scheme is required to maintain data
integrity and, at the same time, make this sharing possible. Some examples of file-
sharing methods are File Transfer Protocol (FTP), Distributed File System (DFS), client-
server models that use file-sharing protocols such as NFS and CIFS, and the peer-to-
peer (P2P) model.
FTP is a client-server protocol that enables data transfer over a network. An FTP
server and an FTP client communicate with each other using TCP as the transport
protocol. FTP, as defined by the standard, is not a secure method of data transfer
because it uses unencrypted data transfer over a network. FTP over Secure Shell
(SSH) adds security to the original FTP specification. When FTP is used over SSH, it
is referred to as Secure FTP (SFTP).
A distributed file system (DFS) is a file system that is distributed across several hosts.
A DFS can provide hosts with direct access to the entire file system, while ensuring
efficient management and data security. Standard client-server file-sharing
protocols, such as NFS and CIFS, enable the owner of a file to set the required type
of access, such as read-only or read-write, for a particular user or group of users.
Using these protocols, clients mount remote file systems that are available on
dedicated file servers.
A name service, such as Domain Name System (DNS), and directory services such
as Microsoft Active Directory, and Network Information Services (NIS), helps users
identify and access a unique resource over the network. A name service protocol
such as the Lightweight Directory Access Protocol (LDAP) creates a namespace, which
holds the unique name of every network resource and helps recognize resources on the
network.
A peer-to-peer (P2P) file sharing model uses a peer-to-peer network. P2P
enables client machines to directly share files with each other over a network.
Clients use a file sharing software that searches for other peer clients. This differs
from the client-server model that uses file servers to store files for sharing.
3.4.4 Components of NAS
A NAS device has two key components: NAS head and storage (see Figure 7.3).
In some NAS implementations, the storage could be external to the NAS device and
shared with other hosts. The NAS head includes the following components:
• CPU and memory
• One or more network interface cards (NICs), which provide connectivity to the
client network. Examples of network protocols supported by NIC include Gigabit
Ethernet, Fast Ethernet, ATM, and Fiber Distributed Data Interface (FDDI).
• An optimized operating system for managing the NAS functionality. It translates
file-level requests into block-storage requests and further converts the data
supplied at the block level to file data.
• NFS, CIFS, and other protocols for file sharing
• Industry-standard storage protocols and ports to connect and manage physical
disk resources.
3.4.5 NAS I/O Operation
NAS provides file-level data access to its clients. File I/O is a high-level request that
specifies the file to be accessed. For example, a client may request a file by specifying
its name, location, or other attributes. The NAS operating system keeps track of the
location of files on the disk volume and converts client file I/O into block-level I/O to
retrieve data. The process of handling I/Os in a NAS environment is as follows:
1. The requestor (client) packages an I/O request into TCP/IP and forwards it through
the network stack. The NAS device receives this request from the network.
2. The NAS device converts the I/O request into an appropriate physical storage
request, which is a block-level I/O, and then performs the operation on the physical
storage.
3. When the NAS device receives data from the storage, it processes and
repackages the data into an appropriate file protocol response.
4. The NAS device packages this response into TCP/IP again and forwards it to the client
through the network.
Figure 3.4.4 illustrates this process.
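The four steps can be sketched end-to-end as below; the file table, block layout, and protocol details are invented simplifications of what a real NAS head does.

```python
# Sketch of a NAS read: the client asks for a file by name, the NAS head
# translates the file I/O into block I/O, reads the blocks, and repackages
# the result as a file-protocol response. All data here is invented.

BLOCK_SIZE = 4

storage = {0: b"HELL", 1: b"O NA", 2: b"S!.."}          # block-level device
file_table = {"/share/hello.txt": [0, 1, 2]}            # file -> ordered block list

def nas_read(path):
    blocks = file_table[path]                   # step 2: file I/O -> block I/O
    data = b"".join(storage[b] for b in blocks) # read from physical storage
    return {"status": "OK", "data": data[:9]}   # steps 3-4: file-protocol reply

print(nas_read("/share/hello.txt"))
```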
3.4.6 NAS File-Sharing Protocols
Most NAS devices support multiple file-service protocols to handle file I/O requests to a
remote file system. As discussed earlier, NFS and CIFS are the common protocols for
file sharing. NAS devices enable users to share file data across different operating
environments and provide a means for users to migrate transparently from one
operating system to another.
3.4.6.1 NFS
NFS is a client-server protocol for file sharing that is commonly used on UNIX systems.
NFS was originally based on the connectionless User Datagram Protocol (UDP). It uses a
machine-independent model to represent user data. It also uses Remote Procedure Call
(RPC) as a method of inter-process communication between two computers. The NFS
protocol provides a set of RPCs to access a remote file system for the following
operations:
• Searching files and directories
• Opening, reading, writing to, and closing a file
• Changing file attributes
• Modifying file links and directories
Currently, three versions of NFS are in use:
• NFS version 2 (NFSv2): Uses UDP to provide a stateless network connection
between a client and a server. Features, such as locking, are handled outside
the protocol.
• NFS version 3 (NFSv3): The most commonly used version, which uses UDP or
TCP, and is based on the stateless protocol design. It includes some new
features, such as a 64-bit file size, asynchronous writes, and additional file
attributes to reduce refetching.
• NFS version 4 (NFSv4): Uses TCP and is based on a stateful protocol design. It
offers enhanced security. The latest NFS version 4.1 is the enhancement of NFSv4
and includes some new features, such as session model, parallel NFS (pNFS),
and data retention.
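The RPC style NFS uses can be sketched as a toy dispatcher whose procedure names mirror the operations listed above; the export path and data are invented, and real NFS RPCs are far richer.

```python
# Toy sketch of RPC-style file access as NFS uses it: the client names a
# remote procedure, and the server dispatches it against the exported
# file system. Paths and contents are invented for the example.

server_fs = {"/export/data/notes.txt": b"hello nfs"}

def nfs_rpc(proc, path, payload=b""):
    if proc == "LOOKUP":                  # does the file exist?
        return path in server_fs
    if proc == "READ":                    # return file contents
        return server_fs[path]
    if proc == "WRITE":                   # replace contents, report bytes written
        server_fs[path] = payload
        return len(payload)
    raise ValueError(f"unknown procedure: {proc}")

print(nfs_rpc("READ", "/export/data/notes.txt"))
```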
3.4.6.2 CIFS
CIFS is a client-server application protocol that enables client programs to make requests
for files and services on remote computers over TCP/IP. It is a public, or open, variation
of the Server Message Block (SMB) protocol.
• It uses file and record locking to prevent users from overwriting the work
of another user on a file or a record.
• It supports fault tolerance and can automatically restore connections and
reopen files that were open prior to an interruption. The fault tolerance features
of CIFS depend on whether an application is written to take advantage of these
features.
3.5 NAS Implementations
Three common NAS implementations are unified, gateway, and scale-out. The unified
NAS consolidates NAS-based and SAN-based data access within a unified storage
platform and provides a unified management interface for managing both the
environments.
In a gateway implementation, the NAS device uses external storage to store and
retrieve data, and unlike unified storage, there are separate administrative tasks for
the NAS device and storage.
The scale-out NAS implementation pools multiple nodes together in a cluster. A node
may consist of either the NAS head or storage or both. The cluster performs the NAS
operation as a single entity.
3.5.1 Unified NAS
Unified NAS performs file serving and storing of file data, along with providing access to
block-level data. It supports both CIFS and NFS protocols for file access and iSCSI and
FC protocols for block level access. Due to consolidation of NAS-based and SAN-based
access on a single storage platform, unified NAS reduces an organization’s infrastructure
and management costs.
A unified NAS contains one or more NAS heads and storage in a single system. NAS heads
are connected to the storage controllers (SCs), which provide access to the storage.
These storage controllers also provide connectivity to iSCSI and FC hosts. The storage
may consist of different drive types, such as SAS, ATA, FC, and flash drives, to meet
different workload requirements.
3.5.1.1 Unified NAS Connectivity
Each NAS head in a unified NAS has front-end Ethernet ports, which connect to the IP
network. The front-end ports provide connectivity to the clients and service the file I/O
requests. Each NAS head has back-end ports that provide connectivity to the storage
controllers.
iSCSI and FC ports on a storage controller enable hosts to access the storage directly
or through a storage network at the block level. Figure 7-5 illustrates an example of
unified NAS connectivity.
3.5.2 Gateway NAS
A gateway NAS device consists of one or more NAS heads and uses external and
independently managed storage. Similar to unified NAS, the storage is shared with
other applications that use block-level I/O. Management functions in this type of
solution are more complex than those in a unified NAS environment because there are
separate administrative tasks for the NAS head and the storage. A gateway solution
can use the FC infrastructure, such as switches and directors, for accessing SAN-attached
storage arrays or direct-attached storage arrays.
The gateway NAS is more scalable compared to unified NAS because NAS heads and
storage arrays can be independently scaled up when required.
3.5.2.1 Gateway NAS Connectivity
In a gateway solution, the front-end connectivity is similar to that in a
unified storage solution. Communication between the NAS gateway and the
storage system in a gateway solution is achieved through a traditional FC SAN. To
deploy a gateway NAS solution, factors, such as multiple paths for data,
redundant Implementation of both unified and gateway solutions requires
analysis of the SAN environment. This analysis is required to determine the
feasibility of combining the NAS workload with the SAN workload. Typically,
NAS workloads are random with small I/O sizes. Introducing sequential
workload with random workloads can be disruptive to the sequential workload.
Therefore, it is recommended to separate the NAS and SAN disks. Also,
determine whether the NAS workload performs adequately with the configured
cache in the storage system.
3.5.3 Scale-Out NAS
Both unified and gateway NAS implementations provide the capability to scale up their
resources based on data growth and rising performance requirements. Scaling up these
NAS devices involves adding CPUs, memory, and storage to the NAS device.
Scalability is limited by the capacity of the NAS device to house and use additional NAS
heads and storage.
Scale-out NAS enables grouping multiple nodes together to construct a clustered
NAS system. A scale-out NAS provides the capability to scale its resources by
simply adding nodes to a clustered NAS architecture. The cluster works as a
single NAS device and is managed centrally. Nodes can be added to the cluster,
when more performance or more capacity is needed, without causing any
downtime. Scale-out NAS provides the flexibility to use many nodes of moderate
performance and availability characteristics to produce a total system that has
better aggregate performance and availability. It also provides ease of use, low cost,
and theoretically unlimited scalability.
Scale-out NAS creates a single file system that runs on all nodes in the cluster. All
information is shared among nodes, so the entire file system is accessible by clients
connecting to any node in the cluster. Scale-out NAS stripes data across all nodes
in a cluster along with mirror or parity protection. As data is sent from clients to
the cluster, the data is divided and allocated to different nodes in parallel. When
a client sends a request to read a file, the scale-out NAS retrieves the appropriate
blocks from multiple nodes, recombines the blocks into a file, and presents the file
to the client. As nodes are added, the file system grows dynamically and data is
evenly distributed to every node. Each node added to the cluster increases the
aggregate storage, memory, CPU, and network capacity. Hence, cluster
performance also increases.
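The striping behavior described above can be sketched as follows. This is an illustrative model only: the round-robin block placement, block size, and node count are assumptions for demonstration, not the layout of any specific scale-out NAS product, and parity protection is omitted for brevity.

```python
BLOCK_SIZE = 4  # bytes per block; real systems use much larger blocks

def stripe(data: bytes, num_nodes: int):
    """Divide incoming data into blocks and distribute them round-robin
    across cluster nodes, as a scale-out NAS does on write."""
    nodes = [[] for _ in range(num_nodes)]
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        nodes[(i // BLOCK_SIZE) % num_nodes].append(block)
    return nodes

def reassemble(nodes):
    """Retrieve blocks from multiple nodes in round-robin order and
    recombine them into the original file, as on read."""
    out = b""
    positions = [0] * len(nodes)
    idx = 0
    while True:
        node = idx % len(nodes)
        if positions[node] >= len(nodes[node]):
            break
        out += nodes[node][positions[node]]
        positions[node] += 1
        idx += 1
    return out

data = b"scale-out NAS stripes data across all nodes"
nodes = stripe(data, 3)
assert reassemble(nodes) == data
```

Note how adding a node to the `stripe` call spreads the same data over more devices, which is why aggregate capacity and throughput grow as nodes are added.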
3.6 NAS File-Sharing Protocols
• Most NAS devices support multiple file-service protocols to handle file I/O
requests to a remote file system.
• Remote file systems enable an application that runs on a client computer to
access files stored on a different computer.
• NFS and CIFS are the common protocols for file sharing.
NFS
• NFS is a client-server protocol for file sharing that is commonly used on UNIX
systems.
• NFS was originally based on the connectionless User Datagram Protocol
(UDP).
• It uses a machine-independent model to represent user data.
• It also uses Remote Procedure Call (RPC) as a method of inter-process
communication between two computers.
The NFS protocol provides a set of RPCs to access a remote file system for
operations such as searching files and directories; opening, reading, writing to,
and closing a file; changing file attributes; and modifying file links and directories.
• NFS version 4 (NFSv4): Uses TCP and is based on a stateful protocol design.
It offers enhanced security.
• The latest version, NFS version 4.1 (NFSv4.1), enhances NFSv4 with new
features, such as a session model, parallel NFS (pNFS), and data retention.
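The machine-independent data representation mentioned above is what lets an NFS client and server with different native byte orders exchange data safely; NFS uses XDR (External Data Representation) for this, which encodes values in big-endian byte order padded to 4-byte boundaries. A minimal sketch of XDR-style encoding, using only Python's standard library (the function names are ours, not from any NFS implementation):

```python
import struct

def xdr_encode_int(value: int) -> bytes:
    """Encode a 32-bit signed integer in big-endian (network) byte order,
    as XDR does, so any machine decodes it identically."""
    return struct.pack("!i", value)

def xdr_encode_string(s: str) -> bytes:
    """Encode a string XDR-style: a 4-byte big-endian length, the bytes,
    then zero padding up to a 4-byte boundary."""
    raw = s.encode("utf-8")
    padding = (4 - len(raw) % 4) % 4
    return struct.pack("!I", len(raw)) + raw + b"\x00" * padding

# Both peers decode the same bytes regardless of native endianness.
assert xdr_encode_int(1) == b"\x00\x00\x00\x01"
assert xdr_encode_string("nfs") == b"\x00\x00\x00\x03nfs\x00"
```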
CIFS
• CIFS is a client-server application protocol that enables client programs to
make requests for files and services on remote computers over TCP/IP.
• It is a public, or open, variation of Server Message Block (SMB) protocol.
• The CIFS protocol enables remote clients to gain access to files on a server.
• CIFS enables file sharing with other clients by using special locks.
• Filenames in CIFS are encoded using Unicode characters.
CIFS provides the following features to ensure data integrity:
• It uses file and record locking to prevent users from overwriting the work of
another user.
• It supports fault tolerance and can automatically restore connections and
reopen files that were open prior to an interruption.
• CIFS is a stateful protocol because the CIFS server maintains connection
information regarding every connected client.
• If a network failure or CIFS server failure occurs, the client receives a
disconnection notification.
• User interruption is minimized if the application has the embedded intelligence
to restore the connection.
• However, if the embedded intelligence is missing, the user must take steps to
reestablish the CIFS connection.
Users refer to remote file systems with an easy-to-use file-naming scheme:
\\server\share or \\servername.domain.suffix\share.
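The naming scheme above can be handled with a small parser. This is a hypothetical helper for illustration (the server and share names in the examples are made up), showing how a UNC name splits into its server and share components:

```python
def parse_unc(path: str):
    """Split a UNC name of the form \\\\server\\share (or
    \\\\servername.domain.suffix\\share) into (server, share)."""
    if not path.startswith("\\\\"):
        raise ValueError("UNC names must start with two backslashes")
    parts = path[2:].split("\\")
    if len(parts) < 2 or not parts[0] or not parts[1]:
        raise ValueError("expected \\\\server\\share")
    return parts[0], parts[1]

# Hypothetical server and share names:
assert parse_unc(r"\\fileserver\projects") == ("fileserver", "projects")
assert parse_unc(r"\\nas01.corp.example.com\docs") == ("nas01.corp.example.com", "docs")
```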
3.7 Factors Affecting NAS Performance
• NAS uses IP network; therefore, bandwidth and latency issues associated with
IP affect NAS performance.
• Network congestion is one of the most significant sources of latency (Figure
7-8) in a NAS environment.
Other factors that affect NAS performance at different levels follow:
• Number of hops: A large number of hops can increase latency because IP
processing is required at each hop, adding to the delay caused at the router.
• Retransmission: Link errors and buffer overflows can result in
retransmission, which causes packets that have not reached the specified destination
to be re-sent. Care must be taken to match the speed and duplex settings of the
network devices and the NAS heads; improper configuration might result in
errors and retransmission, adding to latency.
• Overutilized routers and switches: The amount of time that an overutilized
device in a network takes to respond is always more than the response time of an
optimally utilized or underutilized device. Network administrators can view
utilization statistics to determine the optimum utilization of switches and routers in
a network. Additional devices should be added if the current devices are overutilized.
• File system lookup and metadata requests: NAS clients access files on
NAS devices. The processing required to reach the appropriate file or directory can
cause delays. Poor file system layout and an overutilized disk system can also
degrade performance.
• Overutilized NAS devices: Clients accessing multiple files can cause high
utilization levels on a NAS device, which can be determined by viewing utilization
statistics. High memory, CPU, or disk subsystem utilization levels can be caused by
a poor file system structure or insufficient resources in a storage subsystem.
• Overutilized clients: The client accessing CIFS or NFS data might also be
overutilized. An overutilized client requires a longer time to process the requests
and responses. Specific performance-monitoring tools are available for various
operating systems to help determine the utilization of client resources.
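The combined effect of the hop-count and retransmission factors above can be estimated with simple arithmetic. The sketch below is a rough illustrative model; the per-hop delay, base round-trip time, and retransmission rate are assumed example values, not measurements of any real network:

```python
def estimated_latency_ms(hops: int, per_hop_ms: float,
                         base_rtt_ms: float, retransmit_rate: float) -> float:
    """Rough one-request latency estimate: the base round-trip time plus
    IP processing delay at each hop, scaled up by expected retransmissions
    (each retransmitted packet costs roughly one extra traversal)."""
    one_traversal = base_rtt_ms + hops * per_hop_ms
    return one_traversal * (1 + retransmit_rate)

# Same path parameters, differing only in hop count (illustrative numbers):
few_hops = estimated_latency_ms(3, 0.5, 2.0, 0.01)   # 3 hops
many_hops = estimated_latency_ms(8, 0.5, 2.0, 0.01)  # 8 hops
assert many_hops > few_hops  # more hops means more per-hop processing delay
```

Even this crude model shows why reducing hop count and fixing the error sources behind retransmission are among the first steps when tuning NAS response times.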